Given how Reddit now makes money by selling its data to AI companies, I was wondering what the situation is for the fediverse. Typically you can block AI crawlers using robots.txt (The Verge reported on it recently: https://www.theverge.com/24067997/robots-txt-ai-text-file-web-crawlers-spiders). But this only works per domain/server, and the fediverse is about many different servers interacting with each other.
So if my kbin/lemmy or Mastodon server blocks OpenAI’s crawler via robots.txt, what does that even mean when people on other servers that don’t block this crawler boost my posts on Mastodon, or when I reply to their posts? Am I right that unless all the servers I interact with block the same AI crawlers, I can’t prevent my posts from being used as AI training data?
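For reference, this is roughly what the per-domain blocking looks like: a minimal robots.txt sketch using OpenAI’s documented GPTBot user agent (Google’s AI-training token is Google-Extended). It only covers the one server it sits on, which is exactly the limitation I’m asking about.

```
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```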
I don’t object to my content being used for training. I do object to Reddit profiting from that data. It’s the reason I basically don’t participate on Reddit anymore. Anything I post in the fediverse, I’m aware I’m offering up for free to be crawled and used as others see fit, as long as it isn’t monetized without my consent. I don’t consider model training to be monetization.
Fair reason for not participating in Reddit. I would argue, though, that while model training is not monetization per se, with the “AI as a platform” rationale promoted by OpenAI, Google and others, there is a very direct link between model training and monetization. Monetization without your consent, especially when these companies refuse to reveal the sources of their training data. I wouldn’t be surprised if GPT-4 or Gemini have already been trained on your Fediverse posts, or will be in the near future.
Agreed, but it bugs me that I need to pay Reddit to not see ads, and on top of that they get paid for the content we produce. The fediverse is a better model.
We’re sick of closed walled-garden monoliths like Reddit! Let’s move to an open federated protocol where anyone can participate and the APIs can’t be locked down!
…wait, not like that!
Yeah. This is what you signed up for when you joined the Fediverse: the ActivityPub protocol broadcasts your content to any other server that asks for it. And generally, that’s how the Internet works. You’re putting up a public billboard and expecting to be able to control who gets to look at it. That’s not going to work. Even robots.txt is just a gentleman’s agreement; it’s not enforceable.
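To make the “public billboard” point concrete, here’s a minimal sketch of how any HTTP client, crawler included, can pull a public post straight off an instance as ActivityPub JSON; the post URL is a made-up placeholder, substitute any public Mastodon post:

```python
# Minimal sketch: any HTTP client can fetch a public post as an ActivityPub object.
# The post URL below is a hypothetical placeholder.
import requests

post_url = "https://mastodon.example/@alice/112233445566778899"

resp = requests.get(
    post_url,
    headers={"Accept": "application/activity+json"},  # ask for the ActivityStreams JSON
    timeout=10,
)
resp.raise_for_status()
note = resp.json()

# A public Note is addressed to "as:Public", which is what makes it world-readable.
print(note.get("type"), note.get("attributedTo"))
print(note.get("content"))
```

No API keys, no login, no federation handshake: if a post is public, anyone who asks gets it.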
If you really want to prevent AI from training on your content with any degree of certainty, you’re probably looking for a private forum of some kind that’s run by someone you trust.
I don’t expect anything; I was merely asking a question to clarify this.
Well, I hope my answer clarifies it. You can’t prevent LLMs from being trained on your public posts.
You are correct. Some of the largest instances block bot traffic, but most don’t, meaning your posts have been seen by AI crawlers and will continue to be.
Short of not participating in federation and only discussing things within a private non-federated community on a personal instance or something, I don’t think there’s a way to prevent it.
Thanks for confirming. It’s unfortunate that people who are outraged about Reddit selling their data to AI companies don’t really have an alternative in the fediverse.
I guess the best hope is for new mechanisms to control AI crawlers to emerge, so they can be blocked per user rather than per domain. Maybe https://spawning.ai will come up with something. One can hope.