
Posts: 0 · Comments: 165 · Joined: 3 yr. ago

  • People do not consume and process data the same way an AI model does. So how humans learn is irrelevant, because AIs don’t learn the way humans do. This isn’t repurposing work, it’s using work in a way the copyright holder doesn’t allow, just like copyright holders are allowed to prohibit commercial use.

  • Copyright violation is stealing

  • Yes it is. Moralize it all you want, but it’s still theft

  • Copying copyright protected data is theft AND stealing

    Edit: this also applies to my stance on piracy, which I don’t engage in for the same reason. It’s theft

  • I could say the same about you, considering I’ve watched you peddle false information for months about this subject.

    AI learns differently than humans. That isn’t a fact up for debate. That’s one of the few objective truths around this industry.

  • Backed by technical facts.

    AIs fundamentally process information differently than humans. That’s not up for debate.

  • That does nothing to solve the problem of data being used without consent to train the models. It doesn’t matter if the model is FOSS if it stole all the data it trained on.

  • AI does not “read books” and it’s completely disingenuous to compare them to humans that way.

  • It is stealing data. In order to train on it they have to store the data. That’s a copyright violation. There’s no way to interpret it as not stealing data.

  • No

    Why are you entitled to use everyone else’s work? It should be secured in law that licensing applies to training data, to avoid frivolous discussions like this. Then it’s an entirely opt-in solution, which works to the benefit of everyone except the people stealing data.

    Output doesn’t matter since it’s pretty well settled it’s not derivative work (as much as I disagree with that statement).

  • You don’t need to prove a financial difference. They are fundamentally different systems that function in different ways. They cannot be compared 1:1 and laws cannot be applied as a 1:1. New regulations need to be added around AI use of copyrighted material.

  • Yes you would need permission. Just because you’re a hobbyist doesn’t mean you’re exempt from needing to follow the rules.

    As soon as it goes beyond a completely offline, personal, non-replicable project, it should be subject to the same copyright laws.

    If you purely create a data agnostic AI model and share the code, there’s no problem, as you’re not profiting off of the training data. If you create an AI model that’s available for others to use, then you’d need to have the licensing rights to all of the training data.

  • Training is theft imo. You have to scrape and store the training data, which amounts to copyright violation based on replication. It’s an incredibly simple concept. The model isn’t the problem here, the training data is.

  • Twitter is literally being dismantled. It’s a shell of what it previously was, and no longer works for organizing protests and other counter-cultural activity, because billionaires bought it and are taking it apart piece by piece.

    The whole point of Twitter was reach. None of the other platforms have the same reach.

    Also, the other platforms are doing their best to follow Twitter’s lead. So yes, it’s being dismantled.

  • Also an “AI” is not human, and should not be regulated as such

  • And why is that a bad thing?

    Why are you entitled to other people’s work, just because “it’s hard to find data”?

  • You can make these models just fine using licensed data. So can any hobbyist.

    You just can’t steal other people’s creations to make your models.

  • Too bad

    Why do they have free rein to store and use copyrighted material as training data? AIs don’t learn as a human would, and comparisons can’t be made between the learning processes.