For business use, laptops without powerful graphics cards have been the norm for quite some time. Do you see businesses deciding to switch to desktops to accommodate the power needs of local models? I think it's pretty optimistic to expect laptops to be that powerful in the next 5 years. The advancement in chip capability has dramatically slowed, and to put these chips in laptops they'd need to be dramatically more power-efficient as well.
I've seen this argument way too often, and it's completely pointless. Arguing that this will succeed because something in the past succeeded is exactly the same as arguing it will fail because something in the past failed.
If you want to draw the conclusion that they're similar enough to use history as a predictor, you'll have to show that they're similar and make a case for why those similarities are relevant.
I haven't seen anyone making this argument bother with that exercise, but I have seen people who actually look at the economics discuss why they're different animals.
There is also the tech itself.
The internet - connects everything together across vast distances. Obvious, limitless possibilities.
Smartphones (you didn't mention these here, but they're the other example people use for this argument most frequently) - anything a computer can do, in the palm of your hand.
LLMs - can do some powerful stuff, like rifling through and summarizing text, generating text, or generating code... except you can't really trust them to do any of these things accurately, and that's a fundamental aspect of how the technology works rather than something that can be fixed, so they can't be used responsibly for anything critical.
The FAANG companies that are in on the LLM hype are still lighting money on fire with their LLM endeavors, so I fail to see how the point that they may be otherwise profitable is relevant.
You opened by saying that using a federated social media site somehow naturally means someone also supports using that site to train AI. My whole point in this entire thread is that you are drawing a false conclusion, because there are plenty of people who clearly don't agree.
You just spew the same unrelated junk over and over because you can't back up your ridiculous assertion.
I don't see any point in continuing because you're clearly tripling down, but you really should respond to what is actually being said to you if you're going to respond at all.
It isn't about what is currently legal under the law! People can discuss how they would prefer society to work, and they should! That's what was happening in this thread, and that's why your attempts to shove in your "well actually this system is federated and it's not illegal" are pointless and unwanted. You're not bringing anything to the conversation because, apparently, you can't even tell what the conversation is about.
Ok? That doesn't mean that everyone has to agree that AI companies should be allowed to train on the data. Are you seriously so dense you can't distinguish between technology and social issues?
PS: I very obviously didn't say you support rape; I drew the very obvious comparison to what you're saying. Use your head for two seconds.
If you understand, then you should be able to understand that your "they were dressed like they wanted it"-level bullshit argument is completely unnecessary.
It doesn't matter how many pithy analogies you make. You need to recognize the difference between "I know they're scraping this website because they can" and "I don't think they should be allowed to scrape this website". You're arguing that they're incompatible when they're not.
Participating in a public forum that has no technical means of preventing its data from being used by a particular class of actor does not preclude holding the opinion that that class of actor should face rules about what data they are allowed to use.
Someone using a lightbulb isn't a hypocrite for saying it's irresponsible to use the resources needed to power a supercomputer, and someone using the internet isn't a hypocrite for saying the absurdly higher resource usage of LLM inference is irresponsible.
You can call people hypocrites all day if you pretend that scale isn't a concept, but you're obviously wrong.
"AI" has a massive inability (or is purposefully deceptive) to distinguish the difference between bugs, which can be fixed, and fundamental aspects of the technology that disqualify it from various applications.
I think the more likely story is that they know this can be done, know about this particular jailbreak person, can replicate their work (because they didn't do anything they hadn't already done with previous models), and are straight up lying, betting that the people who matter to their next investment round (scam continuation) won't catch wind.
Are you trying to say that making a series of HTTP requests to view a website is even remotely equivalent in energy usage to running inference with an LLM???