Of course, in any other application, keeping instructions and data separate is very important. An SQL injection attack, for example, is when you're able to sneak instructions in where data is supposed to go, and then you can delete the entire database if you want. But with LLMs the distinction doesn't exist.
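To make the instructions/data split concrete, here's a minimal sketch using Python's sqlite3 (the `users` table and the payload string are made up for illustration): the parameterized query keeps user data in a channel the database never interprets as SQL, which is exactly the separation an LLM prompt doesn't have.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice'; DROP TABLE users; --"

# Vulnerable version (don't do this): the user's data gets spliced
# straight into the instruction channel. sqlite3 happens to refuse
# multi-statement strings, but on databases that allow them this is
# the classic "delete the entire database" hole.
# conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

# Parameterized version: the ? placeholder keeps the data in its own
# channel, so the payload is treated as a plain (weird) string.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- no match, and the users table is still there
```

With an LLM there's no equivalent of that `?` placeholder: system prompt, user message, and any retrieved documents all land in the same token stream, so "data" can always smuggle in "instructions."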
Literally what it was doing to me a couple of hours ago.
I wanted a sample QR code image to toss into a mock-up, and the first thing that popped up was the "AI answer" telling me about different places I could get a QR code, and I'm like bro, I'm already on a search engine, there's an 'images' tab right there, which contains what I'm looking for.
I saw a story recently where a guy spent some time with a customer service chatbot, and ended up convincing it to give him 80% off, and then ordered like $6000 of stuff.
LLMs just don't produce reliable/predictable output; it's much easier for the user to get them to go off the rails.
It can be both, yes. I just think it's less about Epstein stuff and more about other stuff: profits, for sure, and geopolitics. And it can serve as a distraction from all the other shit that's going on, like ICE and their ethnic cleansing campaign. They're just doing everything at once, so it's near impossible for a busy person to keep up.
I was listening to a podcast about dead internet theory the other day, and they talked about some studies finding that the internet in general is nearly 50% bot activity now. I'm sure twitter is much worse than the average.
This is complete insanity. They clearly have no idea how to implement effective safeguards.