The output is still slop, no matter if it’s local or oligarch-owned.
It gives you access to information in an extremely efficient way, though. Before AI, I was often scrolling through hundreds of forum posts to find the solution to my problems; now I just ask AI and get a straightforward answer. It's a great efficiency tool in general and raises your overall skill set. Sure, it won't replace highly skilled people yet, but usually those people are only very skilled in one area. AI lets people raise their baseline of skills, so anyone can code, write, gather information, etc.
Imo the problem lies in what big corps and governments will do with it and that will fuck us heavily.
LLMs provide about as much information as a parrot repeating the words it hears most often.
It’s a terrible, terrible “source” of information that will lead to an insane number of misinformed people.
It seems that you haven’t put your own hands on it yet; there’s so much more than just ChatGPT. You should definitely try out Perplexity: it made search engines obsolete for me, and the information is neutral, up to date and on point, at least most of the time. Rarely do you have to push it or give it more information. Sadly it’s big corp and closed source, but you can self-host with Perplexica and local AI models running on your own hardware. Loathing it is one thing, but completely ignoring it is another; you shouldn’t hate what you don’t know.
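For the curious, a minimal sketch of what "local AI models running on your own" can look like: local runners such as Ollama expose an OpenAI-compatible chat endpoint on localhost. The port, model name, and system prompt below are assumptions for illustration, not anything described in this thread.

```python
import json
import urllib.request

# Assumed local endpoint: Ollama-style servers typically listen on
# localhost:11434 and speak the OpenAI chat-completions format.
LOCAL_URL = "http://localhost:11434/v1/chat/completions"


def build_payload(question: str, model: str = "llama3") -> dict:
    """Build an OpenAI-style chat request for a local model."""
    return {
        "model": model,  # hypothetical model name; use whatever you pulled
        "messages": [
            {"role": "system",
             "content": "Answer briefly and list the sources you relied on."},
            {"role": "user", "content": question},
        ],
        "stream": False,
    }


def ask(question: str) -> str:
    """POST the question to the local server and return the reply text."""
    req = urllib.request.Request(
        LOCAL_URL,
        data=json.dumps(build_payload(question)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Requires a local server to actually be running.
    print(ask("What is Perplexica?"))
```

Nothing in this sketch leaves your machine; that is the whole point of the self-hosting argument above.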
You are assuming too much.
There’s not much room for interpretation here.
If you think that’s wrong - you’re wrong.
How is an AI that is used to search the web for information, that crawls through studies, forums, etc., a terrible source? It basically isn’t a source at all; it just gathers information from different sources. The problem lies in the alignment and sponsorification of it, and I would agree if this were about ChatGPT, but that wasn’t the point.
It might gather information from all those sources (with or without consent), but what it returns is no more credible than a story from a granny in your local market.
ONLY if you prompt it to return links, and you read the information in those links yourself, only then have you read information from the source.
It has already been proven that LLMs are bad at summarising, the one thing techbros have been pushing them for. They’re bad at summarising, bad at coding, bad at math and fucking terrible at image making.
There’s a reason the output is called ‘Slop’. And rightfully so.