It’s AI, not AGI. LLMs are good at generating language, just like chess engines are good at chess. ChatGPT doesn’t have the capability to keep track of all the pieces on the board.
They’re literally selling credulous investors on the idea that AGI is around the corner, when this, and to a lesser extent Large Action Models, is the only viable product they’ve got. It’s just a demo of how far they are from their promises.
Is there a link where I can see them making these claims for myself? This is something I’ve only heard from AI critics, never directly from the AI companies themselves. I wouldn’t be surprised if they did, but I’ve just never seen them say it outright.
“We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies” https://blog.samaltman.com/reflections
“We fully intend that Gemini will be the very first AGI” https://venturebeat.com/ai/at-google-i-o-sergey-brin-makes-surprise-appearance-and-declares-google-will-build-the-first-agi/
“If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it’s probably next year, within two years” -Elon Musk https://www.reuters.com/technology/teslas-musk-predicts-ai-will-be-smarter-than-smartest-human-next-year-2024-04-08/
Thanks.
Well, I don’t think OpenAI knows how to build AGI, so that’s false. Otherwise, Sam’s statement there is technically correct, but kind of misleading - he talks about AGI and then, in the next sentence, switches back to AI.
Sergey’s claim that they will achieve AGI before 2030 could turn out to be true, but again, he couldn’t possibly know that. I’m sure it’s their intention, but that’s different from reality.
Elon’s statement doesn’t even make sense. I’ve never heard anyone define AGI like that. A thirteen-year-old with an IQ of 85 is generally intelligent. Being smarter than the smartest human definitely qualifies as AGI, but that’s just a weird bar. General intelligence isn’t about how smart something is - it’s about whether it can apply its intelligence across multiple unrelated fields.
LLMs would be great as an interface to more specialized machine learning programs in a combined platform. We need AI to perform tasks humans aren’t capable of, instead of replacing them.
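To make that concrete, here’s a minimal sketch of the kind of platform I mean, with everything hypothetical: call_llm stands in for whatever chat-completion API you like, and the backends are stubs for real special-purpose systems (a chess engine like Stockfish, a solver, whatever). The LLM only decides where a request goes; the specialist actually does the work.

```python
# Hypothetical sketch: an LLM as the natural-language front end,
# routing requests to specialized backends instead of answering itself.

from typing import Callable

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; here it just names a tool.

    A real implementation would ask the model to classify the request,
    e.g. "Which tool handles this: chess or chat?"
    """
    if "chess" in prompt.lower():
        return "chess"
    return "chat"

def chess_engine(request: str) -> str:
    """Stub for a dedicated engine that actually tracks the board."""
    return f"[chess engine] best move for: {request!r}"

def small_talk(request: str) -> str:
    """Fallback: let the LLM do what it is genuinely good at -- language."""
    return f"[llm] conversational reply to: {request!r}"

# Registry of specialist backends the LLM can route to.
TOOLS: dict[str, Callable[[str], str]] = {
    "chess": chess_engine,
    "chat": small_talk,
}

def handle(request: str) -> str:
    tool = call_llm(request)  # the LLM decides *where* the request goes
    return TOOLS.get(tool, small_talk)(request)  # the specialist does the work

if __name__ == "__main__":
    print(handle("What's the best chess move after 1. e4 e5?"))
    print(handle("Summarize this paragraph for me."))
```

The point of the design is that the LLM never has to be the chess engine; it just has to recognize a chess question when it sees one, which is exactly the language task it’s already good at.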