It’s because of all the people saying that LLMs can reason and think, that the human brain works just like an LLM, and… some other ridiculous claim.
This shows some limitations on LLMs.
Human brains lose to computerized chess all the time, though. So I guess this is a win for AI tech bros?
Why the special qualifier of “computerized” chess? Do humans regularly lose to Ataris at chess? LLMs are computerized too.
It depends on the human.
I’d wager children would lose quite often.
I meant a specialized application, like the Atari one that beat the LLM.
But humans not trained (made) for chess would make stupid mistakes too
Why are so many people mad when it’s pointed out that the shitty chatbots are just shitty chatbots?
Now apply this to like, everything else ever.
Machine designed to convincingly fake human internet conversation sucks at ____________!
ChatGPT can’t make a rug as well as a 300-year-old loom.
I knew there would be these kinds of comments making this obvious point. This is just a demo of how these language models are not going to achieve the “General” part of AGI. It’s going to take a new paradigm
Too many people forget that specialized, purpose-driven software is often far more effective and efficient. LLMs and other AI are nice when you don’t have a properly defined spec or a flexible algorithm, but you pay, literally, for the convenience.
40-year-old machine designed to play chess*
I think people in the replies acting fake surprised are missing the point.
it is important news, because many people see LLMs as black boxes of superintelligence (almost as if that’s what they’re being marketed as!)
you and i know that’s bullshit, but the students asking chatgpt to solve their math homework instead of using wolfram alpha don’t.
so yes, it is important to demonstrate that this “artificial intelligence” is so much not an intelligence that it’s getting beaten by 1979 software on 1977 hardware
This is useful for dispelling the hype around ChatGPT and for demonstrating the limits of general purpose LLMs.
But that’s about it. This is not a “win” for old-school game engines vs. new ones. Modern Stockfish uses a neural network (NNUE) for its evaluation and is one of the strongest chess engines in the world.
EDIT: what would be actually interesting would be to see if GPT could be fine-tuned to play chess. Which is something many people have been doing: https://scholar.google.com/scholar?hl=en&q=finetune+gpt+chess
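For anyone curious what that looks like in practice, here’s a minimal sketch, not taken from any of the linked papers, of continuing GPT-2’s training on chess games written out as plain-text move sequences. The `games.txt` file (one space-separated SAN game per line) is a hypothetical input you’d have to prepare yourself, and the hyperparameters are placeholders.

```python
# Minimal sketch: fine-tune GPT-2 on chess games serialized as text.
# Assumes a hypothetical games.txt with one game per line, e.g.
# "e4 e5 Nf3 Nc6 Bb5 a6 Ba4 Nf6 ..."
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

dataset = load_dataset("text", data_files={"train": "games.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-chess-sketch",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Even fine-tuned like this, the model is still just predicting the next plausible token; it may get much better at producing legal-looking moves, but it isn’t searching positions the way an engine does.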
In other news, my toaster absolutely wrecked my T.V. at making toast.
A chess-specific algorithm beat a language model at chess. Shocking!
Try training a chess model. Actually, I think it’s already been done; machines have been consistently better at chess than humans for a while now.
It’s AI, not AGI. LLMs are good at generating language, just like chess engines are good at chess. ChatGPT doesn’t have the capability to keep track of all the pieces on the board.
LLMs would be great as an interface to more specialized machine learning programs in a combined platform. We need AI to perform tasks humans aren’t capable of instead of replacing them.
They’re literally selling credulous investors on the idea that AGI is around the corner, when this (and, to a lesser extent, Large Action Models) is the only viable product they’ve got. It’s just a demo of how far they are from their promises.
Is there a link where I could see them making these claims myself? This is something I’ve only heard from AI critics, but never directly from the AI companies themselves. I wouldn’t be surprised if they did, but I’ve just never seen them say it outright.
“We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies” https://blog.samaltman.com/reflections
“We fully intend that Gemini will be the very first AGI” https://venturebeat.com/ai/at-google-i-o-sergey-brin-makes-surprise-appearance-and-declares-google-will-build-the-first-agi/
“If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it’s probably next year, within two years” -Elon Musk https://www.reuters.com/technology/teslas-musk-predicts-ai-will-be-smarter-than-smartest-human-next-year-2024-04-08/
Thanks.
Well, I don’t think OpenAI knows how to build AGI, so that’s false. Otherwise, Sam’s statement there is technically correct, but kind of misleading - he talks about AGI and then, in the next sentence, switches back to AI.
Sergey’s claim that they will achieve AGI before 2030 could turn out to be true, but again, he couldn’t possibly know that. I’m sure it’s their intention, but that’s different from reality.
Elon’s statement doesn’t even make sense. I’ve never heard anyone define AGI like that. A thirteen-year-old with an IQ of 85 is generally intelligent. Being smarter than the smartest human definitely qualifies as AGI, but that’s just a weird bar. General intelligence isn’t about how smart something is - it’s about whether it can apply its intelligence across multiple unrelated fields.
How did AlphaGo do?
I’m shocked! — shocked to find that LLMs aren’t superhuman intelligences that will soon enslave us all. Other things they’re not good at:
- Summarizing news articles. Instead of an actual summary they’ll shorten the text by just leaving things out, without any understanding of which parts are important.
- Answering questions about anything controversial. Based on subtle hints in the wording of your question they’ll reflect your own biases back at you.
- Answering questions about well-known facts. Seemingly at random, when your question isn’t phrased exactly the right way, they’ll start hallucinating and make up plausible bullshit in place of actual answers.
- Writing a letter. They’ll use the wrong tone, use language that is bland and generic to a degree that makes it almost offensive, and if you care about quality the whole thing will need so much re-writing that it’s quicker to do it yourself from the start.
- Telling jokes. They don’t really get humour. Their jokes tend to have things that superficially look as if they should be punchlines but aren’t funny at all.
- Writing computer code. Correcting their mistakes is even more laborious in computer languages. Most of the time they’re almost as bad at it as they are at playing chess.
Still, they are amazingly clever in some ways and pretty good for coming up with random ideas when you’ve got writer’s block or something.
Although the chatbot had been given a “baseline board” to learn the game and identify pieces, it kept mixing up rooks and bishops, misread moves, and “repeatedly lost track” of where its pieces were. To make matters worse, as Caruso explained, ChatGPT also blamed Atari’s icons for being “too abstract to recognize” — but when he switched the game over to standard notation, it didn’t perform any better.
For an hour-and-a-half, ChatGPT “made enough blunders to get laughed out of a 3rd grade chess club” while insisting over and over again that it would win “if we just started over,” Caruso noted. (And yes, it’s kind of creepy that the chatbot apparently referred to itself and the human it was interfacing with as “we.”)
It’s fucking insane it couldn’t keep track of a board…
And it’s concerning how confident it is that it will work, because the idiots asking it stuff will believe it. It’ll keep failing and keep saying next time it will work, because it’s built to maximize engagement.
Spatial reasoning has always been a weakness of LLMs. Other symptoms include the inability to count and no concept of object permanence.
Yeah, but it’s chess…
The LLM doesn’t have to imagine a board; if you feed it the rules of chess and the dimensions of the board, it should be able to “play in its head”.
For a human to have that kind of working memory would take a genius-level intellect and years of practice at the game.
But human working memory is shit compared to virtually every other animal’s. This and processing speed are supposed to be AI’s main draw.
It doesn’t have a head like that. It places things in a conceptual space, not a numerical space. To it, a number is just an adjective, like a colour. It is learning to play chess by looking for language-like patterns in the game’s transcript. It is never attempting to model the contents of the board in its “mind”.
LLMs can be good at openings. Not because they’re thinking through the rules or planning strategies, but because opening moves show up constantly in general training data from all sorts of sources. The model is copying the most probable reaction to your move, based on lots of documentation. This of course breaks down when you stray from a typical play style, because it has fewer probable options to choose from, and after only a few moves there won’t be any matches at all, since the number of possible games is enormous.
I.e., there are no calculations involved. When you play an LLM at chess, you’re playing against a list of common moves from chess history.
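To make that concrete, here’s a toy sketch of what “playing a list of common moves” amounts to: count which move most often followed the current move sequence in some training games, and just play that. The games here are invented for illustration; there’s no board, no rules, and no legality check, which is exactly why it falls apart as soon as the game leaves the “book”.

```python
# Toy "opening book by imitation": no board, no rules, just frequency counts.
from collections import Counter, defaultdict

games = [  # invented stand-in for training data: SAN move lists
    "e4 e5 Nf3 Nc6 Bb5 a6".split(),
    "e4 e5 Nf3 Nc6 Bc4 Bc5".split(),
    "e4 c5 Nf3 d6 d4 cxd4".split(),
    "d4 d5 c4 e6 Nc3 Nf6".split(),
]

# For every move-sequence prefix, count what move came next.
next_move = defaultdict(Counter)
for game in games:
    for i in range(len(game) - 1):
        next_move[tuple(game[:i + 1])][game[i + 1]] += 1

def most_probable_reply(moves_so_far):
    counts = next_move.get(tuple(moves_so_far))
    if not counts:
        return None  # off-book: nothing to copy
    return counts.most_common(1)[0][0]

print(most_probable_reply(["e4"]))               # "e5" -- looks competent
print(most_probable_reply(["e4", "e5", "Nf3"]))  # "Nc6"
print(most_probable_reply(["a4"]))               # None -- unseen line, stuck
```

A real LLM is fuzzier than this exact-prefix lookup, but the failure mode is the same: once the position is unfamiliar, there’s nothing sensible left to copy.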
An even simpler example would be to tell the LLM that its last move was illegal. Even knowing the rules you just told it, it will agree and take it back. This comes from being trained to give satisfying replies to a human prompt.
> The LLM doesn’t have to imagine a board; if you feed it the rules of chess and the dimensions of the board, it should be able to “play in its head”.
That assumes it knows how to play chess. It doesn’t. It knows how to have a passable conversation. Asking it to play chess is like putting bread into a blender and being confused when it doesn’t come out toasted.
> But human working memory is shit compared to virtually every other animal’s. This and processing speed are supposed to be AI’s main draw.
Processing speed and memory in the context of writing. Give it a bunch of chess boards or chess notation and it has no idea what it needs to remember, let alone where or how to move. If you want an AI to play chess, you train it on chess gameplay, not books and Reddit comments. AI isn’t a general-use tool.
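For what it’s worth, the usual workaround when people do put an LLM in a chess loop is exactly what this thread keeps pointing at: keep the board and the rules outside the model. A rough sketch, assuming the python-chess package and a hypothetical ask_llm_for_move() function wrapping whatever chatbot API you like:

```python
# Guardrail sketch: the chess library owns the board state and the rules;
# the chatbot only picks from moves that are actually legal right now.
# Assumes `pip install chess` and a hypothetical ask_llm_for_move() callable.
import chess

def play_llm_move(board: chess.Board, ask_llm_for_move) -> chess.Move:
    legal_sans = [board.san(m) for m in board.legal_moves]
    # The model is shown the exact position (FEN) plus the legal options,
    # so it never has to "remember" where the pieces are between turns.
    suggestion = ask_llm_for_move(board.fen(), legal_sans)
    if suggestion in legal_sans:
        return board.push_san(suggestion)
    # If it hallucinates anyway, play any legal move rather than trusting
    # the model's bookkeeping.
    fallback = next(iter(board.legal_moves))
    board.push(fallback)
    return fallback
```

That doesn’t make it play well; it just stops the illegal moves and the “losing track of its own pieces” part.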
> if you feed it the rules of chess and the dimensions of the board, it should be able to “play in its head”.
You’d save a lot of time typing, if you spent a little more reading…
You seem to be missing what I’m saying. Maybe a biological comparison would help:
An octopus is extremely smart, more so than even most mammals. It can solve basic logic puzzles, learn and navigate complex spaces, and plan and execute different, adaptive strategies to hunt prey. In spite of this, it can’t talk or write. No matter what you do, training it, trying to teach it, or even trying to develop an octopus-specific language, it will not be able to understand language. This isn’t because the octopus isn’t smart, it’s because it evolved for the purpose of hunting food and hiding from predators. Its brain has developed to understand how physics works and how to recognize patterns, but it just doesn’t have the ability to understand how to socialize, and nothing can change that short of rewiring its brain. Hand it a letter and it’ll try to catch fish with it rather than even considering trying to read it.
AI is almost the reverse of this. An LLM has “evolved” (been trained) to write stuff that sounds good, with little emphasis on understanding what it writes. The “understanding” is about patterns in writing rather than underlying logic. This means that if the LLM encounters something that isn’t standard language, it will “flail” and start trying to apply what it knows, regardless of how well it applies. In the chess example, this might mean just responding with the most common move, regardless of whether it can actually be played. Ultimately, no matter what you input, an LLM is trying to find and replicate patterns in language, not underlying logic.