Every time I read that, it sounds like it just brute-forces the problem by remixing and recomposing pre-existing approaches and solutions.
The thing with math is that the field is gargantuan, yet all proofs are fundamentally deterministic, and computers are basically the output (and application) of mathematical theory. Solving a problem humans haven’t solved isn’t “novel” just because no human has iterated over that exact problem, when the solution simply remixes the components of previous solutions. If I type a number into a calculator that has never been typed before, I’m not gonna call the calculator’s output “novel”; the calculator has already been instructed how to solve the problem (just as the LLM has already ingested related examples). I won’t accept that the correlation engine created a “novel” solution until it applies logic that has never been applied elsewhere, i.e. was never included in its training data.
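To make the calculator analogy concrete, here’s a minimal sketch (pure illustration; the rule and the “unseen” input are made up): a fixed deterministic rule applied to an input nobody has tried before yields an output nobody has seen, yet nothing about the method is new.

```python
# A "calculator": one fixed, deterministic rule, written in advance.
def calculate(x: int) -> int:
    # The rule itself is old; only the input may be unseen.
    return x * x + 1

# Hypothetically, a number nobody has ever typed into this rule before.
never_typed = 982_451_653_137

# The output is "new" in the trivial sense that nobody computed it before,
# but the logic that produced it predates the input entirely.
print(calculate(never_typed))
```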
Even then, it’s still highly likely that we’d be calling something “novel” simply because the correlation is weak or obfuscated to us; if we could process every mathematical proof ever documented, we’d probably see the pattern and solve the problem too. At this point I think it’s likely we’ll convince ourselves we’ve achieved AGI well before we come close, because its memory/recall and pattern recognition are far more advanced than what we can comprehend being possible without real intelligence. It has made me re-evaluate Skynet in Terminator: what if it was never “self-aware”, and capitalism incompetently hooked nukes and killbots up to an AI that randomly, illogically, decided to exterminate humanity? Not for its own benefit. Not for the planet’s benefit. Just pure unadulterated stupidity and hubris from a bunch of talking chimps… oh god, I’ve gone cross-eyed.