

The first statement is not even wholly true. While training does consume more power, executing the model (called “inference”) also takes much, much more power than a non-AI search algorithm, or really any traditional computational algorithm besides bogosort.
Big Tech weren’t doing the best they possibly could at transitioning to green energy, but they were making substantial progress before LLMs exploded onto the scene, because the value proposition was there: traditional algorithms were efficient enough that the PR gain from the green energy transition offset its cost.
Now Big Tech have for some reason decided that LLMs represent the biggest gamble ever. The first to find the breakthrough to AGI will win it all and completely take over every IT market, so each company consumes as much power as it can get away with to maximize the probability that its engineers are the ones to make that breakthrough.
The technological progress LLMs represent has run its course. They’re a technological dead end. They have no practical application because of hallucinations, and hallucinations are baked into the very core of how they work. Any further progress will come from experts learning from the successes and failures of LLMs, abandoning them, and building entirely new AI systems.
AI as a general field is not a dead end, and it will continue to improve. But we’re nowhere near the AGI that tech CEOs keep promising LLMs are so close to.