Yes, that is true. The last 10-20% are usually the hardest. I think LLMs will only become slightly better with each generation at first. My prediction is that there will be another big step forward towards AGI when these models can learn from interacting with themselves. And this might also result in a potentially dangerous AGI.
LLMs are neural networks! Yes, they are trained on meaningful text to predict the following word, but they are still NNs. And after they are trained on human-generated text, they can also be further trained on other sources and in other ways. The question is how an interaction between LLMs should be evaluated: when does an LLM find one good word, or a series of good words? I have not described this, and I am also not sure what would be a good way to evaluate that.
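To make the idea a bit more concrete, here is a rough toy sketch of where such an evaluation function would have to sit in a self-play loop. Nothing here is a real training setup: ToyModel, generate, update, and score are all placeholders I made up, and score is exactly the part I said I don't know how to define.

```python
import random

# Toy sketch only: "generate" picks random words, and "score" stands in
# for the evaluation function that is the open question above.

VOCAB = ["the", "cat", "sat", "on", "a", "mat"]

class ToyModel:
    def generate(self, prompt: str, length: int = 5) -> str:
        # Stand-in for an LLM producing a reply to a prompt.
        return " ".join(random.choice(VOCAB) for _ in range(length))

    def update(self, dialogue: list[str], reward: float) -> None:
        # In a real system this would be a gradient step (e.g. an
        # RL-style update) weighted by the reward.
        pass

def score(dialogue: list[str]) -> float:
    # Placeholder. Deciding what makes "a series of good words" good
    # is exactly the unsolved part of the idea.
    return 0.0

def self_play_step(model: ToyModel, prompt: str) -> None:
    reply_a = model.generate(prompt)    # the model talks...
    reply_b = model.generate(reply_a)   # ...and answers itself
    dialogue = [prompt, reply_a, reply_b]
    reward = score(dialogue)            # the open question
    model.update(dialogue, reward)      # reinforce if it was "good"

if __name__ == "__main__":
    self_play_step(ToyModel(), "the cat")
```

The loop itself is trivial; everything interesting hides inside score, which is why I raised the evaluation question in the first place.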
Anyway, I am sad now. I was looking forward to having some interesting discussions about LLMs. But all I get is downvotes and comments like yours that tell me I am an idiot without telling me why.
Maybe I did not articulate my thoughts well enough. But it feels like people want to misinterpret what I'm saying.