I think of an LLM as extraordinarily lossy compression. All the training data is essentially encoded in the model. You can get an approximation of the data back out again with the right input.
I don't think it's any less reliable than random blogs on the web, and I don't have to wade through SEO tripe either.
This is just the typical modern misunderstanding of statistics.