The uncertainty comes from trying to reverse-engineer how a specific output relates to the prompt input. The model arrives at the answer to "What is the closest planet to the Sun?" through an extremely diffuse, statistical computation. We can't meaningfully trace which nodes in the neural network were triggered or in what order, so we can't precisely say how the answer was computed.
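To make that concrete, here's a minimal sketch (not any real LLM's code, and the weights, prompt embedding, and toy vocabulary are all made-up values): even in a tiny network where we can dump every intermediate activation, none of those numbers corresponds to a human-readable reasoning step.

```python
# Illustrative only: a tiny feed-forward net standing in for billions of real parameters.
import numpy as np

rng = np.random.default_rng(0)

# Pretend embedding of the prompt "What is the closest planet to the Sun?"
prompt_embedding = rng.normal(size=16)

# Two random dense layers; a real model has billions of learned weights.
W1 = rng.normal(size=(16, 32))
W2 = rng.normal(size=(32, 4))
vocab = ["Mercury", "Venus", "Earth", "Mars"]  # toy output vocabulary

hidden = np.maximum(prompt_embedding @ W1, 0.0)   # ReLU activations
logits = hidden @ W2
probs = np.exp(logits) / np.exp(logits).sum()     # softmax over the toy vocab

print("answer:", vocab[int(np.argmax(probs))])
print("hidden activations:", hidden.round(2))
# We can inspect every activation value, but nothing here maps to a step like
# "recall that Mercury orbits closest to the Sun" -- the "how" stays opaque.
```

That opacity, multiplied across billions of parameters and many layers, is why nobody can point to the exact path by which a given answer was produced.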
It's not an inherent obligation, but if you genuinely oppose those ideas, you'd feel compelled to argue against them if someone expressed them, or else remove yourself from the situation ASAP.
I never said discussing LLMs was itself philosophical. I said that as soon as you ask the question "but does it really know?", you are immediately entering the territory of the theory of knowledge, whether you're talking about humans, about dogs, about bees, or, yes, about AI.
I'll preface by saying I agree that AI doesn't really "know" anything and is just a randomised Chinese Room. However...
Acting like the entire history of the philosophy of knowledge is just some attempt to make "knowing" seem more nuanced is extremely arrogant. The question of what knowledge is isn't just relevant to the discussion of AI; it's fundamental to understanding how our own minds work. When you form arguments about how AI doesn't know things, you're basing them purely on the human experience of knowing things. But that calls into question how you can be sure you even know anything at all. We can't just take it for granted that our perceptions are a perfect example of knowledge; we have to interrogate that and see what it is that we can do that AIs can't, or worse, discover that our assumptions about knowledge, and perhaps even our own abilities, are flawed.
Not enough information for a meaningful answer