
  • I'd say it's simply because most people on the internet (the dataset LLMs are trained on) say things with absolute confidence, whether or not they actually know what they're talking about. So AIs talk confidently because most people do. It could also be something about how they are configured.

    Again, they don't know whether they know the answer; they just produce whatever is statistically most probable given your message and their prompt.

  • You're giving way too much credit to LLMs. AIs don't "know" things, such as "humans lie". They are basically a very complex autocomplete backed by a huge amount of computing power. They cannot "lie", because they don't even understand what they are writing.
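    The "very complex autocomplete" picture above can be sketched in a few lines. This is a toy bigram model (all words and probabilities here are invented for illustration, not taken from any real model) that repeatedly samples the statistically most likely next word, with no notion of truth or confidence:

    ```python
    import random

    # Toy "language model": for each preceding word, a probability
    # distribution over possible next words. Real LLMs do the same kind
    # of thing at vastly larger scale, with probabilities learned from
    # internet text.
    NEXT_WORD_PROBS = {
        "the": {"answer": 0.5, "question": 0.3, "model": 0.2},
        "answer": {"is": 0.9, "was": 0.1},
        "is": {"42": 0.6, "unknown": 0.4},
    }

    def generate(start, steps, seed=0):
        """Extend `start` one word at a time by sampling the next word."""
        rng = random.Random(seed)
        out = [start]
        for _ in range(steps):
            dist = NEXT_WORD_PROBS.get(out[-1])
            if dist is None:
                break  # nothing learned after this word; stop
            words = list(dist)
            weights = [dist[w] for w in words]
            # Sample the next word. High-probability continuations read
            # as "confident" regardless of whether they are true: the
            # model only tracks frequency, not facts.
            out.append(rng.choices(words, weights=weights)[0])
        return " ".join(out)

    print(generate("the", 3))
    ```

    The point of the sketch: nowhere in `generate` is there a check for whether a statement is correct, only for which continuation is probable.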

  • It's funny, I for one have never had to worry about going to jail after sex... Maybe it has to do with not getting friends to take turns on someone who wasn't even aware this could be a possibility, who knows.

  • You seem to be conveniently forgetting the whole part about the judge completely ignoring the 5 men and focusing on accusing the victim of lying. That has nothing to do with proving the guilt or innocence of the accused.