Andrej Karpathy writes that we should think of LLMs more as [simulators] than as [beings]:
Don't think of LLMs as entities but as simulators. For example, when exploring a topic, don't ask:
"What do you think about xyz"?
There is no "you". Next time try:
"What would be a good group of people to explore xyz? What would they say?"
The LLM can channel/simulate many perspectives but it hasn't "thought about" xyz for a while and over time and formed its own opinions in the way we're used to. If you force it via the use of "you", it will give you something by adopting a personality embedding vector implied by the statistics of its finetuning data and then simulate that. It's fine to do, but there is a lot less mystique to it than I find people naively attribute to "asking an AI".
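The reframing Karpathy describes can be sketched as two prompt builders. This is a minimal illustration, not anything from his note: the function names, the `topic` value, and the exact panel wording are assumptions for the example.

```python
# Sketch of the two prompt framings Karpathy contrasts.
# Function names and the sample topic are illustrative assumptions.

def build_entity_prompt(topic: str) -> str:
    """The 'entity' framing he advises against: there is no 'you'."""
    return f"What do you think about {topic}?"

def build_simulator_prompt(topic: str) -> str:
    """The 'simulator' framing: ask the model to channel a panel of perspectives."""
    return (
        f"What would be a good group of people to explore {topic}? "
        "What would they say?"
    )

if __name__ == "__main__":
    topic = "open-source licensing"  # hypothetical example topic
    print(build_entity_prompt(topic))
    print(build_simulator_prompt(topic))
```

The point of the second form is that it makes the simulation explicit instead of forcing the model to adopt a single personality implied by its finetuning statistics.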
That is really, really bad. No surprise there; it is just unpleasant to be confronted with the harsh, uncomfortable reality right in the face.
On the other hand, I can only laugh at their ridiculous claims that free speech and democracy are under threat in Europe.
What. I hadn't expected that. Good effort! :)