You got it! rule
OmegaMouse@pawb.social to 196@lemmy.blahaj.zone · 2 months ago · 27 comments
TʜᴇʀᴀᴘʏGⒶʀʏ⁽ᵗʰᵉʸ‘ᵗʰᵉᵐ⁾@lemmy.blahaj.zone · 2 months ago
I had a 20 Questions app on my iPod Touch about 15 years ago that guessed correctly every time.
Edit: I just tried it with GPT-4 (via Perplexity) and it managed to guess "carrot" on question #10, but it took 22 questions to get "human".
Tar_Alcaran@sh.itjust.works · 2 months ago
Because 20 Questions is basically a big decision tree, and LLMs don't make decisions; they generate output based on what other output looks like.
I Cast Fist@programming.dev · 2 months ago
So, no LLM has been trained on Akinator. For shame, big tech, for shame.
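
The decision-tree framing in the comment above can be made concrete. Below is a minimal, hypothetical Python sketch of a 20 Questions-style tree: each yes/no answer commits to a branch, and the whole game is just a walk from root to leaf. The `Node` class, the `play` function, and the tiny example tree are invented for illustration; they are not how the iPod app, Akinator, or any LLM actually works.

```python
# Toy sketch of the point above: 20 Questions is, at its core, a decision
# tree. Each yes/no answer commits to a branch; an LLM has no such explicit
# structure to traverse. Names and the tiny tree are made up for illustration.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    """A question node; leaves carry a guess instead of a question."""
    question: Optional[str] = None   # yes/no question to ask
    yes: Optional["Node"] = None     # branch taken on "yes"
    no: Optional["Node"] = None      # branch taken on "no"
    guess: Optional[str] = None      # set only on leaf nodes


# A deliberately tiny tree: 3 questions distinguish up to 2^3 = 8 answers.
TREE = Node(
    question="Is it alive?",
    yes=Node(
        question="Is it a plant?",
        yes=Node(guess="carrot"),
        no=Node(guess="human"),
    ),
    no=Node(
        question="Is it electronic?",
        yes=Node(guess="iPod Touch"),
        no=Node(guess="rock"),
    ),
)


def play(node: Node) -> str:
    """Walk the tree by asking yes/no questions until a leaf is reached."""
    while node.guess is None:
        answer = input(f"{node.question} (y/n) ").strip().lower()
        node = node.yes if answer.startswith("y") else node.no
    return node.guess


if __name__ == "__main__":
    print(f"Is it... {play(TREE)}?")
```

The reason a well-built tree guesses so reliably is simple arithmetic: 20 yes/no questions can in principle distinguish up to 2^20 (roughly a million) candidates, provided each question splits the remaining possibilities roughly in half.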