It's quite bad at what we're told it's supposed to do (producing reliably correct responses), hallucinating up to 40% of the time.

It's also quite bad at *not* doing what it's not supposed to do. Meaning the "guardrails" that are meant to prevent it from giving harmful information can usually be circumvented by rephrasing the prompt or some form of "social" engineering.

And on top of all that, we don't actually understand how these models work at a fundamental level. We don't know how LLMs "reason", and there's every reason to assume they don't actually understand what they're saying. Asking the LLM to explain its reasoning is of course for naught, as the same logic applies: it just makes up something that approximately sounds like a suitable line of reasoning.

Even for comparatively trivial networks, like the ones used for handwritten digit recognition, which we can visualise entirely, it's difficult to tell how the conclusion is reached. Some neurons seem to detect certain patterns; others seem to be just noise.
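To make the last point concrete, here's a minimal, self-contained sketch (synthetic data and all names are my own, not from any particular system): a one-layer softmax classifier on tiny 8x8 images. Even in this toy case, "understanding" the model amounts to reshaping each output neuron's weight vector back into an 8x8 grid and eyeballing it; some pixels form a recognisable template, the rest is effectively noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n_per_class=200):
    # Two made-up "digit-like" classes: a vertical bar vs. a horizontal bar,
    # both buried in Gaussian noise.
    X, y = [], []
    for label in (0, 1):
        for _ in range(n_per_class):
            img = rng.normal(0.0, 0.3, size=(8, 8))
            if label == 0:
                img[:, 3:5] += 1.0   # vertical bar
            else:
                img[3:5, :] += 1.0   # horizontal bar
            X.append(img.ravel())
            y.append(label)
    return np.array(X), np.array(y)

X, y = make_data()
W = np.zeros((64, 2))  # one weight vector per class ("output neuron")

# Plain gradient descent on softmax cross-entropy.
for _ in range(300):
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    onehot = np.eye(2)[y]
    W -= 0.01 * X.T @ (p - onehot) / len(X)

acc = (np.argmax(X @ W, axis=1) == y).mean()

# The only "explanation" available: reshape each class's weights back
# into an 8x8 grid and look at which pixels it responds to.
templates = W.T.reshape(2, 8, 8)
```

Here the templates happen to be legible because the classes were designed to be, but with real handwritten digits the learned weight maps are already much murkier, and this whole trick stops working as soon as you add hidden layers.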
You can't prevent people from believing whatever insane nonsense they wish, and I don't think anyone should.

We should, however, ban institutionalised religion. No one's imaginary friend should be given any legal rights or powers, let alone tax exemptions.
It is a good distro. It's just that the documentation is either outdated, inaccurate, or nonexistent, and that the learning curve is parallel to the y-axis.

The community, somewhat understandably, has limited time and willingness to explain the same things over and over, yet for some reason refuses to conclude that rigorously maintaining a wiki is its best bet to combat this situation, and would ultimately be less effort than repeating the same explanations.
But doesn't that apply just as much to Fairtrade and other similar certifications? Tony's is Fairtrade certified. It seems weird to give Fairtrade as a guide for brands not on the list, but then exclude one of them specifically.
Oh, I did not catch that. It's entirely possible that Tony's being on the boycott list is based on outdated information. As far as I know it's just one person maintaining the page. I think you can also contact them.