

If I didn’t have an argument with a pro-“AI” (it’s not AI, I refuse to call it that) person in my fucking post history about just this fucking issue, maybe I’d be more willing to agree with you here. But no, the people who keep trying to get me to use so-called “AI” seem to believe that it can reason, or, at least, that it can be convinced to reason. So yes, I will use this article to “hate on AI”, because the “AI” lovers seem to believe that ChatGPT should be capable of something like this. When clearly, fucking obviously, it isn’t. It isn’t those of us who hate so-called “AI” who are trying to claim that these text predictors can reason; it’s the people who like them and want to force me to use them who make this claim.
Yeah, this is why I’ve (mostly) stopped engaging about so-called “AI”. Because the responses I get are complete shit, and the whole topic makes me furious.
I actually do know a little bit about machine learning in general, because my thesis advisor was tangentially involved with machine learning research.
I’m also not a fan of intellectual property rights. I know I called LLMs something like “hallucinating plagiarism machines” at some point, which I probably shouldn’t have, because it makes it sound like I care about them “stealing” intellectual property. That’s not my issue with them, but from that phrase it sounds like it could be.
But, anyway, I shouldn’t have responded to your comment, I know I shouldn’t have. Every single interaction on the internet involving so-called “AI” makes me more certain I need to stop having online interactions regarding so-called “AI”. This one is no different.
I’m quite done with this conversation, and you almost certainly are too, so I’ll just say: I hope you have a pleasant day, and hopefully next time we see each other on hexbear we can have a more pleasant interaction.