I think the simple fact some of the people in this thread don't understand is that the people they're asking to vet the code don't know how.
They may mean that the people who can vet code should do so before making a fuss about the AI-written portions of it, but I don't know that most of the people opposing their comments understand that context.
I haven't coded anything since the '90s. I know HTML and basic CSS, and that's it. I wouldn't have known where to start without guides to explain what commands in Linux do and how they work together. Despite growing up with various versions of Windows and DOS, I'd still consider myself a novice computer user. I absolutely do know how to go into the command line and make things happen, but I wouldn't know where to start to make a program. It's not part of my skill set.
Most users are like that: they engage with only parts of a thing. It's why so many people these days are computer illiterate, given the rise of smartphones and apps for everything.
It'd be like me asking a frequent flyer to inspect a plane engine for damage or figure out why the landing gear doesn't retract. A lot of people wouldn't know where to start.
I fully agree that other coders on the internet, the ones who frequent places like GitHub and make it a point to review other devs' freely provided code, probably should vet the code before they make assumptions about its quality. And I fully agree that deliberately stirring shit without actually contributing anything meaningful to the community or the project is really just messed-up behavior.
But the way I see it, there are two different groups here, and they have very different views of this situation.
The people who can't code are consumers. Their contribution is to use the software if they want, and if it works for them to spread by word of mouth what they like about it. Maybe to donate if they can and the dev accepts donations.
If those people choose to boycott, it'll be on the basis of their moral feelings about the use of AI or at the recommendation of the second group due to quality.
The second group are the peer reviewers so to speak and they can and should both vet the code and sound the alarm if there's something wrong.
I suppose there's a third subset of people in the case of FOSS work who can, and often do, help with projects, and I wonder whether AI makes things better or worse there for the reasons listed in the thread, like poorly written human code and simple mistakes.
Humans certainly aren't infallible. But at least they can tell you how they got the output they got or the reason why they did x. You can have a rational conversation with a human being and for the most part they aren't going to make something up unless they have an ulterior motive.
Perhaps breaking things down into tiny chunks makes AI better, or at least makes its outputs more usable. Maybe there's a "sweet spot".
But I think people also get worried because, a lot of the time, people who use AI start to offload their own thinking onto it, and that's dangerous for many reasons.
This person also admits to having depression. Depression can affect how you respond to information and how well you actually understand the information in front of you. It can make you forget things you know, or make them that much harder to recall.
I know that from experience. So in this case does the AI have more potential to help or do harm?
There's a lot to this. I have not personally used Lutris, but before this happened I wouldn't have thought twice about saying I've heard good things about it if someone asked me for Heroic-launcher-style software for Linux.
But just like the Ladybird fork of Firefox I don't know that I feel comfortable suggesting it if this is the state of things. For the same reason I don't currently feel comfortable recommending Windows 11 or Chrome.
There are so many sensitive things that OSes and web browsers handle that people take for granted. If nobody sounded the alarm about those, I feel like nothing would get better. By contrast, Lutris isn't swimming in a big pond of sensitive information, but it is running on people's hardware, and users should have both the right to be informed and the right to choose.

During a conversation with my sister about going back to school to finish her electrical engineering degree, she basically said this:
(She went back to school to finish her degree)
She also mentioned that a lot of professors are trying to walk a thin line between failing students (who will then go to places like ratemyprofessor.com and leave what essentially amount to bad reviews, which can threaten the professor's employment) and passing students who aren't actually grasping the basics. I think social media is just compounding that problem.
Imagine working in fast food and already getting complaints all the time and then having to worry about someone putting you on a rate my server website where they trash talk you and you have no recourse to have that information taken down.
At least with Yelp it's not first and last names, and it's the business that takes the flak.