FTA: The user considered it the unpaid volunteer coders’ “job” to take his AI submissions seriously. He even filed a code of conduct complaint against the developers with the project. This was not upheld. So he proclaimed the project corrupt. [GitHub; Seylaw, archive]
This is an actual comment that this user left on another project: [GitLab]
As a non-programmer, I have zero understanding of the code and the analysis and fully rely on AI and even reviewed that AI analysis with a different AI to get the best possible solution (which was not good enough in this case).
Microsoft has set up Copilot to make contributions to the dotnet runtime: https://github.com/dotnet/runtime/pull/115762 I’m sure maintainers spend more time reviewing and interacting with Copilot than it would have taken them to write it themselves.
I am not a programmer and I think it’s silly to think that AI will replace developers.
But I was working through a math problem in Moscow Puzzles with my kiddo.
We had solved it, but I wasn’t sure he got it at a deep level. So I figured I’d do something in Excel or maybe just do cut-outs. But I figured I’d try to find a web app that would do this better. Nothing really came up that was a good match. But then I thought, let’s see how bad AI programming can be. I’d fought with it over some Excel functions and it’d been mainly useful in pointing me in the right direction, but only occasionally getting me over the finish line.
After about 6 to 8 hours of work, a little debugging, having it teach and quiz me occasionally, and some real frustration pointing out that features it had previously changed kept re-emerging, I eventually had something that worked.
The Shooting Range Simulator is a web-based application designed to help users solve a logic puzzle involving scoring points by placing blocks on vertical number lines.
A developer buddy of mine said: “I took a quick scroll through the code. Looks pretty clean, but I didn’t dive in enough to really understand it. Definitely all that CSS BS would take me ages to do without AI.”
I don’t take credit for this and don’t pretend that this was my work, but I know my kiddo is excited to try the tool. I hope he learns from it and we bond over a math problem.
I know that everyone is worried about this tool, but moments like those are not nothing. Personally, I’m a Luddite and think the new tools should be deployed by the people whose livelihoods they will affect, and not by the business owners.
Personally, I’m a Luddite and think the new tools should be deployed by the people whose livelihoods they will affect, and not by the business owners.
Thank you for correctly describing what a Luddite wants and does not want.
Yes, despite the irrational phobia amongst the Lemmings, AI is massively useful across a wide range of examples like the one you’ve just given, as it reduces barriers to building something.
As a CS grad, the problem isn’t it replacing all programmers, at least not immediately. It’s that a senior software engineer can manage a bunch of AI agents, meaning there’s less demand for developers overall.
Same way tools like Wix, Facebook, etc. came in and killed the need for a bunch of web developers who served small businesses.
As a CS grad, the problem isn’t it replacing all programmers, at least not immediately. It’s that a senior software engineer can manage a bunch of AI agents, meaning there’s less demand for developers overall.
Yes! You get it. That right there proves that you’ll make it through just fine. So many in this thread are denying that AI is gonna take jobs. But you gave a great scenario.
If AI was good at coding, my game would be done by now.
I’ll admit I did use AI for code before, but here’s the thing. I already coded for years, and I usually try everything before resorting to last-resort things. And I find that approach works well. I rarely needed to go the AI route. I used it for like .11% of my coding work, and I verified it through stress testing.
Can Open Source defend against copyright claims for AI contributions?
If I submit code to ReactOS that was trained on leaked Microsoft Windows code, what are the legal implications?
If I submit code to ReactOS that was trained on leaked Microsoft Windows code, what are the legal implications?
None. There is a good chance that leaked MS code found its way into training data, anyway.
what are the legal implications?
It would be so fucking nice if we could use AI to bypass copyright claims.
“No officer, I did not write this code. I trained AI on copyrighted material and it wrote the code. So I’m innocent”
It’s not good because it has no context on what is correct or not. It’s constantly making up functions that don’t exist or attributing functions to packages that don’t exist. It’s often sloppy in its responses because the source code it parrots is some amalgamation of good coding and terrible coding. If you are using this for your production projects, you will likely not be knowledgeable when it breaks, it’ll likely have security flaws, and will likely have errors in it.
And I’ll keep saying this: you can’t teach a neural network to understand context without creating a generalised context engine, another word for which is AGI.
Fidelity is impossible to automate.
So you’re saying I’ve got a shot?
AI is at its most useful in the early stages of a project. Imagine coming to the fucking ssh project with AI slop thinking it has anything of value to add 😂
The early stages of a project are exactly where you should think long and hard about what exactly you want to achieve, what qualities you want the software to have, what the detailed requirements are, how you test them, and what the UI should look like. And from that, you derive the architecture.
AI is fucking useless at all of that.
In all complex planned activities, laying the right groundwork and foundations is essential for success. Software engineering is no different. You wouldn’t order a bricklayer’s apprentice to draw up the plans for a new house.
And if your difficulty is in lacking detailed knowledge of a programming language, it might be - depending on the case! - the best approach to write a first prototype in a language you know well, so that your head is free to think about the concerns listed in the first paragraph.
the best approach to write a first prototype in a language you know well
Ok, writing a web browser in POSIX shell using yad now.
I’m going back to TurboBASIC.
writing a web browser in POSIX shell
Not HTML but the much simpler Gemini protocol - well, you could have a look at Bollux, a Gemini client written in shell, or at ereandel:
https://github.com/kr1sp1n/awesome-gemini?tab=readme-ov-file#terminal
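For the curious, Gemini really is that much simpler than the web stack: a request is just the URL plus CRLF sent over TLS to port 1965, and the response is a single status line followed by the body. Here is a rough Python sketch (illustrative only; gemini_fetch and the hypothetical host are made up, and a real client would do TOFU certificate pinning rather than skipping verification):

```python
# A rough sketch of a single Gemini request in Python (illustrative only; the
# clients mentioned above, Bollux and ereandel, are shell scripts). Gemini
# servers usually present self-signed certificates, so real clients pin them
# TOFU-style rather than disabling verification as done here.
import socket
import ssl
from urllib.parse import urlparse

def gemini_fetch(url: str, timeout: float = 10.0) -> tuple[str, bytes]:
    """Fetch a gemini:// URL and return (status line, raw body bytes)."""
    host = urlparse(url).hostname
    context = ssl.create_default_context()
    context.check_hostname = False   # a real client would verify via TOFU pinning
    context.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, 1965), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall((url + "\r\n").encode("utf-8"))  # the entire request
            response = b""
            while chunk := tls.recv(4096):
                response += chunk
    header, _, body = response.partition(b"\r\n")
    return header.decode("utf-8"), body  # header looks like "20 text/gemini"

# Example (hypothetical host):
# status, body = gemini_fetch("gemini://example.org/")
```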
AI is only good for the stage when…
AI is only good in case you want to…
Can’t think of anything. Edit: yes, I really tried
Playing the Devil’s advocate was easier than being AI’s advocate.
I might have said it’s good when you’re pitching a project and want to show some UI stuff, maybe, without having to code anything.
But you know, there are actually specialised tools for that, which UI/UX designers used to show me what I needed to implement.
And when I am pitching UI, I just use pencil and paper, and it is so much more efficient than anything AI, because I don’t need to talk to something to make a mockup that then gets used to talk to someone else. I can just draw it in front of the other guy with zero preparation, right as it comes into my mind, and don’t need to pay for any data center usage. And if I need to go paperless, there are whiteboards/blackboards/greenboards and Inkscape.

After having banged my head trying to explain code to a new developer, so that they can hopefully start making meaningful contributions, I don’t want to be banging my head on something worse than a new developer, hoping that it will output something that is logically sound.
AI is good for the early stages of a project … when it’s important to create the illusion of rapid progress so that management doesn’t cancel the project while there’s still time to do so.
Ahh, so an outsourced con man computer.
It’s good as a glorified autocomplete.
Except that an autocomplete with simple, lightweight and appropriate heuristics can actually make your work much easier, and will not make you read its output again and again before you can be confident about it.
True, and it doesn’t boil the oceans and poison people’s air.
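For contrast, the kind of simple, lightweight autocomplete heuristic described above fits in a few lines: rank the identifiers already in the buffer by frequency and recency, and offer the ones that match the typed prefix. The suggest function and sample snippet below are a sketch, not any particular editor’s implementation:

```python
# A sketch of a lightweight, non-AI autocomplete heuristic: suggest identifiers
# already present in the buffer that match the typed prefix, ranked by how
# often and how recently they appear. All names here are made up; real editors
# add scoping, fuzzy matching, etc.
import re
from collections import Counter

IDENT = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")

def suggest(buffer: str, prefix: str, limit: int = 5) -> list[str]:
    """Return up to `limit` identifiers from `buffer` starting with `prefix`."""
    names = IDENT.findall(buffer)
    counts = Counter(names)
    last_seen = {name: i for i, name in enumerate(names)}
    candidates = [n for n in counts if n.startswith(prefix) and n != prefix]
    # Most frequent first, most recently seen as the tie-breaker.
    candidates.sort(key=lambda n: (-counts[n], -last_seen[n]))
    return candidates[:limit]

code = "total_price = unit_price * qty\nprint(total_price)\n"
print(suggest(code, "tot"))  # -> ['total_price']
```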
Have you used AI to code? You don’t say “hey, write this file” and then commit it as “AI Bot 123 aibot@company.com”.
You start writing a method and get auto-completes that are sometimes helpful. Or you ask the bot to write out an algorithm. Or to copy something and modify it 30 times.
You’re not exactly keeping track of everything the bots did.
yeah, that’s… one of the points in the article
I’ll admit I skimmed most of that train wreak of an article - I think it’s pretty generous saying that it had a point. It’s mostly recounts of people complaining about AI. But if they hid something in there about it being remarkably useful in cases but not writing entire applications or features then I guess I’m on board?
Well, sometimes I think the web is flooded with advertising and spam praising AI. For these companies it makes perfect sense, because billions of dollars have been spent by these companies and they are trying to cash in before the tide might turn.
But do you know what is puzzling (and you do have a point here)? Many posts that defend AI do not engage in logical argumentation; they argue beside the point, appeal to emotions, use the short-circuited argument that “new” always equals “better”, or claim that AI is useful for coding as long as the code is not complex (compare that to the objection that mathematics is simple as long as it is not complex, which is a red herring and a laughable argument). So, many thanks for pointing out the above and giving, in a few words, a bunch of examples which underline that one has to think carefully about this topic!
The problem is that you really only see two sorts of articles.
AI is going to replace developers in 5 years!
AI sucks because it makes mistakes!
I actually see a lot more of the latter response on social media to the point where I’m developing a visceral response to the phrase “AI slop”.
Both stances are patently ridiculous though. AI cannot replace developers and it doesn’t need to be perfect to be useful. It turns out that it is a remarkably useful tool if you understand its limitations and use it in a reasonable way.
Don’t forget all these artists and developers are staring unemployment in the face so it’s no wonder they phone it in when they “try” to use AI.
“Make me a program that does this complex thing across many systems… It didn’t work on the first try AI SLOP REEEEEEE!”
Forks suck at eating soup yet are still useful.
Great analogy! Even in this thread there are heaping amounts of copium, with people saying, “Meh, AI will never be able to do my job.”
I fucking promise in 5 years, AI will be doing the job they have right now. lol
it’s a car that only explodes once in a blue moon!
No, it’s a car that breaks down once you go faster than 60km/h. It’s extremely useful if you know what you’re doing and use it only for tasks that it’s good at.
if that’s the analogy you want, make it 20 km/h
Hey @dgerard@awful.systems, care to weigh in on this “train wreak [sic] of an article?”
I asked GitHub Copilot and it added
import wreak
to .NET, so we’ll get back to you.
I used it only as last resort. I verify it before using it. I only had used it for like .11% of my project. I would not recommend AI.
My dude, I verify code other humans write. Do you think I’m not verifying code written by AI?
I highly recommend using AI. It’s much better than a Google search for most things.
Or to copy something and modify it 30 times.
This seems like a very bad idea. I think we just need more lisp and less AI.
Good point.
This is the point that the “AI will do it all” crowd is missing. Current AI doesn’t innovate. Full stop. It copies.
The need for new code written by folks who understand what they’re writing isn’t gone, and won’t go away.
Whether those folks can be AI is an open question.
Whether we can ever create an AI that can actually innovate is an interesting open question, with little meaningful evidence in either direction, today.
“Hey AI - Create a struct that matches this JSON document that I get from a REST service”
Bam, it’s done.
Or
“Hey AI - add a schema prefix to all of the tables and insert statements in the SQL script.”
People have such a hate boner for AI here that they are downvoting actual good use of it…
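For anyone who hasn’t used it this way: the struct-from-JSON prompt above typically yields plain boilerplate along these lines. This is a sketch with a made-up payload, since the actual JSON document isn’t shown; the Product class and its fields are hypothetical.

```python
# Illustrative only: the JSON document from the comment above isn't shown, so
# this payload, the Product class, and its fields are hypothetical. The point
# is the shape of the boilerplate such a prompt produces: one class per JSON
# object plus a small constructor from the decoded dict.
import json
from dataclasses import dataclass

SAMPLE = '{"id": 42, "name": "widget", "price": 9.99, "tags": ["a", "b"]}'

@dataclass
class Product:
    id: int
    name: str
    price: float
    tags: list[str]

    @classmethod
    def from_json(cls, text: str) -> "Product":
        data = json.loads(text)
        return cls(id=data["id"], name=data["name"],
                   price=data["price"], tags=list(data["tags"]))

print(Product.from_json(SAMPLE))
# -> Product(id=42, name='widget', price=9.99, tags=['a', 'b'])
```

The SQL example is the same category of chore: mechanically adding a schema prefix to every table reference is tedious to type and trivial to review.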
Yeah, integrating APIs has really become trivial with copilots. You just copy-paste the documentation and all the boring stuff is done in the blink of an eye! I love it.
It’s exactly the sort of “tedious yet not difficult” task that I love it for. Sometimes you need to clean things up a bit but it does the majority of the work very nicely.
If humans are so good at coding, how come there are 8,100,000,000 people and only 1,500 are able to contribute to the Linux kernel?
I hypothesize that AI has average human coding skills.
Average drunk human coding skills
A million drunk monkeys on typewriters can write a work of Shakespeare once in a while!
But who wants to pay $50 for a front-row theater ticket to see a play written by monkeys?
Well, according to Microsoft, mildly drunk coders work better.
The average coder is a junior, due to the explosive growth of the field (similar to how, in some fast-growing nations, the average age is very young). Thus what is average is far below what good code is.
On top of that, good code cannot be automatically identified by algorithms. Some very good codebases might look bad at a superficial level. For example, the codebase of LMDB is very different from what common style guidelines suggest, but it is actually a masterpiece which is widely used. And vice versa, it is not difficult to make crappy code look pretty.
“Good code” is not well defined and your example shows this perfectly. LMDB’s codebase is absolutely horrendous when your quality criteria for good code are Readability and Maintainability. But it’s a perfect masterpiece if your quality criteria are Performance and Efficiency.
Most modern Software should be written with the first two in mind, but for a DBMS, the latter are way more important.
Microsoft is doing this today. I can’t link it because I’m on mobile. It is in dotnet. It is not going well :)
Yeah, can’t find anything on dotnet getting poisoned by AI slop, so until you link it, I’ll assume you’re lying.
I guess they were referring to this.
OMG, this is gold! My neighbor must have wondered why I am laughing so hard…
The “reverse centaur” comment citing Cory Doctorow is so true it hurts - they want people to serve machines and not the other way around. That’s exactly how Amazon’s warehouses work, with workers being paced by factory-floor robots.
My theory is that not a lot of people like this AI crap. They just lean into it for fear of being left behind. Now you all think it’s just gonna fail and go bankrupt. But a lot of ideas in America are subsidized. And they don’t work well, but they still go forward. It’ll be you, the taxpayer, that will be funding these stupid ideas that don’t work, that are hostile to our very well-being.
Ask Daniel Stenberg.
Who makes a contribution under the name aibot514? No one. People use AI for open source contributions, but more in a ‘fix this bug’ way, not in a fully automated contribution under the name ai123 way.
Counter-argument: If AI code was good, the owners would create official accounts to create contributions to open source, because they would be openly demonstrating how well it does. Instead all we have is Microsoft employees being forced to use and fight with Copilot on GitHub, publicly demonstrating how terrible AI is at writing code unsupervised.
Bingo
Bing. O.
Big O
Yes, that’s exactly the point. AI is terrible at writing code unsupervised, but it’s amazing as a supportive tool for real devs!
Mostly closed source, because open source rarely accepts them as they are often just slop. Just assuming stuff here, I have no data.
To be fair, if a competent dev used an AI “autocomplete” tool to write their code, I’m not sure it’d be possible to detect those parts as AI code.
I generally dislike those corporate AI tools, but I gave Copilot a try when writing some Terraform scripts, and it actually had as many good suggestions as bad ones. However, if I hadn’t known the language and the resources I was deploying that well, it’d probably have led me into a deep hole, trying to fix the mess after blindly accepting every suggestion.
They do more than just autocomplete, even in autocomplete mode. These AI tools suggest entire code blocks and logic and fill in multiple lines, compared to a standard autocomplete. And to use them as a standard autocomplete tool, no AI is needed. Using them like that wouldn’t be bad anyway, so I have nothing against it.
The problems arise when the AI takes away the thinking and brain work of the actual programmer. Plus you as a user get used to it and basically get “addicted”. Independent thinking and programming without AI will become harder and harder if you use it for everything.
They do more than just autocomplete, even in autocomplete mode. These AI tools suggest entire code blocks and logic and fill in multiple lines,
We know. “Improved autocomplete” is still an accurate (the most accurate) description for what current generation AI can do.
When compared to current autocomplete, AI is a delight. (Though it has a long way to go to improve at not adding stupid bullshit. But I’m confident that will get better.)
When measured against a true intelligence, I know I’m interacting with a newbie or a con man, because there’s no honest reason an informed person would even consider making that comparison.
People seem to think that the development speed of any larger, more complex piece of software depends on how fast the wizards can type in code.
Spoiler: This is not the case. Even if a project is a mere 50,000 lines long, one is the solo developer, and one has pretty good or even expert domain knowledge, one spends the major part of the time thinking, perhaps looking up documentation, or talking with people, and the most-used key on the keyboard doesn’t need a Dvorak layout, because it is the “delete” key. In fact, you don’t need to know touch-typing to be a good programmer; what you need is to think clearly and logically and be able to weigh many different options against a variety of complex goals.
Which LLMs can’t.
I don’t think it makes writing code faster; it just may reduce the number of key presses required.
And when they contribute to existing projects, their code quality is so bad, they get banned from creating more PRs.
The creator of curl just posted a rant about users submitting AI-slop vulnerability reports. It has gotten so bad that they will reject any report they deem AI slop.
So there’s some data.