I'm curious though how much vibe coding and AI mandates are responsible for the latest disaster of a patch. Or if they're all just results of a dysfunctional company culture.
Dang, I want to find this article more relatable than I do. Most software I have dev experience with doesn't have the problem of relying too much on automated tests, but the exact opposite.
And while I very much write tests for the dopamine high and false sense of security green checkmarks provide, I still prefer that to the real sense of un-security of not having tests.
I think I installed the cursed Windows 11 update on my work machine, because after taking several tries to boot, my second monitor stopped working (detected, but showing a black screen).
Tried some different configurations, and could get at most one screen working.
Uninstalled the update and everything worked correctly again.
Against my better judgement I got into an argument with a promptfan on Bluesky. To his credit, aside from the usual boring arguments ("models are getting better and better", "have you tried model xyz", "everyone not using chatbots will be left in the dust"), he provided an actual example.
https://github.com/dfed/SafeDI/issues/183
It's a bug that's supposedly easy to test, but hard to reason about. Took the chatbot half an hour while it would have taken him several hours (allegedly).
Now, my first thought was: "If a clanker (something that famously can't reason) could do it, then it couldn't be that hard to reason about."
But I was curious so I looked. Unfortunately it is an area I'm not familiar with and in a language (Swift) I don't know at all.
Probably should file the claim under "not true or false" and touch grass or something, but it's bugging me.
Any of y'all able to say if there's something interesting in there?
Tim traveller? The YouTube channel?
https://youtube.com/@thetimtraveller