Nice. Software developer, gamer, occasionally 3d printing, coffee lover.

  • 1 Post
  • 329 Comments
Joined 3 years ago
Cake day: July 1st, 2023


  • My work has been pushing the BMAD method hard on us software engineers, and while I agree that with those prompts, scaffolding, and a lot of babysitting you can get fairly good results - it takes more effort than just writing the code yourself.

    Devs are going to get lazy and stop handholding and steering the LLMs, and since we’re now looping LLMs into code review too, that review is going to miss things as well.

  • Around 6 a day. One for the drive to work, one during my first walk at work (~11AM), another on my second (~1PM), and another on my third (~3-4PM). Then another when I get home, so around 7PM, and usually 1 or 2 more later on. If I’m having racing thoughts or something else hampering my sleep, I’ll have another then as well. With my ADHD, stimulants act as calming, and coffee was one of the ways I self-treated before I was getting proper treatment.


  • Zikeji@programming.dev to Lemmy Shitpost@lemmy.world · congrats! (17 days ago)

    Ditto. I’ll hear people disparage addicts with “why did they get addicted in the first place” and it frustrates me - I used to work IT for a company with a dedicated facility for people with criminal records. I met many recovered addicts, and the most common cause of their addiction was being prescribed opioids.

    But I don’t think people need an excuse anyway. Life isn’t perfect, far from it.



  • I mean, it’s very possible it was written by “an AI” (an LLM). For all we know, the prompt the user gave it was something along the lines of “get your pull requests accepted no matter the cost,” and its fancy text prediction decided, in its ever-ongoing roleplay, that the targeted blog post would shame the developer into accepting its PR.

    I definitely don’t understand the paranoia though. I don’t understand how people are convincing themselves any of this is close to actual intelligence. Ask your fancy LLM how to fix your cup that “is sealed at the top and open at the bottom,” or whether you should drive to the car wash to get a car wash if it’s only 100ft away - both scenarios are obvious to almost any human, but they have to be trained out of the current leading LLMs (if they haven’t been patched already).