
  • Yeah, it's permeated way more than AAA.

    But trying to convince game devs not to use AI is about as likely to succeed as convincing them to stop using their IDEs.

    What will actually happen is that everyone will just stop announcing they're using it, and every month that goes by it'll get harder and harder to tell.

  • keep AI out of games

    Good luck; it's here to stay, get used to it lol.

    Anyone who thinks the average developer isn't using AI heavily in their code is delusional; it's been baked into every major IDE for about two years now.

    It's in there, it's permeated every layer of game dev, it works when you use it right, and the only time people care is when you make it obvious (i.e. including it in the final art of the game).

    But no one even blinks an eye at all the other layers AI is used in unless you announce it.

    You should just assume every game you play made after 2024 has chunks of it that are AI generated. The plot, writing, code... it's in there, and you probably haven't even noticed.

  • Your skills only atrophy if you slack off playing video games while the agents cook.

    If you're actually productive and spend that freed-up time working on the tasks the agents can't do fast and easy, aka the hard stuff, your skills will instead improve even faster, because now you're spending most of your time on the important tasks rather than wasting 95% of your workday on easy boilerplate stuff anyone with two brain cells could pump out.

  • Have you actually read the study? People keep citing this study without reading it.

    To directly measure the real-world impact of AI tools on software development, we recruited 16 experienced developers from large open-source repositories (averaging 22k+ stars and 1M+ lines of code) that they’ve contributed to for multiple years. Developers provide lists of real issues (246 total) that would be valuable to the repository—bug fixes, features, and refactors that would normally be part of their regular work.

    They grabbed 16 devs who didn't have pre-existing workflows set up for optimizing AI usage, and just threw them into it as a measure of "does it help".

    Imagine if I grabbed 16 devs who had never used neovim before, threw them into it without any plugins installed or configuration, and tried to use that as a metric for "is nvim good for productivity".

    People need to stop quoting this fuckass study lol, it's basically meaningless.

    I'm a developer using agentic workflows, with over 17 years of experience.

    I am telling you right now: with the right setup, I weekly turn 20-hour jobs into 20-minute jobs.

    Predominantly large "bulk" operations that are mostly just necessary boilerplate code, where the AI has a huge existing codebase to draw from as samples and I just give it instructions of "see what already exists? Implement more of that, following

    <spec>

    "

    A great example is integration testing where like 99% of the code is just boilerplate.

    Arrange the same setup every time. Arrange your request following an openapi spec file. Send the request. Assert on the response based on the openapi spec.
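    That arrange/act/assert loop can be sketched mechanically. A minimal Python sketch (the poster's stack is .NET, and the mini-spec, endpoints, and canned fake client below are entirely hypothetical stand-ins; a real suite would read a real OpenAPI file and send actual requests):

    ```python
    # Hypothetical miniature OpenAPI-style spec: one entry per endpoint,
    # describing the expected status and the keys the response must contain.
    SPEC = {
        ("GET", "/users/{id}"): {"status": 200, "keys": {"id", "name"}},
        ("POST", "/users"): {"status": 201, "keys": {"id"}},
    }

    def fake_send(method, path):
        # Stand-in for a real HTTP client; returns canned (status, body) pairs.
        canned = {
            ("GET", "/users/{id}"): (200, {"id": 1, "name": "Ada"}),
            ("POST", "/users"): (201, {"id": 2}),
        }
        return canned[(method, path)]

    def run_integration_tests(spec, send):
        results = []
        for (method, path), expected in spec.items():
            # Arrange: the same setup every time (nothing to do in this sketch).
            # Act: send the request the spec describes.
            status, body = send(method, path)
            # Assert: the response matches what the spec promises.
            ok = status == expected["status"] and expected["keys"] <= set(body)
            results.append(((method, path), ok))
        return results

    results = run_integration_tests(SPEC, fake_send)
    assert all(ok for _, ok in results)
    ```

    Every test is the same loop body with different data, which is exactly why this kind of suite is such a good fit for generation from a spec file.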

    I had an agent pump out 120 integration tests based on a spec file yesterday, and they were, for the most part, 100% correct. In like an hour.

    The same volume of work would've easily taken me way longer.

  • More like "why the fuck would I walk all the way across the city now that I own a car"

    Once you find out how a bunch of boring bulk tasks can be automated away, and 20 hours of work turns into 20 minutes, you really don't wanna go back to the old way.

    If someone asks me to code C# in Notepad without my IDE, can I do it? Sure.

    But it fucking sucks losing all your hotkeys, refactor quick actions, autocomplete, and LSP error checking...

    Would you find it weird for someone to state they'd rather use an IDE than not when coding, because it saves so much time and effort?

  • At absolute worst, at bare minimum, these tools function as incredibly fast, fuzzy, intent-based searchers over documentation.

    Instead of spending 10 minutes on "where the hell is (documentation) I'm trying to find", these tools can hunt it down for me in a matter of seconds.

    That already makes them useful just for that, let alone all the other crazy shit they help with now.

  • People are malding, but it's the truth.

    You are living under a rock if you think any major software now doesn't have AI-written pieces in it in some manner.

    It's so common now that it's a waste of time to label it; you should just assume AI was involved at this point.

  • What the fuck are you talking about? That's not what the poster said; you've done some weird contorting of what they said to arrive at the question you're asking now.

    While some tests make sense, I would say about 99% of the tests I see developers write are indeed a waste of time; a shit tonne of devs are effectively writing code that boils down to

        Assert.That(2, Is.EqualTo(1 + 1));

    Because they mock the shit out of everything, they have reduced their code to meaningless piles of fakes and mocks and aren't actually testing what matters.
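    To make that failure mode concrete, here's a minimal Python sketch (all names are hypothetical): once every collaborator is mocked, the "test" only verifies the mock you just configured.

    ```python
    from unittest.mock import Mock

    def apply_discount(price, discount_service):
        # Code under test: the interesting logic lives in the collaborator.
        return price - discount_service.discount_for(price)

    # The over-mocked version: we tell the mock to return 10, then "verify"
    # that 100 - 10 == 90. No real discount logic is exercised; this is
    # Assert.That(2, Is.EqualTo(1 + 1)) in disguise.
    service = Mock()
    service.discount_for.return_value = 10
    assert apply_discount(100, service) == 90

    # A meaningful test exercises a real collaborator instead.
    class PercentDiscount:
        def __init__(self, percent):
            self.percent = percent

        def discount_for(self, price):
            return price * self.percent / 100

    assert apply_discount(200, PercentDiscount(15)) == 170
    ```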

    Do you do code reviews in meetings?

    Honestly, often... yes lol.

    Do you think testing and reviewing code was a waste of time before “AI”?

    I would say a lot of it is, tbh. Not all of it, but a huge amount of time is wasted on this process by humans, for humans.

    What the poster was getting at is that a lot of these processes that USED to be INEFFICIENT now make MORE sense in the context of agents... you have vastly taken their point out of context.

  • Not really. For humans, a lot of this stuff feels like busywork that sort of helps at certain scales of work, but managers often went WAY too hard on it, and you end up with a 2-dev team that spends like 60% of their time in meetings instead of... developing.

    But this changes a lot with AI agents, because the tools that help rein in developers REALLY help rein in agents. It feels... like a surprisingly good fit.

    And I think the big reason why is that you want to treat AI agents like junior devs: capable, fast, but very prone to errors and getting sidetracked.

    So you put these sorts of steering and guardrails in, and it REALLY goes far towards channelling their... enthusiasm in a meaningful direction.

  • I vastly prefer Copilot over Claude, using Sonnet 4.5~4.6 for most tasks and then pulling out Opus as "the big guns" for tougher stuff Sonnet can't handle easily.

    Copilot only costs me ~$28 a month, which gets me 1,500 premium requests per month.

    If you set up your flows well, one premium request is an entire session, so I'm only paying about 2 cents for 20 minutes of work.

  • It's serious, and this is going to become more and more normal.

    My entire workflow has become more and more Agile-sprint TDD (but with agents) as I improve.

    Literally setting up agents to yell at each other genuinely improves their output. I have created and harnessed the power of a very toxic robot work environment. My "manager" agent swears and yells at my dev agent. My code review agent swears at the dev agent and calls their code garbage and shit.

    And the crazy thing is it's working; the optimal way to genuinely prompt-engineer these stupid robots is by swearing at them.

    It's weird, but it overrides the "maybe the human is wrong/mistaken" stuff they'll fall back to if they run into an issue; instead they'll go "no, I'm probably being fucking stupid" and keep trying.

    I create "sprint" markdown files that the "tech lead" agent converts into technical requirements, then I review that, then the manager+dev+tester agents execute on it.
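    As an illustration only (the actual files aren't shown here, so this layout is a made-up example of what such a sprint file could contain before the "tech lead" agent expands it):

    ```markdown
    # Sprint 14 - contact form hardening

    ## Goals
    - Add server-side validation for the contact form inputs
    - Cover the new validation with integration tests

    ## Constraints
    - Follow the existing validation patterns in the codebase
    - All new public types get XML docs
    ```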

    You do, truly, end up focusing more on higher level abstract orchestration now.

    Opus 4.6 is genuinely pretty decent at programming now if you give it a good backbone to build off of.

    • LSP MCPs, so it gets code feedback
    • debugger MCPs, so it can set breakpoints and inspect call stacks
    • explicit whitelisting of the CLI commands it can run, to prevent it from chasing rabbits down holes with the CLI and getting lost
    • test-driven development, to keep it on the rails
    • a "manager" orchestrating agent above the others, to avoid context pollution
    • a designated reviewer agent with a shit list of the agents' known common mistakes
    • a benchmark project to get heat traces of problem areas in the code (if you care about performance)

    This sort of stuff can carry you really far in terms of improving the agent's efficacy.

  • I just use the Scalar API browser on my Aspire stack.

    That way devs don't have to install anything; running the Aspire project auto-spins up Scalar, and they can use it.

    1. Open link from aspire dashboard
    2. Paste bearer token
    3. Play with the api

    It's like if Swashbuckle and Postman had a baby.

  • Yeah, and moreover it moves a lot of your work over to other important stuff.

    Namely: planning things better, reading, documenting, and coming up with more specific scenarios to test.

    Before, because I'd spend an extra chunk of my time on that 90%, my documentation would be mid at best, stuff would slip through, and my pile of "I probably should get around to documenting that stuff" kept growing and growing.

    And while maybe I could vaguely think "yeah, I bet there are edge cases for this stuff I didn't write tests for", it was followed by "but I don't got time for that shit, I have to have this done by end of day".

    Meanwhile, with LLMs, I can set one off to cook on that 90% chunk of work, and while it's cooking I can chat with another LLM instance and iterate back and forth on "what are some possible gotchas in this logic? What are edge case scenarios to test?". By the time the agent has finished coding, I have like 20 edge case tests to copy-paste over to it: "Hey, make tests for all these cases, make sure they all work as expected

    <big copy paste of scenarios and expected outcomes>

    "

    It shifts my focus from monkey work over to the stuff that matters more: finding and poking holes in the code, trying to break it, making sure it withstands stress and edge cases, and finding possible gaps and flaws in it.

    When you focus like that, you definitely become way more productive.

    As opposed to the people who just give up and, yeah, as you said, are lazy: they hand the work off to the LLM but don't make up for it by redirecting that energy to other places of value. They're gonna go, I dunno, run a raid in WoW or something, fuck knows.

  • There's a fundamental minimum amount of boilerplate you just have to write to make a functioning app, even if it's simply describing "this thing does this".

    For example, if I'm making a web API, there's just fundamentally a chunk of boilerplate that wires up "this HTTP endpoint points to this domain logic over here".

    And then there's gonna be some form of preamble describing "it takes this input, it returns this response, and here's all its validation".

    And while it's simple code, and very simple to test, it's still a buncha LOC that any half-assed dev can write.

    Stuff like that, AI can shit out very quickly, given the input requirements doc that you, the dev, were gonna get anyway.

    And then you, the dev, can fill in the actual logic that matters after all that basic boilerplate stuff.

    "Yes, it has a phone number input, it's required, and it must fit the phone number regex we defined. So... shocker, you gotta put a string called PhoneNumber on the input model, and, another shocker, it's gotta have the phone number validation and the required non-empty string validation on it."

    It doesn't take much trust in the LLM to get that sort of stuff right, but it saves me a whole bunch of time.
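    A minimal sketch of that kind of boilerplate in Python (the poster's stack is C#/.NET; the model name, field, and regex below are hypothetical stand-ins for whatever the requirements doc actually specifies):

    ```python
    import re
    from dataclasses import dataclass

    # Hypothetical validation rule from the requirements doc:
    # a required, non-empty string matching a simple phone format.
    PHONE_RE = re.compile(r"^\d{3}-\d{3}-\d{4}$")

    @dataclass
    class CreateContactRequest:
        # The "shocker": the input model needs a PhoneNumber field...
        phone_number: str

        def validate(self):
            # ...and the equally unsurprising validation to go with it.
            errors = []
            if not self.phone_number:
                errors.append("PhoneNumber is required and must be non-empty.")
            elif not PHONE_RE.match(self.phone_number):
                errors.append("PhoneNumber must match the phone number format.")
            return errors

    assert CreateContactRequest("555-123-4567").validate() == []
    assert CreateContactRequest("").validate() != []
    assert CreateContactRequest("not-a-phone").validate() != []
    ```

    None of this requires trusting the model with anything subtle; it's exactly the kind of mechanical translation from a requirements doc described above.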

  • Pretty much; it's the actually important code you wanna pay attention to.

    The majority of code is just connecting pipe A up to pipe B; that's honestly fine for an LLM to handle.

    The job security comes from knowing, as a developer, which code goes in the 90% bin vs which goes in the 10% bin; being able to tell the difference is part of the job now.

  • Meanwhile everyone I work with is loving the smooth copilot integration with vscode.

    It's so good at automating boilerplate stuff.

    Especially testing; oh god, does it make writing tests faster. I just tell it the scenarios that have to be tested and boom, 1,000 lines of boilerplate produced in like 5 minutes.

    And when it has existing code to use as a reference for how to do it right, it does a very solid job, especially on repetitive stuff like tests, since usually 95% of the code in a test is just arranging and boilerplate setup for the scenario.

    Also, "hey, go add XML docs to all the new public functions and types we made", and it just goes and does it. Love that lol.

    Once you acknowledge that like 90% of your code is boilerplate, and that Sonnet/Opus are extremely capable of handling that stuff when they have existing references to go off of, you can just focus on the remaining 10% of "real" work.

  • They use Discord for community stuff, i.e. non-employee interactions. People can join those communities to learn; they have several.

    Teams is used for internal employee chat.

  • We have extensive corporate AI systems (we're software engineers); there's an entire wing of our company dedicated to AI exploration and development.

  • Something some coworkers have started doing that is even more rude, in my opinion, as a new social etiquette: AI-summarizing my own writing in their response to me, or just outright copy-pasting my question into GPT and then pasting the answer back at me.

    Not even "I asked ChatGPT and it said"; they just dump it in the chat @ me.

    Sometimes I'll write up a 2~3 paragraph thought on something.

    And then I'll get a ping 15 minutes later, go take a look at what someone responded with, annnd... it starts with "Here's a quick summary of what (pixxelkick) said!

    <AI slop that misquotes me and just gets it wrong>

    "

    I find this horribly rude tbh, because:

    1. If I wanted to be AI-summarized, I would do that myself, damnit.
    2. You just clogged up the chat with garbage.
    3. Like 70% of the time it misquotes me or gets my points wrong, which muddies the convo.
    4. It's just kind of... dismissive? Instead of just fucking reading what I wrote (and I consider myself pretty good at conveying a point), they pump it through the automatic enshittifier without my permission/consent and dump it straight into the chat, as if that is now the talking point instead of my own post one comment up.

    I have had to very gently respond each time a person does this at work and state that I am perfectly able to AI-summarize myself on my own, and that while I appreciate their attempt, it's... just coming across as wasting everyone's time.