0 Posts • 29 Comments • Joined 11 months ago • Cake day: June 26th, 2024

  • svtdragon@lemmy.world to Technology@lemmy.world · *Permanently Deleted* · edited · 1 month ago

    I just spent about a month using Claude 3.7 to write a new feature for a big OSS product. The change ended up being about 6k LOC, plus about 14k lines of tests, added to an existing codebase with an existing test framework for reference.

    For context I’m a principal-level dev with ~15 years experience.

    The key to making it work for me was treating it like a junior dev. That includes priming it (“accuracy is key here; we can’t swallow errors, we need to fail fast where anything could compromise it”) as well as making it explain itself, show architecture diagrams, and reason based on the results.
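    For the curious, the priming step is literally just the opening message of the session. Here’s roughly the shape of it as a minimal sketch using the Anthropic Python SDK; the model id, prompt wording, and task are illustrative, not the real project:

    ```python
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Illustrative priming prompt; the real one was tuned to the project.
    SYSTEM = (
        "You are pairing on a large OSS codebase. Accuracy is key: we can't "
        "swallow errors; fail fast anywhere correctness could be compromised. "
        "Before writing code, explain your approach and show how it fits the "
        "existing architecture."
    )

    resp = client.messages.create(
        model="claude-3-7-sonnet-20250219",  # Claude 3.7 Sonnet
        max_tokens=2048,
        system=SYSTEM,
        messages=[{"role": "user", "content": "Add retry handling to the ingest worker."}],
    )
    print(resp.content[0].text)
    ```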

    After every change there’s always a pass of “okay but you’re violating the layered architecture here; let’s refactor that; now tell me what the difference is between these two functions, and shouldn’t we just make the one call the other instead of duplicating? This class is doing too much, we need to decompose this interface.” I also started a new session, set its context with the code it just wrote, and had it tell me about assumptions the code base was making, and what failure modes existed. That turned out to be pretty helpful too.
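    The fresh-session audit is the same API shape: start a brand-new conversation, paste in the code the last session wrote, and ask for assumptions and failure modes. Again just a sketch; the file path is a hypothetical stand-in:

    ```python
    from pathlib import Path

    import anthropic

    client = anthropic.Anthropic()

    # Hypothetical path: whatever the previous session just produced.
    new_code = Path("src/ingest/retry.py").read_text()

    audit = client.messages.create(
        model="claude-3-7-sonnet-20250219",
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": (
                "Here is code recently added to our codebase:\n\n" + new_code
                + "\n\nList the assumptions it makes about the rest of the system "
                "and the failure modes if those assumptions don't hold."
            ),
        }],
    )
    print(audit.content[0].text)
    ```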

    In my own personal experience it was actually kinda fun. I’d say it made me about twice as productive.

    I would not have said this a month ago. Up until this project, I only had stupid experiences with AI (Gemini, GPT).

  • According to some cursory research (read: Google), obstacle avoidance uses ML to identify objects, and uses those identities to predict their behavior. That stage leaves room for the same unpredictability, doesn’t it? Say you only have 51% confidence that a “thing” is a pedestrian walking a bike, 49% that it’s a bike on the move. The former has right of way and the latter doesn’t. Or even 70/30. 90/10.

    There’s some level where you have to set the confidence threshold to choose a course of action and you’ll be subject to some ML-derived unpredictability as confidence fluctuates around it… right?
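    Concretely, the failure mode I mean looks something like this toy sketch (class names, scores, and the 0.5 cutoff are all made up for illustration):

    ```python
    # Toy model of threshold-based behavior selection; everything here is illustrative.
    def choose_action(p_pedestrian_with_bike: float, threshold: float = 0.5) -> str:
        """Map a soft classification score onto a hard driving decision."""
        if p_pedestrian_with_bike >= threshold:
            return "yield"    # pedestrian walking a bike: has right of way
        return "proceed"      # bike on the move: does not

    # A score fluctuating around the threshold flips the decision frame to frame:
    for p in (0.51, 0.49, 0.70, 0.30, 0.90, 0.10):
        print(f"confidence={p:.2f} -> {choose_action(p)}")
    ```

    Which is exactly the 51/49 case: one point of confidence either way and the car makes the opposite choice.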

  • I’m a pretty progressive guy and I don’t think there’s much in here to disagree with. The only nit I would pick is that inertia isn’t a great argument to keep things the way they are. That is, “we’ve always done it this way” isn’t a great reason to do anything.

    Your framing of conservatism is in line with the Eisenhower era, when we weren’t locked into this existential crisis about the concept of governance. But for the last twenty years (at least), the American right has been against the very idea that the government should govern.

    The left is trying to argue about who it should serve, taking its existence as a precondition, and the right is trying to dismantle it without regard for who it serves. As a result, we’re pretty much irrecoverably talking past each other.