
  • Yeah, and it more so moves a lot of your work over to other important stuff.

    Namely, planning things better, reading, documenting, and coming up with more specific scenarios to test.

    Before, because I'd spend an extra chunk of my time on that 90%, my documenting would be mid at best, stuff would slip through, and my pile of "I probably should get around to documenting that stuff" kept growing and growing.

    And then while maybe I can vaguely think "yeah, I bet there's edge cases for this stuff I didn't make tests for", it's followed by "But I don't have time for that shit, I have to have this done by end of day"

    Meanwhile with LLMs, I can set one off to cook on that 90% chunk of work, and while it's cooking I can chat with another LLM instance, iterating back and forth on "what are some possible gotchas in this logic? What are edge case scenarios to test?" By the time the agent finishes coding, I have like 20 edge case tests to copy-paste over to it: "Hey, make tests for all these cases, make sure they all work as expected

    <big copy paste of scenarios and expected outcomes>

    "

    It shifts my focus from monkey work over to stuff that matters more: finding and poking holes in the code, trying to break it, making sure it withstands stress and edge cases, and finding possible gaps and flaws in it.

    When you focus like that, you definitely become way more productive.

    As opposed to people who just give up and, yeah, as you said, are lazy: they hand off the work to the LLM but don't make up for it by redirecting that energy to other places of value. They're gonna go, I dunno, run a raid in WoW or something, fuck knows.

  • There's a fundamental minimum amount of boilerplate you just have to write to make a functioning app, even if it's simply describing "this thing does this"

    For example, if I'm making a web API, there's just fundamentally a chunk of boilerplate that wires up "this HTTP endpoint points to this domain logic over here"

    And then there's gonna be some form of preamble describing "it takes in this input, it returns this response, and here's all its validation"

    And while it's simple code, and very simple to test, it's still a bunch of LOC that any half-assed dev can write.

    Stuff like that, AI can shit out very quickly, given the input requirements doc that you, the dev, were gonna get anyway.

    And then you, the dev, can fill in the actual logic that matters after all that basic boilerplate.

    "Yes, it has a phone number input, it's required, and it must match the phone number regex we defined." So... shocker, you gotta put a string called PhoneNumber on the input model, and, another shocker, it's gotta have the phone number validation on it and required non-empty string validation on it.

    It doesn't take much trust in the LLM to get that sort of stuff right, but it saves me a whole bunch of time.
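The kind of boilerplate being described can be sketched in a few lines; a minimal Python sketch, where the model name, field, and regex are all made-up stand-ins for whatever the requirements doc actually specifies:

```python
import re
from dataclasses import dataclass

# Hypothetical pattern: the real regex would come from the requirements doc.
PHONE_NUMBER_RE = re.compile(r"^\+?[0-9]{7,15}$")

@dataclass
class ContactInput:
    phone_number: str  # required; must match PHONE_NUMBER_RE

def validate(model: ContactInput) -> list[str]:
    """Return validation errors; an empty list means the input is valid."""
    errors = []
    if not model.phone_number:
        errors.append("PhoneNumber is required and must be non-empty")
    elif not PHONE_NUMBER_RE.match(model.phone_number):
        errors.append("PhoneNumber must match the phone number format")
    return errors
```

Exactly the sort of mechanical "required string plus regex check" code that's trivial to verify but still costs keystrokes.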

  • Pretty much, it's the actually important code you wanna pay attention to.

    The majority of code is just connecting pipe A up to pipe B; it's honestly fine for an LLM to handle.

    The job security comes from, as a developer, knowing which code goes in the 90% bin vs which goes in the 10% bin. Being able to tell the difference is part of the job now.

  • Meanwhile, everyone I work with is loving the smooth Copilot integration with VS Code.

    It's so good at automating boilerplate stuff.

    Especially testing; oh god, does it make writing tests faster. I just tell it the scenarios that have to be tested and boom, 1000 lines of boilerplate produced in like 5 minutes.

    And when it has existing code to use as a reference for how to do it right, it does a very solid job, especially on repetitive stuff like tests, since usually 95% of the code in a test is just arrange-phase boilerplate setting up the scenario.

    Also "hey, go add XML docs to all the new public functions and types we made" and it just goes and does it. Love that, lol.

    Once you acknowledge that like 90% of your code is boilerplate, and that Sonnet/Opus are extremely capable at handling that stuff when they have existing references to go off of, you can just focus on the remaining 10% of "real" work.
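The "95% arrange boilerplate" shape is easy to see in a sketch: the dev dictates a scenario table, and the LLM expands it into the repetitive arrange/act/assert code. A minimal Python illustration, with a made-up function under test:

```python
# Hypothetical function under test; every name here is made up for illustration.
def normalize_phone(raw: str) -> str:
    """Strip separators, keeping only digits and a leading '+'."""
    return "".join(ch for ch in raw if ch.isdigit() or ch == "+")

# The scenario table is the part worth a human's time; the rest is boilerplate.
SCENARIOS = [
    ("plain digits pass through", "5551234567", "5551234567"),
    ("dashes are stripped", "555-123-4567", "5551234567"),
    ("spaces and parens are stripped", "(555) 123 4567", "5551234567"),
    ("leading + is kept", "+1 555 123 4567", "+15551234567"),
]

def run_tests() -> int:
    """Run every scenario; return the number that passed."""
    passed = 0
    for name, raw, expected in SCENARIOS:
        actual = normalize_phone(raw)
        assert actual == expected, f"{name}: got {actual!r}, expected {expected!r}"
        passed += 1
    return passed
```

Adding the twentieth edge case is one line in the table, which is why generating this kind of code is such an easy win.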

  • They use Discord for community stuff, i.e., non-employee interactions. People can join those communities to learn; they have several.

    Teams is used for internal employee chat.

  • We have extensive corporate AI systems (software engineers), and an entire wing of our company dedicated to AI exploration and development.

  • Something some coworkers have started doing that is even more rude, in my opinion, as a new social etiquette: AI-summarizing my own writing in response to me, or just outright copy-pasting my question into GPT and then pasting the answer back to me.

    Not even "I asked ChatGPT and it said"; they just dump it in the chat @ me.

    Sometimes I'll write up a 2-3 paragraph thought on something.

    And then I'll get a ping 15 minutes later, go take a look at what someone responded with, annnd... it starts with "Here's a quick summary of what (pixxelkick) said!

    <AI slop that misquotes me and just gets it wrong>

    "

    I find this horribly rude tbh, because:

    1. If I wanted to be AI-summarized, I would do that myself, damnit
    2. You just clogged up the chat with garbage
    3. Like 70% of the time it misquotes me or gets my points wrong, which muddies the convo
    4. It's just kind of... dismissive? Instead of just fucking reading what I wrote (and I consider myself pretty good at conveying a point), they pump it through the automatic enshittifier without my permission/consent and dump it straight into the chat, as if that is now the talking point instead of my own post one comment up

    I have had to very gently respond each time a person does this at work and state that I am perfectly able to AI-summarize myself on my own, and while I appreciate the attempt, it's... just coming across as wasting everyone's time.

  • I don't need a 5-minute podcast to explain the fact that LLMs are literally selecting the statistically most common answers, because that's literally what an LLM is.

    That's like being surprised that the top 5 answers on Family Feud were the most commonly picked answers from the poll.

    Dawg... that's literally how it works, lol.
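The mechanism really is that simple to sketch: the model outputs a probability distribution over next tokens, and (under greedy decoding, at least) the decoder just takes the most likely one. A toy sketch with entirely made-up numbers:

```python
# Toy next-token distribution (made-up probabilities) for a prompt like
# "the sky is". Under greedy decoding, the model emits the token with the
# highest probability, i.e. the most statistically common continuation.
next_token_probs = {
    "blue": 0.62,
    "clear": 0.18,
    "falling": 0.11,
    "plaid": 0.09,
}

def greedy_pick(probs: dict[str, float]) -> str:
    """Return the single most likely next token."""
    return max(probs, key=probs.get)
```

Sampling decoders add randomness on top, but the common answers still dominate because they carry most of the probability mass.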

  • If the site is meant to be a mechanism to circumvent policy with no other use case, it will probably get banned.

    That's not "free speech", and free speech is only a protected right up until it starts to impact other human rights.

    People seem to forget that free speech is not the only protected right, and that in most countries it's definitely not #1 on the priority list; usually several other rights come before it.

  • The site will be hosted at "freedom.gov," the sources said

    So... they will just ban that too, then. What a waste of time and money, lol.

  • You... understand that the Cortana on Windows is literally named as a reference to Cortana from Halo, right? Because Microsoft owns Halo... They named it after her, so it's the same "Cortana", so to speak.

    It's a fundamental counterpoint to the OP's post, because it wasn't made feminine as some kind of mental-gymnastics misogyny thing; it's just a nerdy reference to an existing character Microsoft had the rights to. It's not that deep.

  • Cortana was female.

    I pointed this out in another post, but Cortana is also a "higher level" AI in her original depiction (Halo), which is inherently a counterpoint to the OP's post anyway...

  • It counts as an assistant, and it's a gender-neutral name because it's a place name, not a person's name.

    Its name was specifically chosen to be weird and different, to avoid the "accidentally triggers in common conversation" problem that other assistants tend to have.

  • The name was chosen because it was a play on the SRI technology and because it is a girl’s name.

    The name was actually chosen because it was originally going to be the name of one of the founders' soon-to-be daughter, but his child ended up being a son, so he gave the name to the machine instead, effectively as his "second child"...

    So it had literally nothing to do with whatever point the poster was trying to make, and everything to do with a sense of paternal love, if anything, lol... People will find literally fucking anything to mald over, even making shit up to try and make it sound right.

  • Yeah, but it's also a heavy counterpoint to the point in the post, because Cortana was already a "higher level of autonomy" AI in her first depiction (the Halo games) from the start, and Microsoft named the assistant after the character because Microsoft bought Halo and was just doing a nod to her... So that's literally an outright counterpoint to whatever mental gymnastics the poster was doing...

  • Siri is definitely a gender-neutral name; I've literally met more dudes named Siri than gals when out traveling, especially in eastern Europe.

  • Literally the only AI assistant I'm aware of that was given a feminine persona out of the gate is Alexa, which is Amazon's.

    Every single other one has been purposefully kept gender neutral.

    They intentionally gave Siri a gender-neutral name ages ago, because you can pick what its voice sounds like.

    Same for Gemini, Copilot, GPT...

    Only 1 out of many agents had a female name, and it wasn't "tech bros" that named it.

    And only one tool has been given a male name: Claude.

  • That's it? You'd think after this many years of war, the proverbial thumbscrews would have been tightened way more than that on Russia.

    What is this limp-wristed version of trying to fight back? Way too many countries are still letting Russia get away with this shit.

  • Canadians are chomping at the bit here for a reason to go burn down the White House again.