Hello all, long-time lurker, sometimes poster. In my line of work some of my co-workers seem eager to turn to the clanker for an instant answer to any roadblock. I feel it's better to problem-solve the old-fashioned way, with some good old research and finding a blog that is not AI slop LOL. Do those of you in a support role feel any peer pressure to use LLMs?

  • Boe@lemmy.blahaj.zone · 9 hours ago

    Really, it depends. With good observability and the know-how to make the LLM read the correct data from an MCP server connected to the observability platform, you can zero in on the cause of a complicated incident very quickly. That said, there are a lot of issues that just can't be solved, or even discovered, this way. Typically, I put on my detective hat and get as much info as possible, then lean on the help when I get stuck. I've found that using them as a tool rather than an answer machine is the best way around this. More often than not, I find the answer before the tool becomes the next logical step.
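
    To give an idea of the MCP side, here's a minimal sketch of a log-query tool, assuming the official MCP Python SDK (FastMCP) and a Grafana Loki endpoint; the URL and query details are placeholders for whatever your platform actually exposes:

    ```python
    # Minimal MCP tool the LLM can call to pull logs during an incident.
    # Assumes the official MCP Python SDK and a (hypothetical) Loki instance
    # reachable at LOKI_URL -- adapt to your own observability stack.
    import os

    import httpx
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("observability")
    LOKI_URL = os.environ.get("LOKI_URL", "http://localhost:3100")

    @mcp.tool()
    def query_logs(logql: str, limit: int = 100) -> str:
        """Run a LogQL query and return the matching log lines."""
        resp = httpx.get(
            f"{LOKI_URL}/loki/api/v1/query_range",
            params={"query": logql, "limit": limit},
            timeout=30,
        )
        resp.raise_for_status()
        streams = resp.json()["data"]["result"]
        lines = [line for s in streams for _ts, line in s["values"]]
        return "\n".join(lines) or "no matches"

    if __name__ == "__main__":
        mcp.run()  # serve the tool over stdio for an MCP-capable client
    ```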

  • lonesomeCat@lemmy.ml · 13 hours ago

    Yup. Lots of peer pressure to use LLMs here. They say it'll make me ship faster. Well, I prefer to understand what I did, since I'm sure I'll need that info for later debugging.

  • ɔiƚoxɘup@infosec.pub · 11 hours ago

    I like using it for finding documentation that explains why x doesn't work, or what something does and why.

    Just getting answers is hollow. Having a research assistant speeds up the learning process. To me that is worthwhile.

    • flandish@lemmy.world · 11 hours ago

      I tell people that if they're gonna use it, they should use it like a rubber duck or as RAG over devdocs.io. Wish I could limit a query to just devdocs, honestly.
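
      Something like this would get close, at least for one doc set (the documents.devdocs.io db.json endpoint and its path-to-HTML layout are unofficial assumptions, so treat it as a sketch):

      ```python
      # Naive "RAG limited to devdocs": fetch a whole doc set, rank pages by
      # crude term counts, and hand the winners to the LLM as context. The
      # endpoint below is unofficial and may change without notice.
      import httpx

      DOCS_BASE = "https://documents.devdocs.io"  # assumption: unofficial endpoint

      def load_docset(slug: str) -> dict[str, str]:
          """Fetch one devdocs doc set as {page_path: html}, e.g. slug='python~3.12'."""
          resp = httpx.get(f"{DOCS_BASE}/{slug}/db.json", timeout=60)
          resp.raise_for_status()
          return resp.json()

      def top_pages(docset: dict[str, str], query: str, k: int = 3) -> list[str]:
          """Crude retrieval: rank pages by how often the query terms appear."""
          terms = query.lower().split()
          scored = sorted(
              docset.items(),
              key=lambda item: -sum(item[1].lower().count(t) for t in terms),
          )
          return [path for path, _html in scored[:k]]

      if __name__ == "__main__":
          docs = load_docset("python~3.12")
          print(top_pages(docs, "regex named groups"))
      ```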

      • ɔiƚoxɘup@infosec.pub · 9 hours ago

        More often than not, that's the kind of usage that helps me: having it talk around the perimeter of a subject until it stimulates me to think of the actual solution.

  • frog_brawler@lemmy.world · 2 days ago

    I feel bad for anyone who has to do support. I also feel bad for anyone who is forced into using Windows 11.

    If an AI tool lets support teams paste whatever random error Windows AI slop has created for end users and get the fix in a few seconds, rather than reading post after post of snark and bad theories on Stack Overflow, I'm actually 100% on board with that. That's not at all the problem I have with AI. I get no sense of satisfaction from wasting time or energy googling shit that the person who needs help can't google themselves. Also, like I said, I feel bad for folks who do nothing but support.

    • purplemonkeymad@programming.dev · 2 days ago

      I don’t have issues with people who ask for support.

      Now, when people use an LLM to draft messages to you and insist that the hallucination is correct, it's infuriating. You also get people asking you to implement policies that have never had someone sanity-check them.

      One was a policy that required support to first ID people over Teams before a password reset. How are they supposed to do that without being logged in? Never mind that we don't have a database of people's pictures, as we were external.

  • Lumelore (She/her)@lemmy.blahaj.zone · 2 days ago

    I also do it the old-fashioned way. There was a point in time when I did try to use LLMs, and I noticed how severely they had rotted my brain when I then attempted to problem-solve without one. It wasn't even a difficult problem, either. My brain has started to regenerate since swearing off AI, and I've also noticed that solving a problem without AI gives me more of those good happy brain chemicals. It's just so much nicer to think for yourself than to have the machine spit slop at you.

    • vaderaj@lemmy.world · 13 hours ago

      What I try to do is the classic approach: search the problem in an abstract manner, but with an LLM instead of a web search. Once it predicts an answer, I search the documentation to understand what the function is, its return type, etc…

      Finally, I write a version of the code that works. What an LLM helps with is making abstract searching easier, which also means I can't use it the same way for searching facts (which unfortunately many people do, and then argue that the output is the truth).
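
      For that verification step, even a check as simple as this catches a lot; rough sketch, with the module and function names at the bottom purely illustrative:

      ```python
      # Check an LLM-suggested function against the real library before
      # trusting it: does it exist, and what is its actual signature?
      import importlib
      import inspect

      def verify_suggestion(module_name: str, func_name: str) -> None:
          """Print the real signature and first doc line of a suggested function."""
          module = importlib.import_module(module_name)
          func = getattr(module, func_name, None)
          if func is None:
              print(f"{module_name}.{func_name} does not exist -- likely hallucinated")
              return
          print(f"{module_name}.{func_name}{inspect.signature(func)}")
          doc = inspect.getdoc(func) or "(no docstring)"
          print(doc.splitlines()[0])

      verify_suggestion("textwrap", "dedent")    # real function
      verify_suggestion("textwrap", "unindent")  # made up -> gets flagged
      ```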

      • bungusbread@lemmy.world · 9 hours ago

        You would be correct. They hallucinate anywhere from 5% to 50%+ of the time depending on task complexity, and there's literally no way to make them stop. Relying on them for information is genuinely akin to asking a paranoid schizophrenic living in your pocket what they think you should do.

      • Zos_Kia@jlai.lu · 2 days ago

        For me the sweet spot is decisions that are trivial once you have the information at hand, but where collecting the information is painful. Anything that requires strategic thinking I'll do myself, because what is your edge if you only make average strategic decisions?

  • originalucifer@moist.catsweat.com · 2 days ago

    Old-fashioned for me was combing the newsgroups hoping some poor schlub was in the same boat. Painful.

    No, I don't particularly think it's necessary for young folks to be tortured just because I was. If search tools are better at finding the same obscure reference, then it doesn't matter.

    It matters when they don't understand the solution… sometimes the journey to finding an answer is a training session all on its own in whatever context. If you're just handed an answer, you might not care why it works, which hinders growth. I still don't think we should force people to suffer just because we did. There has to be a happy medium.

    I'll use the LLM tools where they fit and offer efficiency, which is a fairly narrow set of cases for me.

    • DietCanesSauce@lemmy.world · 2 days ago

      This is exactly how I feel. I'm rather new to the industry, still in an entry-level position, but I've been tasked with building out an AI chatbot for our support team.

      I accepted the task in the hope that I can make the bot point people to references to read further, rather than giving them the answer they seek outright, so that they hopefully understand why something worked and something else didn't.

      My goal is to make it easier to find those obscure references rather than regurgitate the source in a 2000-word slop response.

      • originalucifer@moist.catsweat.com · 2 days ago

        "2000-word slop response."

        OMG yes. Half the battle is sorting the signal from the noise the LLM returns… most of which appears as a 'coloring', some attempt to humanize the response. Copilot spends more time telling me how awesome I am than spitting out the regex or direct link I want. STFU already.

        • DietCanesSauce@lemmy.world · 2 days ago

          Yeah, I've actually been pleasantly surprised by how the output can be structured by giving it additional instructions that specialize its role.

          Being able to control its verbosity to a certain degree means I can cut out the "You are correct, here are 20 bullet points to show you why". I can also kind of turn it into an internal documentation search engine that searches our support ticket db, codebase, and documentation articles at the same time.
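
          Roughly the kind of instruction block I mean, sketched here with the OpenAI Python SDK (the model name and prompt wording are placeholders, not our actual setup):

          ```python
          # Sketch: a role-specializing system prompt that caps verbosity and
          # forces citations. Model and prompt text are illustrative only.
          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          SYSTEM = (
              "You are an internal support-docs assistant. Answer in at most "
              "three sentences, with no praise and no preamble. Cite the doc "
              "path you used. If the docs don't cover it, say so; don't guess."
          )

          resp = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder model name
              messages=[
                  {"role": "system", "content": SYSTEM},
                  {"role": "user", "content": "Why does SSO loop on staging?"},
              ],
          )
          print(resp.choices[0].message.content)
          ```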

          I'm still very new to designing LLM agents and AI in general, but I'm glad my team and our department seem willing to do things right and roll it out slowly, even with pressure from the C-suite to ship it right away. I don't trust any LLM to do any particular task in my role, but it's decent at gathering information quickly, since that is literally what it's been designed to do.

          I just wish we'd stop getting posters generated by Copilot for company events. They creep me out, tbh.

  • Tartas1995@discuss.tchncs.de · 2 days ago

    I feel like either it's missing context, or it takes too much time to provide it context, or it would require sharing the whole code base, which is problematic for obvious reasons.