Hello all, long-time lurker, sometimes poster. In my line of work, some of my co-workers seem eager to turn to the clanker for an instant answer to any roadblock. I feel it's better to problem-solve the old-fashioned way: with some good old research and finding a blog that is not AI slop LOL. Do those of you in a support role feel any peer pressure to use LLMs?


omg yes. half the battle is sorting the signal from the noise returned by the llm… most of which appears as a ‘coloring’… some attempt to humanize the response. copilot spends more time telling me how awesome i am than spitting out the regex or direct link i want. STFU already.
Yeah, I have actually been pleasantly surprised by how the output can be structured by providing it with additional instructions that specialize its role.
Being able to control its verbosity to a certain degree means I can cut out the "You are correct, here are 20 bullet points to show you why." I can also kind of turn it into an internal documentation search engine that searches our support ticket db, codebase, and documentation articles at the same time.
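For anyone curious, the role and verbosity control is just a system prompt sent ahead of the user's question. A minimal sketch of what I mean (the function name, prompt wording, and bullet limit here are all made up for illustration, not our actual setup):

```python
# Hypothetical sketch: pin down an LLM's role and verbosity via a system
# prompt. The prompt text and helper name are invented for illustration.

def build_messages(question: str, max_bullets: int = 3) -> list[dict]:
    """Build a chat payload whose system prompt fixes role and verbosity."""
    system_prompt = (
        "You are an internal support-documentation assistant. "
        "Answer only from our ticket DB, codebase, and docs. "
        f"Reply in at most {max_bullets} bullet points. "
        "No praise, no preamble, no restating the question."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

# The returned list is the standard chat-message shape most
# chat-completions clients accept.
msgs = build_messages("Where is the retry logic for webhook deliveries?")
```

It won't stop the filler entirely, but in my experience it cuts most of it.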
Still very new to designing LLM agents and AI in general, but I am glad my team and our department seem willing to do things right and roll it out slowly, even with pressure from the C-suite to roll it out right away. I don't trust any LLM to do any particular task in my role, but it's decent at gathering information quickly, since that is literally what it's been designed to do.
I just wish we'd stop getting posters generated by Copilot for company events. They creep me out tbh.