Hello all, long-time lurker, sometimes poster. In my line of work some of my co-workers seem eager to turn to the clanker to get an instant answer to any roadblock. I feel it's better to problem-solve the old-fashioned way, with some good old research and finding a blog that is not AI slop LOL. Do those of you in a support role feel any peer pressure to use LLMs?


I also do it the old-fashioned way. There was a point in time where I did try to use LLMs, and I noticed they were severely rotting my brain: when I later attempted to problem-solve without one, I struggled. It wasn't even a difficult problem either. My brain has started to regenerate since swearing off AI, and I've also noticed that solving a problem without AI gives me more good happy brain chemicals. It's just so much nicer to think for yourself than to have the machine spit slop at you.
What I try to do is the classic approach of searching the problem in an abstract manner, but using an LLM instead of a web search. Once it predicts the output, I search the documentation to understand what the function is, its return type, etc. Finally I write a version of the code that works myself. What an LLM helps with is making abstract searching easier, which means I can't use it the same way for searching facts (which unfortunately many people do, and then argue that its output is the truth).
I find the LLMs are only good at the easy stuff anyway.
You would be correct. They hallucinate anywhere from 5% to 50%+ of the time depending on task complexity, and there's literally no way to make them stop. Relying on them for information is genuinely akin to asking a paranoid schizophrenic living in your pocket what they think you should do.
For me the sweet spot is decisions that are trivial once you have the information at hand, but where collecting the information is painful. Anything that requires strategic thinking I'll do myself, because what is your edge if you only make average strategic decisions?