

I find it more useful for doing large-scale code transformations and for delving into unfamiliar patterns, languages, or environments.
If I know a codebase head to toe and I'm proficient with that environment, it's going to offer little help. Especially if it's a highly specialized problem.
Since the SVB crash there have been layoffs left and right. I suspect AI is just an excuse for them.
You have to get familiar with the codebase at some point, and when you're unfamiliar, in my experience LLMs can help you understand it: you can paste large portions of code you don't really understand and ask for an analysis and explanation.
Not long ago I used one on assembly code. It would have taken me ages to decipher what it was doing on my own; the AI sped up the process.
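To give a flavor of what I mean (a made-up C snippet, not the actual code I was dealing with), paste something cryptic like this and ask what it does:

    #include <stdint.h>

    /* Kernighan's trick: each iteration clears the lowest set bit,
       so the loop runs once per set bit in v. */
    static int popcount32(uint32_t v) {
        int count = 0;
        while (v) {
            v &= v - 1;   /* clear the lowest set bit */
            count++;
        }
        return count;
    }

An LLM will usually name the idiom (here, bit counting) immediately, which is exactly the kind of decoding that's tedious to do by hand.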
But once you're very familiar with an established project you've worked on a lot, I don't even bother asking LLMs anything, because in my experience I come up with better answers faster.
At the end of the day, we must understand that an LLM is more or less a statistical autocomplete trained on a large dataset. If your solution isn't represented in that dataset, the thing is not going to come up with a creative solution. And it's not going to run a debugger on your code either, afaik.
When I use one, the questions I ask myself most before bothering are “is the solution likely to be in the training dataset?” and “is this a task that can be solved as a language problem?”