They're bullshit machines. They are specifically engineered to sound confident, with zero regard for correctness. That makes them inherently unsuitable for anything that actually needs to be reliable.
The problem is that LLMs aren't actually automating away any significant amount of work. They're just transforming it for an unlucky few (developing -> debugging slop) and fooling C-suites into laying people off to free up funds for the empty promises.
That's the neat part ...