LLMs are tools with vanishingly narrow legitimate and justifiable use cases. If they can prove truly effective and defensible in an application, I'm OK with them being used in targeted ways, much like any other specialised tool in a kit.
That said, I've yet to identify any use of LLMs today that clears the technical and ethical bar needed to justify them.
My experience to date is that the majority of 'AI' advocates are functionally slopvangelical LLM thumpers, and should be afforded the same respect and deference as anyone who adheres to a faith I don't share.
The percentage of sociopaths involved in creating a society should never be greater than zero.
Corporations are the only 'persons' that should be subject to capital punishment, though billionaires should be euthanised through taxation.