LLMs can’t perform any of those functions; the output of tools infected with them that claim to can intrinsically only ever be imprecise, and should never be trusted.
LLMs are a tool with vanishingly narrow legitimate and justifiable use cases. If they prove truly effective and defensible in an application, I’m OK with them being used in targeted ways, much like any other specialised tool in a kit.
That said, I’m yet to identify any use of LLMs today which clears my technical and ethical barriers to justify their use.
My experience to date is the majority of ‘AI’ advocates are functionally slopvangelical LLM thumpers, and should be afforded respect and deference equivalent to anyone who adheres to a faith I don’t share.
Financial obesity is an existential threat to any society that tolerates it, and needs to cease being celebrated, rewarded, and positioned as an aspirational goal.
Corporations are the only ‘persons’ which should be subjected to capital punishment, but billionaires should be euthanised through taxation.
I’ve tried, but I can’t unsee my initial identification of a filled nappy and used tampons.