no because this is literally in development, this isn’t some 60 year old mature tech
algorithms sure, nn for some narrow tasks yep great, not this bullshit though
there is already accessible academic research on LLM issues, the major concern being hallucinations, to the point where the word bailout is starting to make the rounds in the US from these very companies
the argument is whether you believe this is inherent or fixable, and a big focus is on the training
anyone listening to any ai company right now is a damn fool, given the obvious circular vendor bullshit going on
but you do you, if the market could be trusted to be sane i’d be timing it right now
regulations are often written in blood, ‘overreactions’ until they’re not