IIRC during COVID they did experiment with liquid oxygen exchange through the intestinal wall for patients whose lungs were so wrecked that normal ventilators weren't sufficient. So you could maybe engage in some anal breathing if you were to get a super-oxygenated fluid enema, but it's not something the unassisted human body can do. Or tries to do, for that matter.
It's exactly the kind of thing you'd expect to be the product of AI, but it actually came before AI. I think a lot of it was procedurally generated, though, using scripts to control 3D software and editing software, so different character models could be used in the same scenes and different scenes could be strung together to make each video.
I think a similar thing happens with those shovelware Android games. There are so many that are just the same game with (incredibly poorly done) asset swaps that I think they must make a game once and then automatically generate a thousand-plus variations on it.
They might be generated for the purpose of creating adversarial images that appear relatively normal to a human but confuse AI image recognition. You can see an additional layer of weird patterns on top of them too.
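For anyone curious, the textbook way to build that kind of image is gradient-based perturbation, e.g. the Fast Gradient Sign Method: each pixel is nudged by at most a small epsilon in whichever direction increases the classifier's loss, which is exactly the "layer of weird patterns" effect. A minimal sketch, assuming a PyTorch image classifier (the function name and epsilon value are mine, purely illustrative; I'm not claiming this is how any captcha vendor actually does it):

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    # Fast Gradient Sign Method: compute the loss gradient w.r.t. the
    # input pixels, then step each pixel by +/- epsilon in the direction
    # that increases the loss. The change is visually tiny but can flip
    # the model's prediction.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixels in valid range
```

The epsilon budget is what keeps the result looking "relatively normal to a human" while still confusing the recognizer.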
IMO this is not a particularly annoying captcha though. I've had some where the text instructions are so indirect and strangely phrased that it's not even clear what you're supposed to do.
That's good to hear. It continually amazes me how often the search bars in some pieces of software manage to be worse than Ctrl-F in a plaintext document.
Hallucinations are an intrinsic part of how LLMs work. OpenAI, literally the people with the most to lose if LLMs aren't useful, has admitted that hallucinations are a mathematical inevitability, not something that can be engineered around. On top of that, it's been shown that for things like mathematical proof finding, switching to more sophisticated models doesn't make them more accurate; it just makes their arguments more convincing.
Now, you might say "oh, but you can have a human in the loop to check the AI's work", but for programming tasks it's already been found that using LLMs makes programmers less productive. If a human needs to go over everything an AI generates, and reason about it anyway, that's not really saving time or effort. Now consider that as you make the LLM more complex, having it generate longer and more complicated blocks of text, its errors also become harder to detect. Is that not just shuffling around the necessary human brainpower for a task instead of reducing it?
So, in what field is this sort of thing useful? At one point I was hopeful that LLMs could be used in text summarization, but if I have to read the original text anyway to make sure that I haven't been fed some highly convincing falsehood then what is the point?
Currently I'm of the opinion that we might be able to use specialized LLMs as a heuristic to narrow the search tree for things like SAT solvers and answer set generators, but I don't have much optimism for other use cases.
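To unpack that a bit: in a backtracking SAT solver, the branching heuristic only decides which variable to try first, so a learned model can speed things up without ever compromising soundness; a bad suggestion wastes time but can't produce a wrong answer. A rough Python sketch of the idea (plain DPLL without unit propagation; pick_branch_var is a hypothetical stand-in where an LLM-derived scorer would plug in, nothing here is any real solver's API):

```python
def pick_branch_var(clauses, assignment):
    # Stand-in for the hypothetical LLM heuristic: score unassigned
    # variables and branch on the most promising one. Here we just pick
    # the variable occurring in the most clauses.
    counts = {}
    for clause in clauses:
        for lit in clause:
            var = abs(lit)
            if var not in assignment:
                counts[var] = counts.get(var, 0) + 1
    return max(counts, key=counts.get) if counts else None

def dpll(clauses, assignment=None):
    # Clauses are DIMACS-style lists of ints: 3 means x3, -3 means not-x3.
    # The heuristic only affects search order, never correctness.
    assignment = dict(assignment or {})
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None  # clause falsified: backtrack
        simplified.append(rest)
    if not simplified:
        return assignment  # every clause satisfied
    var = pick_branch_var(simplified, assignment)
    for value in (True, False):
        result = dpll(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None
```

For example, dpll([[1, -2], [2, 3], [-1, -3]]) returns a satisfying assignment such as {1: True, 3: False, 2: True}. Swapping a smarter scorer into pick_branch_var is the whole idea: the solver still verifies everything, so the model only has to be useful on average, not correct.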
It wouldn't be the first time people talked like that and it won't be the last.
Hell, it wouldn't be the first time Half-Life 3 was in development. They went through something like four different concepts, and the gun-upgrading mechanic from one of them ended up in Half-Life: Alyx.
If you're a turtle or a sea cucumber maybe.