The term "reasoning model" is as gaslighting a marketing term as "hallucination". When an LLM is "Reasoning" it is just running the model multiple times. As this report implies, using more tokens appears to increase the probability of producing a factually accurate response, but the AI is not "reasoning", and the "steps" of it "thinking" are just bullshit approximations.
I think linkrot is happening much faster here than on reddit, even if just counting deleted posts.
Are you sure? Are lemm.ee posts showing as deleted for you? It looks like copies of anything posted to lemm.ee still exist on the instances it federated with. Try this link: !animation@lemm.ee. I am pretty sure it should still work on your instance.
I know he's just selling the platform to AI companies, but it's an odd take considering they've been moving away from being a message board and towards being just another content feed for years now.
It uses KDE Plasma (very Windows-ey), and it is "immutable", which means you basically can't break it.
Someone else said Kubuntu, which will look the same aesthetically and is also a good choice, but if you want to start with something that "just works", I recommend an immutable distro.
You've got the right idea: uploading content on the old internet was (largely) about putting yourself out there, hoping to find like-minded people to connect with. All those "quirky, weird" websites people get nostalgic about weren't seeking clout; they were seeking connection.
I think it's healthy to deprogram ourselves from the idea that "followers" are the goal of creating content. If an influencer is making "art" with the express purpose of getting followers that they can leverage into marketing deals, then are they really making art? If the goal is to make money then... that's just a job, imo.
I love the argument that AI will "fix global warming". We already know how to fix global warming! We've known for decades! Global warming is not some incalculable mystery!
"EEE" doesn't really make sense in this context, and even if there was some way for Meta to affect non Meta-owned instances- ActivityPub is an open protocol and Meta is allowed to use it however they want.
The term "reasoning model" is as gaslighting a marketing term as "hallucination". When an LLM is "Reasoning" it is just running the model multiple times. As this report implies, using more tokens appears to increase the probability of producing a factually accurate response, but the AI is not "reasoning", and the "steps" of it "thinking" are just bullshit approximations.