Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
I’m no fan of Greg Egan’s fiction but I am a fan of him pissing off rats:
https://www.lesswrong.com/posts/EbqJfCz9qvfptNbCQ/an-angry-review-of-greg-egan-s-didicosm
also
https://www.lesswrong.com/posts/hx5EkHFH5hGzngZDs/comment-on-death-and-the-gorgon (shared in a comment to the above)
Here’s the short story that pissed Zack off: https://asimovs.com/wp-content/uploads/2025/03/DeathGorgon_Egan.pdf
Friend of Ziz and cofounder of the ‘rationalist fleet’ pops up out of the woodwork trying to clear Ziz’s name
I find myself noticing things rather detached from the typical Ziz funny business more strongly than I notice the stuff about that whole situation.
“I’m Gwen Danielson, a neuroscientist and bioengineer, who decided as a child that I would end Death (and bring people back if I could) and that I would become a dragon and help generally facilitate a fantastical transhumanist future.”
“I dream of non-Euclidean geometries, of countless worlds visible and accessible in the daytime sky, of competent infrastructure, of soul forges continually working to bring back the dead… I dream of reaching through warps in the spacetime fabric to save the dying across time”
“Signed, the dragon of creation Creatrei (cree-AH-trey) also known as Gwen Danielson or as Char and Astria (when referring to my hemis as distinct individuals)”
The reactions are fun. “This post is not actually doing a good job of making me trust you and think this conversation is safe to have[1], and I notice that as I am saying this that I am afraid that this will now somehow result in someone trying to murder me in my sleep”
Ah yup, that is definitely the type of person who’s deeply attracted to cults.
I’m Gwen Danielson, a neuroscientist and bioengineer, who decided as a child that I would end Death
thiel jumpscare
The back-and-forth between Gwen and LessWrong commenters is getting spicy. This definitely deserves a top-level post on SneerClub.
Habryka’s all, “Dammit, why do you have to come here and remind everyone where the Zizians came from?”
EDIT: This person also seems to have no concept of the finality of death, which might explain why the Zizians were so murdery.
Semi-topical, there’s a new Local 58 and even the moon demon hates AI, I guess.
Two tech-related links for today, both relating to fascism.
First, tante has a new blogpost about AI being overtly fascist in nature, which he’s also posted to LinkedIn, seemingly for kicks. (Found on red site, too)
Second, the Nazi laptop company has sent a second pre-release laptop to DHH, showing they have not changed at all since they went full fash six months ago:

Tante nails it again. No notes.
That he has a LinkedIn finally explains how I first heard of him. A liberal but very startup/hustle culture brained colleague shared an anti-blockchain thing on the Slack. Always wondered how she stumbled on a comm(o|u)nist tech critic, but it must’ve been LI.
Is there a more generalized form of “weird hill to die on but at least you’re dead”? Because a new-to-me way for too-rich people to end their gullible lives has emerged https://www.technologyreview.com/2026/03/30/1134780/r3-bio-brainless-human-clones-full-body-replacement-john-schloendorn-aging-longevity/
https://russwilcoxdata.substack.com/p/and-the-alignment-problem-what-chinas
In June 2025, Zhao Tingyang gave a talk at Tsinghua’s Fangtang Forum. The edited transcript ran in The Paper on July 4 under the title “人工智能的伦理与思维之限” (The Ethical and Thinking Limits of AI). Near the end, Zhao said this:
“What requires more reflection is that attempting to ‘align’ AI with human nature and values actually contains a risk of human species suicide. Human nature is selfish, greedy, and cruel. Humans are the most dangerous biological species. Almost all religions demand the restraint of human desire; this is no accident. AI aligned with human values may well become a dangerous subject by imitating humans. Originally, AI does not possess the selfish genes of carbon-based life, so AI is actually closer to the legendary ‘human nature is fundamentally good’ kind of existence, whereas human nature is not ‘fundamentally good.’”

The alignment paradigm treats human values as the target AI should conform to. Zhao is arguing the target is the danger. An AI aligned to human values inherits the specific features of human judgment that Zhao says have produced the record of human harm. The paradigm is not incomplete. It is pointed the wrong way.
Zhao’s argument has developed across CASS, The Paper, and Wenhua Zongheng from late 2022 through 2025, from a provocative aside into a sustained critique of the alignment paradigm. In the same period, the English-language alignment and AI ethics literature produced no substantive engagement. No citations. No rebuttal. No naming. Zhao is a member of the Chinese Academy of Social Sciences Institute of Philosophy, author of the Tianxia framework, and one of the most cited philosophers working in Chinese today.
I need to think on this a little more, wasn’t on my radar.
In the same period, the English-language alignment and AI ethics literature produced no substantive engagement. No citations. No rebuttal.
Wow it’s almost like alignment and AI ethics studies is less a serious academic field and more like a prank capital likes to play on consumers.
But I also think Zhao Tingyang’s take that alignment will make AI evil because people are evil falls too much into the the-people-deserve-to-be-disempowered totalitarian state funny business side of things to be especially influential down these parts.
To be fair, while I’m not familiar with the discourse in China, I know a lot of people (rightly) consider “alignment” as a framing to be a red flag for cranks and rats. It’s not that surprising that this attitude hasn’t been getting much recognition when the marketing departments of AI companies have been more engaged on that subject than serious academics.
Habryka doesn’t have time to write all the crazy shit he’s mulling on, so he offers a summary.
https://www.lesswrong.com/posts/MqgwHJ93pJpaeHXs6/posts-i-don-t-have-time-to-write
Do you enjoy living in a society that takes fire safety seriously? Sucks to be you, I guess:
- Fire codes are the root of all evil
How about we just make all the mosquito nets flammable. That’s effective altruism!
Also Switzerland is a libertarian paradise apparently.
I think building codes and zoning reform are good topics to get into in rich English-speaking countries but you have to 1) learn from actual experts not x.com/wiseAss1488, and 2) engage in local politics and policy and not just post to nerds around the world.
Fire codes are the root of all evil
Ah somebody got told by their landlord not to do something. (I remember our student housing landlord (a big org) was regularly claiming ‘fire codes’ as an excuse to get rid of stuff in semi public areas. The actual fire codes didn’t demand this btw, it was just the excuse they used to stop students from filling everything with random trash).
The fire code thing really is an excellent example of LessWrong Brain. Fire truck drivers insist on needlessly large trucks (no citation) which makes roads 30% wider than they would otherwise be (no citation) which has “probably” “non-trivially” contributed to larger cars (no citation) leading to enough additional road fatalities to cancel out the lives saved by stricter fire codes (no citation).
The LessWrong Brain argument starts with a deliberately contrarian conclusion and proves it with a Rube Goldberg chain of logical syllogisms. Of course, citations are strictly optional, and they are free to misinterpret them as they see fit. The only real standard of each claim is “looks good to me”, but you are supposed to be impressed that they managed to string a dozen of them together to reveal some shocking, deep truth of the world that nobody else knows about. The AI 2027 nonsense is an infamous example of this.
He uses the word “fermi” which is cult jargon based on Fermi estimation, a.k.a. guessing shit with back-of-the-envelope calculations. Not exactly what you want if you want to convince people to reform fire codes, especially if you have zero citations for anything.
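For anyone who hasn’t met the genre, a Fermi estimate really is just this: multiply a chain of guesses and call the product a conclusion. A sketch with entirely made-up numbers, loosely in the spirit of the fire-truck chain above (none of these figures come from anywhere):

```python
# A Fermi estimate, LessWrong style: multiply a chain of rough guesses
# and present the product as a headline number. Every value below is
# invented for illustration, exactly as the genre demands.

def fermi_estimate(factors):
    """Multiply labelled guesses together into one 'conclusion'."""
    result = 1.0
    for _label, value in factors:
        result *= value
    return result

guesses = [
    ("roads made wider for fire trucks", 1.3),         # "30% wider" (no citation)
    ("share of widening driving bigger cars", 0.5),    # "probably" (no citation)
    ("extra road deaths per unit of car bloat", 0.2),  # "non-trivially" (no citation)
]

headline = fermi_estimate(guesses)  # ≈ 0.13, precision entirely unearned
```

The arithmetic is trivially correct; it’s the inputs that are vibes, which is the whole sneer.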
I guess people just aren’t rational enough, and the only reason the fire codes are so irrational is because people are emotional about fire codes. Firefighters are apparently revered as heroes, when it is the LWers who should be the heroes. After all, firefighters merely save people from fires, while LWers buy multimillion dollar mansions to talk about saving quadrillions of hypothetical people from hypothetical basilisks!
rationalism is when i pull five numbers out of my ass and multiply them together
Yeah but never pull 9 numbers out of your ass, that would make you too smart and they will tell the gov to drone strike you.
Unfortunately there are a few things that make courts pretty tricky to implement in practice for things like the rationality, AI safety and EA communities. Badly implemented courts also can just make things worse by creating a clear target for attack and pressure. Seems very tricky, but probably we should have more courts (or maybe not, I would need to write the post to figure it out).
yesss yesss lesswrong people’s tribunals and struggle sessions let’s gooooooooo
in b4 “Committee for AI Safety” seizes control and executes people who are too smart with a guillotine.
might LWers be the real Pol Potists? Read on to find out!
They develop a special g-meter to find people who could potentially create more efficient gpus and send them to the gulags.
But despite well-documented claims to genius IQs, somehow the billionaire set ends up not on the chopping block.
Looks like Mythos didn’t catch this one:
Anthropic secretly installs spyware when you install Claude Desktop
Whoopsie!
It’s fine, spyware is only a risk when it’s bad people’s spyware. It’s totally fine when it’s Anthropic™-approved spyware!
As for Mythos catching things, maybe they should have used Mythos on their very own Claude Code, considering that it has hilariously obvious security exploits, such as this one, which inserts an arbitrary string into a shell command. Actually, never mind, I don’t see anything wrong here; maybe we should burn another $20k in electricity running Mythos on it again to find out.
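For the uninitiated, “inserts an arbitrary string into a shell command” is the textbook command-injection bug. A generic Python sketch of the shape of the problem (illustrative only, not the actual Claude Code bug; the filenames and commands are made up):

```python
# Classic command injection: untrusted text pasted into a shell string
# gets parsed by the shell, so metacharacters become new commands.
# Generic illustration only -- not the actual Claude Code exploit.
import shlex

untrusted = "README.md; echo pwned"

# Dangerous: the shell sees TWO commands, "cat README.md" and "echo pwned".
dangerous = f"cat {untrusted}"

# Safer: pass an argv list (no shell parsing), so the whole string is
# treated as one strange filename that merely fails to exist.
safe_argv = ["cat", untrusted]

# Or quote it if a single shell string is unavoidable.
quoted = f"cat {shlex.quote(untrusted)}"
```

Replace `echo pwned` with anything you like and you have the exploit; the fix has been known for decades, which is rather the point.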
anthropic is the most moral ai company in the universe
Well it does have the secret “Any attempt to arrest a senior officer of OCP results in shutdown” directive.
guys

If it’s “agentic,” doesn’t that imply it smokes weed for you
I’m sorry, I think I need to believe that this is taking the piss in order to be able to function. It can’t be real (it’s definitely real).
Oh God, I read their FAQ and it looks like the whole concept is to gamify smoking weed, because if there’s one problem with weed, it’s that it’s not addictive enough on its own. I mean, the actual concept is to try and smash enough hip tech buzzwords together to extract some amount of the dwindling venture capital still sloshing around the valley, but if it actually happens, the thing it’s going to do is take all the addictionware tactics that app developers have honed and bring them to bear on promoting drug use.
This is a serious blow to coolness, from which not even drugs will easily recover.
Wake me up when there’s an agentic butt plug
You made me look and now you all have to know there’s a library for butt plugs (written in Rust) that has LLM generated code in it:
https://github.com/buttplugio/buttplug#inclusion-of-llm-generated-code
This is how I learn that buttplug.io has fallen
I hurt myself today… to see if I still feel…
I have mixed feelings about speaking things into existence
Considering the amount of weird hentai on the internet that cannot end well.

Agentic Ripz is my new jam band.
New random positivity thread:
Reminder to update your GitHub settings if you don’t want AI to train on it, before 24 April.
Habryka @ LW takes a break from reinventing Western civilization from first Rat principles to advocate a RETVRN to incandescent lighting:
Eventually, in most of the western world outside of the US, incandescent lightbulbs were literally banned to promote energy saving policies.
This was the greatest uglification in history. Within two decades, much of the world that was previously filled with beautiful natural-feeling light started feeling alien, slightly off, and uncomfortable, and societal stigma around energy-saving policies prevented people from really doing anything about it.
Who TF was using LED lights for indoor lighting in the 1990s? Compact fluorescents were the lightbulb replacement in the oughties.
And how TF do you write that post without using the phrase “sensory sensitivity” and citing some women who know they have autism? Once you know you are more sensitive to your environment than allistics, you can start to experiment with interventions.
Dramatic fascistic “RETVRN” language and focus on aesthetics aside, my wife and I actually dug into some of this lighting quality stuff a while ago, and while our very good friend here does a poor job explaining it, there is a definite difference between normal LEDs and incandescent or natural light. LED spectra are fascinating: big spikes at a couple of wavelengths and nothing in between. In my experience with switching to the fancier high-CRI LEDs the difference is pretty minimal. Feels like a possible case where you don’t notice it, but your brain does. For my wife it seems to have helped reduce the incidence and severity of her crippling migraines, which is obviously more impactful. I don’t think I’d say it beautified the space or brought us back to the halcyon days of our glorious past, but that’s been huge for us all the same. The plural of anecdote is not data, as the cliché goes, but there’s not nothing here.
This was the greatest uglification in history.

Seems like a market opportunity for some special lights/glass that recreates the natural-feeling light they want. (I have some lower-light LEDs in older-style lamps and I’m having, for me, nice and cosy lighting. Only issue seems to be me getting older and my eyes getting worse with age.)
jesus fucking christ this is insufferable, these guys have been masking off for so long now that they’re scraping the bottom of the barrel to find new things to do fascist signalling about
I can’t imagine the Mouse being happy about this cameo.
damn i wish i had confidence of a mediocre techbro. twitter thread suggests that there’s four copyright infringements just in these two images
e: still can’t get fingers right, lol
looks up, shrugs Everybody involved here kind of sucks, best of luck to all of them. looks back down
When we’re all too tired for “let them fight”.

For the uninformed (like myself), the odd fox out is a character from Zootopia (pic related), which is a Disney movie.
Bold move to steal from The Mouse in broad daylight, isn’t it?
You can see also his rabbit counterpart from Zootopia in a princess dress in the background of the first picture!
I assume that’s more because they used AI to lift Zootopia’s art style wholesale, and so that’s just how rabbits are now.
Ex-CEO, ex-CFO of bankrupt AI company iLearningEngines charged with fraud - Reuters, 17 April 2026
Prosecutors said iLearning marketed itself as an artificial intelligence-driven digital education company with an “out-of-the-box AI platform,” and claimed to earn revenue mainly by selling licenses for its educational and training platforms to customers, including healthcare companies and schools.
According to the indictment, the defendants used forged sham contracts to make it seem that iLearning’s customers were real, and used “round trip” transfers of investor and lender funds – meaning they sent money to purported customers, who then returned it to iLearning – to manufacture revenue.
At least 90% of iLearning’s $421 million of reported revenue in 2023 was fabricated, the indictment said.
I think they called this wash trading in cryptoland.
This has been getting posted repeatedly on a reddit sub I follow and it’s funny that we’re still finding Harris fans who somehow didn’t catch on for all these years that the dude is just straight up racist and hates Muslims on an individual basis. The fact that he was probably the most commonly mentioned podcaster on the slate star codex sub is a sign!
I’ll steal my general remark from one of the threads on DTG:
Sam, if you’re allowed to call the mayor an Islamist because his wife liked some posts on social media, can we call you a racist for having a notorious racist on your podcast to yammer about the woke menace?
Why? So it can be firebombed?
(don’t fire-bomb residences people)
also it’s hilarious that a 5% local tax surcharge on a place you don’t even live in is considered Stalinism