

This actually gives me hope that we can poison the datasets pertaining to any sufficiently narrow technical topic.


“Scientists invented a fake disease. AI told people it was real”
https://www.nature.com/articles/d41586-026-01100-y
But if, in the past 18 months, you typed those symptoms into a range of popular chatbots and asked what was wrong with you, you might have got an odd answer: bixonimania.
The condition doesn’t appear in the standard medical literature — because it doesn’t exist. It’s the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. “I wanted to see if I can create a medical condition that did not exist in the database,” she says.
The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.


LLM capabilities have not improved at all in terms of producing meaningful science in the last year or two, but their ability to produce meaningless science that looks meaningful has wildly improved. I am concerned that this will present serious problems for the future of science as it becomes impossible to find the actual science in a sea of AI slop being submitted to journals.
https://www.reddit.com/r/Physics/comments/1s19uru/gpt_vs_phd_part_ii_a_viewer_reached_out_with_a/


I aired some Reviewer #2 grievances in the bsky comments:
https://bsky.app/profile/ronanfarrow.bsky.social/post/3mitapp7j2s2c
“Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI’s ChatGPT “to get to the edge of what’s known in quantum physics.””
As a physicist, I have never pressed F to doubt harder.
“In 2022, researchers at a pharmaceutical company tested whether a drug-discovery model could be used to find new toxins; within a few hours, it had suggested forty thousand deadly chemical-warfare agents.” To the best of my knowledge, these suggestions were never evaluated by any other researchers.
(The original paper was published as a “comment”: https://www.nature.com/articles/s42256-022-00465-9)
Similar claims of AI-facilitated discoveries have turned out to be overblown in other fields.
https://pubs.acs.org/doi/pdf/10.1021/acs.chemmater.4c00643
“In a 2025 study, ChatGPT passed the test more reliably than actual humans did.”
If this is referring to Jones and Bergen’s “Large Language Models Pass the Turing Test”, that’s a preprint (arXiv:2503.23674) that has yet to pass peer review over a year after its posting.
“A classic hypothetical scenario in alignment research involves a contest of wills between a human and a high-powered A.I. In such a contest, researchers usually argue, the A.I. would surely win”
Which researchers?
(Hint: Eliezer Yudkowsky is not a researcher.)
AI: “I will convince you to let me out of this box”
Humanity (wringing hands): “Oh, where is our savior? Who will stand fast in the face of all entreaties?”
Bartleby the Scrivener: hello
“…a hub of the effective-altruism movement whose commitments included supporting the distribution of mosquito nets to the global poor.”
Phrasing like this subtly underplays how the (to put it briefly) weird people were part of EA all along.
https://repository.uantwerpen.be/docman/irua/371b9dmotoM74
“In late 2022, four computer scientists published a paper motivated in part by concerns about “deceptive alignment,” … one of several A.I. scenarios that sound like science fiction—but, under certain experimental conditions, it’s already happening.”
Barrett et al.'s arXiv:2206.08966? AFAIK, that was never peer-reviewed either; “posted” is not the same as “published”. And claims in this area are rife with criti-hype:
https://pivot-to-ai.com/2025/09/18/openai-fights-the-evil-scheming-ai-which-doesnt-exist-yet/
Oh, right, the “Future of Life Institute”. Pepperidge Farm remembers:
“In January 2023, Swedish magazine Expo reported that the FLI had offered a grant of $100,000 to a foundation set up by Nya Dagbladet, a Swedish far-right online newspaper.”
https://en.wikipedia.org/wiki/Future_of_Life_Institute#Activism
“Tegmark also rejected any suggestion that nepotism could have played a part in the grant offer being made, given that his brother, Swedish journalist Per Shapiro … has written articles for the site in the past.”
https://www.vice.com/en/article/future-of-life-institute-max-tegmark-elon-musk/


In practical terms, what can they do? Add instructions that say “You will not generate spaghetti code that will humiliate us when real programmers see it”? Perhaps in all caps?
This is what their organization, after tremendous expense, is capable of producing. I don’t think that bodes well for their prospects of improvement.


Truly a tool for the .COM era


DoS script
Part of me reads that and still thinks, “Oh, you mean like AUTOEXEC.BAT?”


A pretty staid-sounding law firm warns that the AI industry is partying like it’s 2007:
Lenders who originated data center loans […] have begun pooling those loans and selling tranches to asset managers and pension funds, spreading risk well beyond the original lending institutions.
Also of note:
The most basic litigation risk in AI infrastructure finance is that the revenues generated by the sector may prove insufficient to service the fixed obligations incurred to build it. The industry brought in approximately $60 billion in revenue in 2025 against roughly $400 billion in capital expenditure.
(Via.)


The thread for collecting HPMoR sneers linked to this timeline, but it’s paywalled now:
https://www.vox.com/culture/23622610/jk-rowling-transphobic-statements-timeline-history-controversy


To be clear, I don’t care about Yud picking the fandom he did at the time (apart from the cheapness of “playing on easy mode” and the blatant attempt to ride popularity for propagating his cult shit). What strikes me is the silence during the time when other people are most definitely reacting:


I have also occasionally been tempted to try and get a Goncharov thing going, where everyone collectively recalls that Tommy Berry and the Forevernight Forest got them into reading.
It was just after an ordinary afternoon tea, on an ordinary Sunday, the first cold day of autumn, when Tommy Berry discovered that Time was no longer adding up in the ordinary way.
Tommy had only managed to drink one cup of very indifferently warm tea, and eat the last plain saltine from the bottom of the bag. Everything else had been gobbled up or drunk down by his uncle Myrvold, who was rotund as a boulder and about as kind, and his step-aunt Meredith, who was thin as a snake and considerably more mean. So, yes, it was altogether quite the ordinary teatime.
Tommy had a secret, you see. In fact, he had two, a big one that he knew about and an even bigger one that was just about to fall on top of him.
His first secret was that he had a library card. He had stolen an adult’s library card. Or that is how Uncle Myrvold and Step-Aunt Meredith would have described it, if they knew.
Carruthers, who lived down the end of the lane and always yelled at Tommy to mind his hedges, and who let his dog chase Tommy and the other children, had made a big show of throwing his library card into the roadway because, he said, the library was full of immoral books. A car had then driven over it, and then a whole lorry, and then Tommy had snatched it up. Something told him that anything Carruthers hated, he should save, and anything that Myrvold and Meredith would be angry about, he should hold onto.
Tommy had heard adults say that something was “burning a hole in my pocket”. He wondered if this was what that meant. It felt like he was carrying a hot coal in the pocket of his threadbare corduroy jacket, and no one could know.
The library had a new machine. He had seen adults use it. You could go up to it, wave a book under a red laser light like at the grocery store, then show the machine your card, and it would check out the book for you. Tommy made a plan. He would slip out of the house just after tea. He would walk the five blocks to the library. He would find a book that Myrvold and Meredith and Carruthers and every other grownup would not want him to read. He would wait until the librarian was busy dealing with a whole queue of people. And then he would use the machine.
Everything went perfectly until the very last step.
There was a girl at the machine.
He had a big fat book in his hands, a book he had picked because it had “Murder” in the title and would last a long time, and there was a girl in front of him at the library machine.
“Murder at Wizard University?” she asked him, right to his face, like they had already been introduced, like they had known each other since nursery school. “That’s not a book for little kids.” His stomach dropped, right into his feet. He didn’t know that a stomach could do such a thing.
And then she tilted the stack of books she was carrying toward him, showing him the titles on their spines. “Neither are these,” she said.
And she pulled out her own library card. It was black, like a rectangle cut out of the midnight sky.
That’s all I wrote in the thread that prompted me to take a stab. Oh, I think I had decided that the girl’s name is Elfriede? And the principal of magic school is nonbinary.
“Why, of course there’s a potion for changing,” said Professor Shade. “That is what potions do. I don’t know where I’d be without it. It is ever so helpful to reach the top shelf, but on the other hand, men’s fashions haven’t been truly swank in a hundred fifty years.”


Being the kind of writer I am, whenever this comes up I am tempted to suggest ways it could have been done better. But, first, I am not glazing the work of Rowling, even indirectly, no way, no how. Fuck her for all the pain she has wrought, and fuck the whole LessWrong crew for tacitly accepting it. Second, HPMoR was cult shit all along, not meant to teach science but to sow distrust of scientists under the glossy sheen of being able to name the six quarks.


Once you commit to the idea that only your main characters have ever tried to study magic scientifically, you’re locked in to making all the rest of the magical world into dullards. (Really, no other eleven-year-olds were ever into computer programming, chemistry sets, exotic marine animals, outer space, or dinosaurs?) Or, to look at it another way, the only way you can find the premise plausible is if you’re already inclined to dismiss most of humanity as “NPCs”.


It even used to have a top-quality news section, BuzzFeed News!
Azeen Ghorayshi made her name reporting at BuzzFeed News about sexual harassment in science… and then she became one of the New York Times’s professional transphobes.


more like requie-SCAT, am i right


Carl Bergstrom notes a publicity stunt by Anthropic:
“The AI Grad Student”: A Harvard professor describes working with Claude.
Early on, he describes misconduct that would cause any student to be terminated: “It faked results, hoping I wouldn’t notice.”
But he ends the essay with “Now I’m doing 100% of my research with LLMs”.
Am I losing my mind?
Hang around for the “trust me bro, I saw it on YouTube” guy in the comments.


44 comments in support, 2 in opposition, discussion closed early under the “there’s a snowball’s chance in Hell of the situation changing from here” clause:
https://en.wikipedia.org/wiki/Wikipedia:Writing_articles_with_large_language_models/RfC
The only people I trust as little as I trust the owners of corporate social media are the politicians who have decided to cash in on the moment by “regulating” them. I mean, here in progressive Massachusetts, the state house of representatives just this week passed a bill that, depending on the whims of the Attorney General, would require awful.systems to verify the ages of its users by gathering their government-issued IDs or biometrics. We are, you see, a “public website, online service, online application or mobile application that displays content primarily generated by users and allows users to create, share and view user-generated content with other users”. And so we would have to “implement an age assurance or verification system to determine whether a current or prospective user on the social media platform” is 16 or older. (Or 14 or 15 with parental consent, but your humble mods lack the resources to parse divorce laws in all localities worldwide, sort out issues of disputed guardianship, etc., etc.) The meaning of what “practicable” age verification is supposed to be would depend upon regulations that the Attorney General has yet to write.
So, yeah, as an old-school listserv nerd who had the I am not on Facebook T-shirt 15 years ago, I don’t trust any of these people.