French lawmakers Arthur Delaporte and Eric Bothorel alerted prosecutors on January 2 after thousands of non-consensual sexually explicit deepfakes were generated by Grok and shared on X. The Paris prosecutor’s office said the reports were added to an existing investigation into X, noting the offense carries penalties of up to two years in prison and a €60,000 fine.
Meta made headlines for trying to poach elite researchers from competitors with offers of $100mn sign-on bonuses. “The future will say whether that was a good idea or not,” LeCun says, deadpan.
LeCun calls Wang, who was hired to lead the organisation, “young” and “inexperienced”.
“He learns fast, he knows what he doesn’t know . . . There’s no experience with research or how you practise research, how you do it. Or what would be attractive or repulsive to a researcher.”
Wang also became LeCun’s manager. I ask LeCun how he felt about this shift in hierarchy. He initially brushes it off, saying he’s used to working with young people. “The average age of a Facebook engineer at the time was 27. I was twice the age of the average engineer.”
But those 27-year-olds weren’t telling him what to do, I point out.
“Alex [Wang] isn’t telling me what to do either,” he says. “You don’t tell a researcher what to do. You certainly don’t tell a researcher like me what to do.”
Or, maybe nobody /has/ to tell a researcher what to do, especially one like him, once he's already internalized the ideology of his masters.
The mods were heavily downvoted and criticized for pulling the rug out from under the community, as well as for simultaneously moderating pro-A.I.-relationship subs. One mod admitted:
"(I do mod on r/aipartners, which is not a pro-sub. Anyone who posts there should expect debate, pushback, or criticism on what you post, as that is allowed, but it doesn’t allow personal attacks or blanket comments, which applies to both pro and anti AI members. Calling people delusional wouldn’t be allowed in the same way saying that ‘all men are X’ or whatever wouldn’t. It’s focused more on a sociological issues, and we try to keep it from devolving into attacks.)"
A user, heavily upvoted, replied:
You’re a fucking mod on ai partners? Are you fucking kidding me?
It goes on and on like this. As of now, the post has amassed 343 comments: mostly angry subscribers of the sub, while a few users from pro-A.I. subreddits keep praising the mods. Most users agree that brigading has to stop, but don't understand why that means a sub called COGSUCKERS should suddenly be neutral toward, or accepting of, LLM relationships. Bear in mind that r/aipartners, which one of the mods also moderates, does not allow calling such relationships "delusional".
The most upvoted comments in this shitstorm:
Internet Comment Etiquette with Erik just got off YouTube probation / timeout from when YouTube's moderation AI flagged a decade-old video for having Russian parkour.
He celebrated by posting the below under a pipe bomb video.
Hey, this is my son. Stop making fun of his school project. At least he worked hard on it. unlike all you little fucks using AI to write essays about books you don't know how to read. So you can go use AI to get ahead in the workforce until your AI manager fires you for sexually harassing the AI secretary. And then your AI health insurance gets cut off so you die sick and alone in the arms of your AI fuck butler who then immediately cremates you and compresses your ashes into bricks to build more AI data centers. The only way anyone will ever know you existed will be the dozens of AI Studio Ghibli photos you've made of yourself in a vain attempt to be included. But all you've accomplished is making the price of my RAM go up for a year. You know, just because something is inevitable doesn't mean it can't be molded by insults and mockery. And if you depend on AI and its current state for things like moderation, well then fuck you. Also, hey, nice pipe bomb, bro.
Thanks for posting. The author is provocative for sure, but I found he also wrote a similar polemic about veganism, kinda hard for me to situate it. Might fetch one of his volumes from the stacks during a slow week, probably would get my name put on a list though.
Waterfox lore: it got acquired by System1, of "Guy Who Runs Three Companies Called Fidelity But Not The Fidelity You Know Probably Doesn’t Care That There’s Already A Company Called System1 That Does That Same Thing As The System1 His SPAC Is Buying. Just saying." fame, and then went private again; presumably they bought it back after the stock predictably tanked. Subprime adtech is a strange place.
IMHO should just bring back iceweasel, but what do i know.
Backgammon is "easier" than chess or Go, but it has dice, so it has not (yet) been completely solved the way checkers has. I think only the endgame ("bearing off") has been solved. The SOTA backgammon AI, which uses NNs, is better than expert humans, but you can still beat it if you get lucky. XG is notable because if you ever watch high-stakes backgammon on YouTube, they'll run XG side by side to show when human players make blunders. That's how I learned about it, anyway.
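The dice are the whole story here: in chess you minimax over opponent moves, but in backgammon every ply also has a chance node you have to average over, which is why the full game resists exact solution while the pure-race bear-off phase is tabulated. A minimal sketch of that averaging, using a made-up toy race game (a position is just "pips left", a roll subtracts both dice; this is NOT real backgammon and the function name is my own):

```python
# Toy expectimax-style chance node: average over the 21 distinct rolls of two
# dice, weighted 1/36 for doubles and 2/36 otherwise. Hypothetical toy game,
# not real backgammon rules.
from functools import lru_cache

# The 21 distinct unordered rolls (a <= b) of two six-sided dice.
ROLLS = [(a, b) for a in range(1, 7) for b in range(a, 7)]

@lru_cache(maxsize=None)
def expected_rolls_to_finish(pips):
    """Expected number of rolls to reach 0 pips in the toy race game."""
    if pips <= 0:
        return 0.0
    total = 0.0
    for a, b in ROLLS:
        weight = 1 if a == b else 2  # out of the 36 equally likely (a, b) pairs
        total += weight * expected_rolls_to_finish(pips - a - b)
    return 1.0 + total / 36.0
```

Real bear-off databases do essentially this over actual board states (with doubles counting four moves), which is why that phase can be exactly solved even though the game as a whole isn't.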
After 25 years, it is time for us to pass the torch to someone else. Travis Kalanick, yes the Uber founder, has purchased Gammonsite and eXtreme Gammon and will take over our backgammon products (he has a message to the community below)
In mid-November, I agreed to an experiment. Anthropic had tested a vending machine powered by its Claude AI model in its own offices and asked whether we’d like to be the first outsiders to try a newer, supposedly smarter version.
What's that word for doing the same thing and expecting different results?
Miller’s team also recently used software from startup StackAI to develop an AI-powered app that writes letters of recommendation, saving faculty members time. Faculty type basic details about a student who has requested a letter, such as their grades and accomplishments, and the app writes a draft of the full letter.
AI is “one of those things that you might worry could dehumanize the process of writing recommendation letters, but faculty also say that process [of manually writing the letters] is very labor intensive,” Miller said. “So far they’ve gotten a lot out of” the new app.
Anyone using this thing should be required to serve on the admissions committee. LoRs aren't for generic B+ students you don't even remember; just say no.
I googled StackAI, saw their screenshots, and had PTSD flashbacks to mid-2000s Alteryx. Why do we keep reinventing no-code drag-and-drop box-and-arrow crap?
Throughout my nearly three decades in family medicine across a busy rural region, I watched the system become increasingly burdened by administrative requirements and workflow friction. The profession I loved was losing time and attention to tasks that did not require a medical degree. That tension created a realization that has guided my work ever since: If physicians do not lead the integration of AI into clinical practice, someone else will. And if they do, the result will be a weaker version of care.
I feel for him, but MAYBE this isn't a technical issue but a labor one; maybe 30 years ago doctors should have "led" on admin and workflow issues directly, and then they wouldn't need to "lead" on AI now? I'm sorry Cerner/Epic sucks, but adding AI won't make it better. But, of course, class consciousness evaporates about the same time those $200k student loans come due.
Cops Forced to Explain Why AI-Generated Police Report Claimed Officer Transformed Into Frog (h/t Naked Capitalism)