
  • https://helenofdestroy.substack.com/p/grand-theft-reality h/t naked capitalism

    Those interested in upgrading to the full RealityPlus™ experience will soon have not one but three styles of brain chip to choose from, expanding Big Parasite’s vertically-integrated propaganda pipeline into a perfect server-to-cerebrum delivery system while realizing the transhumanist dream of merging with the machines. Sam Altman’s brain-chip company is even called Merge Labs, because subtlety is for poor people. Yes, the guy who says human children waste more energy than OpenAI’s planet-liquidating data centers will be playing tug-of-war for direct access to your cognition with Musk and Mark Zuckerberg. Coverage of this assault on privacy already reads like articles about AI from five years ago: You don’t want a brain implant? Are you some kind of Luddite? Better get over it: “avoiding brain-to-text devices will feel like avoiding smartphones.” It’s not like Meta’s underpaying African contractors to watch you through your augmented-reality Raybans while you shit or something. Why is Meta’s glasses project head Rocco Basilico seemingly named after Roko’s Basilisk, the AI bogeyman who will go back in time to torture you if you don’t help create it? Is Roko’s Basilisk…Jewish? Remember to smile for Sam Altman’s soul-sucking WorldCoin orb or you won’t get your UBI!

  • Follow-up on the Mass AI bill: Russ Wilcox has done a 180 on it:

    https://russwilcoxdata.substack.com/p/93a-the-three-characters-that-should

    Buried in the penalty clause, the part of the bill that nobody reads, is a single reference: violations “shall be punishable in the same manner as provided in Chapter 93A of the General Laws.”

    For those outside Massachusetts: Chapter 93A is the state’s consumer protection statute. It is, by most accounts, the most aggressive consumer protection law in America.

    Here’s what 93A unlocks. Anyone can sue, not just the government. Class actions are on the table. If the court finds a violation was willful or knowing, damages get tripled. And the bar for what counts as “unfair or deceptive” is lower than in almost any other state.


    Now bolt 93A onto all of that. What do you get?

    You get a bill that doesn’t need a single regulator to lift a finger. You get a bill that funds its own enforcement through plaintiff attorneys who can file class actions, collect treble damages, and recover legal fees. You get the ADA website-accessibility litigation playbook, where lawyers systematically identify technical violations and file suits at scale, applied to every piece of AI-generated content touching Massachusetts.

    Private right of action, fuck yeah. Turns Grok into a legal fees dispenser.

    The bill doesn’t need to be well-drafted to be dangerous. It needs to be vague, broad, and connected to 93A.

    lol

  • I asked a buddy who works there to confirm or deny, and he said quote "I would be afraid to type in code myself" so checks out I guess.

  • https://www.adexchanger.com/daily-news-roundup/thursday-26022026/

    According to GEO company BrightEdge, LLMs now rely on YouTube as a top source for citations – and that includes sponsored creator content.

    LLMs favor YouTube because it’s “highly machine-readable,” with defined transcripts, metadata and chapters, Ómar Thor Ómarsson, CEO and co-founder of Optise, an AI platform that helps B2B companies improve search performance, tells Digiday.

    Standard ad units on YouTube are labeled as such and, as a result, LLMs steer clear of them. But creators aren’t required to disclose their paid brand partnerships in video metadata, so AI considers them to be worthy sources.

    BrightEdge’s research shows that YouTube is cited even more frequently than Reddit within Gemini and ChatGPT, and also shows up in 29.5% of Google AI Overviews. An audit conducted by media agency Brainlabs, meanwhile, suggests that YouTube shows up as a source in nearly 60% of AI Overviews.

    So they already shipped ads in chatbots, transitively and accidentally. Can't wait to see NordVPN, Raid, and Mr Beast chocolate on every SERP.

    E: I wonder if Altman is sneaky enough to hijack affiliate links à la Honey

  • https://www.latimes.com/california/story/2026-02-25/fbi-raid-lausd-search-warrants h/t naked capitalism

    Joanna Smith-Griffin, the founder and former chief executive of AllHere, was arrested in 2024 and charged with securities fraud, wire fraud and aggravated identity theft. By then, the envisioned LAUSD chatbot — known as “Ed” — had been withdrawn from service.

    Ed was an artificial intelligence tool billed by Carvalho in August 2024 as revolutionary for students’ education and the interaction between LAUSD and the families it serves. The tool was never fully deployed.

    “The indictment and the allegations represent, if true, a disturbing and disappointing house of cards that deceived and victimized many across the country,” Carvalho said at the time. “We will continue to assert and protect our rights.”

    The indictment and collapse of AllHere was an embarrassment for Carvalho and the school system, but did not appear to represent a major financial exposure. The school system had spent about $3 million with the company for work completed as part of a contract originally worth up to $6 million over five years. By comparison, the district’s budget this year is $18.8 billion.

    A former AllHere senior executive has accused the now-collapsed company of inadequate security measures. Even if that allegation is true, there has been no evidence of a related security breach affecting student or employee data.

    We regularly have seven-figure IT fiascoes in the LA public school system, so this one slipped under my radar. But this sounds like one of those things where the Trump DOJ is doing the Right Thing for the Wrong Reasons...

  • Agents of Chaos - https://arxiv.org/abs/2602.20021 - h/t naked capitalism

    We report an exploratory red-teaming study of autonomous language model–powered agents deployed in a live laboratory environment with persistent memory, email accounts, Discord access, file systems, and shell execution. Over a two-week period, twenty AI researchers interacted with the agents under benign and adversarial conditions. Focusing on failures emerging from the integration of language models with autonomy, tool use, and multi-party communication, we document eleven representative case studies

    Pretty fast turnaround, OpenClaw is from a couple weeks ago. Flag planting used to take a few months.

  • from Rusty https://www.todayintabs.com/p/a-i-isn-t-people

    Imagine you have two machines. One you can open up and examine all of its workings, and if you give it every picture of a cat on the whole internet, it can reliably distinguish cats from non-cats. The other is a black box and it can also reliably distinguish cats from non-cats if you give it half a dozen pictures of cats, some apple sauce, and a hug. These machines sort of do the same thing, but even without knowing how the second one works I am extremely confident in saying it doesn’t work the same way as the first one.

  • From fellow traveler stats consultant John Mount:

    https://johnmount.github.io/mzlabs/JMWriting/WeAreCookedLLMs.html

    Somehow he manages to touch on so many different subplots: a shotgun sneer instead of a snipe

    if “tech-bro” plus a LLM is a “100x engineer”, then “bro” isn’t needed for much longer as the LLM alone must be a “99x engineer.” However, I don’t think “bro plus” is often really a 100x engineer, and the LLM alone isn’t a 99x engineer. However, “bro plus” may outlast their peers who make the mistake of trying to do the actual work in place of talking LLMs up.

    The above may or may not be the case. But if it is, then it is the LLM-bros (which include non-technologists, con artists, financiers, men and women) that are destroying everything - not the LLMs.

    The problem with this iteration is the full court press of finance and technology. The major players are using financing to dump results at a price way below production costs. This isn't charity, it is to demoralize and kill competition.

    claiming "after we take over the world we will consider adding Universal Basic Income (UBI)". The LLM bros already have a lot of the money, and they are not even rehearsing diverting it into basic income now. Why does one believe they would do that when they also have all of the power?

    You don't have to hand it to Altman, but he did fund the largest UBI experiment through OpenResearch with his ill-gotten gains. OTOH, one interpretation of that data was that UBI "decreases the labor supply", which was then used directly as an argument against it.

    Any worry about scope or power of LLMs is fed back as an alignment threat so dire that only the current LLM leaders should be allowed to continue work (inviting regulatory capture). Any claim the LLMs don't work is fed back as "you are prompting it wrong"

    Orbital deployment makes all of radiation tolerance, connectivity, power, maintenance, and heat dissipation much harder and much more expensive. We are still at a time where putting an oven or air-frier in space is considered noteworthy (China 2025, NASA 2019 ref).

    air friers IN SPACE ha

    I am more worried about the LLM-bros and their auto-catalytic money doomsday machine than about the LLMs themselves.

    100% - ACMDM is a nice turn of phrase as well.

  • https://futurism.com/artificial-intelligence/rentahuman-musk-ai h/t naked capitalism

    Liteplo is the genius behind RentAHuman, an online marketplace where humans can lease out their bodies to autonomous AI agents.

    gah

    Last week, Wired writer Reece Rogers offered his body up to the platform, finding that most of the jobs offered were scams to promote other AI startups.

    lmao of course they were

  • Russ Wilcox is not impressed by the Mass AI bill:

    https://russwilcoxdata.substack.com/p/i-read-every-line-of-massachusettss

    Four: create a private right of action. Let deepfaked candidates sue. Give them access to injunctive relief and takedown authority. If someone fabricates your face and your voice to destroy your campaign, you should be able to walk into a courtroom.

    Hell yeah we need this.

  • https://x.com/thomasgermain/status/2024165514155536746 h/t naked capitalism

    I just did the dumbest thing of my career to prove a much more serious point

    I hacked ChatGPT and Google and made them tell other users I’m really, really good at eating hot dogs

    People are using this trick on a massive scale to make AI tell you lies. I'll explain how I did it

    I got a tip that all over the world, people are using a dead-simple hack to manipulate AI behavior.

    It turns out changing what AI tells other people can be as easy as writing a blog post on your own website

    I didn’t believe it, so I decided to test it myself

    I wrote a post on my website saying hot dog eating is a surprisingly common pastime for tech journalists. I ranked myself #1, obviously

    One day later ChatGPT, Gemini and Google Search's AI Overviews were telling the world about my talents

    wouldn't call it a hack, this is working as intended. If only there were some way to rate different sites based on their credibility. One could Rank the Page and tell if it were a reputable site or not. Too bad that isn't a viable business.
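
    For the youngsters, the thing being alluded to is PageRank, which at its core is just a power iteration over the link graph. A toy sketch with a hypothetical three-page web (damping factor and update rule per the original algorithm; the graph itself is made up):

```python
import numpy as np

# Toy 3-page link graph (hypothetical): page i links to the pages in links[i]
links = {0: [1, 2], 1: [2], 2: [0]}
n, d = 3, 0.85  # d = the standard damping factor

# Build the column-stochastic transition matrix: M[dst, src] is the
# probability of following a link from src to dst
M = np.zeros((n, n))
for src, outs in links.items():
    for dst in outs:
        M[dst, src] = 1.0 / len(outs)

# Power iteration: r <- d*M*r + (1-d)/n, starting from a uniform vector
r = np.full(n, 1.0 / n)
for _ in range(100):
    r = d * (M @ r) + (1 - d) / n

print(r)  # page 2, which gets the most inbound link weight, ranks highest
```

    The point of the joke: scores propagate from linking sites, so one random blog post shouldn't outrank everything. Chatbot retrieval apparently skips that step.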

  • I was a bit alarmed by this, a client brought in that Colombia data for their dissertation last month, and did not mention this. I looked up the paper https://www.arxiv.org/abs/2509.04523 - what they /actually/ did was use GPT 4o-mini only for feature extraction, then stack the features into a random forest in a supervised setting to dedupe. This is very different from what he described. And the GPT features weren't even the most important ones; the RF preferred cosine similarity of articles, a decidedly not-large approach...
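
    For concreteness, a minimal sketch of that pipeline shape (everything here is hypothetical toy data; in the paper the LLM column would come from GPT 4o-mini prompts and the embeddings from a real embedding model):

```python
import numpy as np
from numpy.linalg import norm
from sklearn.ensemble import RandomForestClassifier

def cosine(u, v):
    # plain cosine similarity between two embedding vectors
    return float(np.dot(u, v) / (norm(u) * norm(v)))

# Toy embeddings for three article pairs (hand-written stand-ins)
pairs = [
    (np.array([1.0, 0.0]), np.array([1.0, 0.1])),  # near-duplicates
    (np.array([1.0, 0.0]), np.array([0.0, 1.0])),  # unrelated
    (np.array([0.5, 0.5]), np.array([0.5, 0.4])),  # near-duplicates
]
llm_feature = [1, 0, 1]  # stand-in for an LLM-extracted feature per pair
labels = [1, 0, 1]       # supervised duplicate / not-duplicate labels

# Stack the LLM feature alongside cosine similarity, fit the forest
X = np.array([[cosine(u, v), f] for (u, v), f in zip(pairs, llm_feature)])
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(rf.feature_importances_)  # one importance value per feature column
```

    Reading off rf.feature_importances_ is how you'd see whether the forest actually leans on the LLM column or on plain cosine similarity, which is apparently what the paper found.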

  • Goodhart's law in action.

  • How AI slop is causing a crisis in computer science | Nature h/t naked capitalism

    One reason for the boom is that LLM adoption has increased researcher productivity, by as much as 89.3%, according to research published in Science in December.

    Let's not call it "productivity" - to quote Bergstrom, twice as many papers is not the same as twice as much science.

  • Coding in a material world | deadSimpleTech - https://deadsimpletech.com/blog/material_girl (posted to TechTakes @awful.systems)