Posts: 1 · Comments: 70 · Joined: 1 yr. ago

  • I am a better sysadmin than I was before agentic coding because now I can solve problems myself that I would have previously needed to hand off to someone else.

    more fodder for my theory that LLMs are a way to cash in on the artificial isolation caused by the erosion of any real community in late-stage capitalism (or, to put it more simply, the "AI" is a maladaptive solution to the problem of not having friends)

  • I see that Silicon Valley has transcended AGI technology and can now execute NP-complete problems.

    A Guy in India Nationals from the Philippines, Completely

    WAYMO exec admits under oath cars in the US have "human operators" based in Philippines https://www.youtube.com/watch?v=ClPDbwql34o

  • I feel like I just read someone reviewing Toccata and Fugue in D Minor by complaining that there's no upbeat sections and no overall chorus and the song isn't about anything, that we're just "tossed about on the storms of emotion that by the end we are all seasick to"

  • OT, but though this is mostly about appreciating things in nature rather than navigating a city by car or on foot, this book has helped me a lot with no longer being a person with a "bad sense of direction", even when walking downtown: The Natural Navigator by Tristan Gooley. I really recommend it for people who hike, even occasionally.

  • I've been deliberately learning to navigate without GPSes and tech devices, as a life skill (also on foot/public transport). I'm terrible at navigating, but I'm realising navigating is kinda like handwriting—in that it's very easy to fall into the trap of saying "I'm terrible at this" as a kind of immutable personality trait, while in fact it's perfectly expected that one is bad at a skill that one never uses, and turns out I can get better at it even with a little bit of deliberate practice. I suck at things but I can improve.

    In the meantime, when I use an electronic map to navigate, I would still rather stick a smartphone to the dashboard of a car and use whatever navigation app I prefer than have the screens and navigators built into the car.

  • the last time I drove a car was in 2015 or so, and back then every car I got had no computer in it. I dread the day I need a vehicle again and my friggin car uploads bullshit to the cloud or whatever. the idea of having screens of any kind in a car is repulsive to me

  • Lots of people in IT have been fired and must find a new job.

  • The whole federation loves nolto.social, an open source, federated alternative to LinkedIn! 5 seconds later We regret to inform you that nolto.social is vibe-coded

  • €5 says they'll claim he was talking to Jeffrey in an effort to stop the horrors.

    no, not the abuse of minors; he was asking Epstein for donations to stop AGI, and it's morally ethical to let rich abusers get off scot-free if that's the cost of them donating money to charitable causes such as the alignment problem /s

  • I don't mean the term "psychosis" as a pejorative; I mean it in the clinical sense of forming a model of the world that deviates from consensus reality, and, like, getting really into it.

    For example, the person who posted the Matrix non-code really believed they had implemented the protocol, even though for everyone else it was patently obvious the code wasn't there. That vibe-coded browser didn't even compile, but they too were living in a reality where they had made a browser. The German botany professor thought it was a perfectly normal thing to admit in public that his entire academic output for the past 2 years was autogenerated, including his handling of student data. And it's by now a documented phenomenon that programmers think they're being more productive with LLM assistants, but when you try to measure that productivity, it evaporates.

    These psychoses are, admittedly, much milder and less damaging than the Omega Jesus desert UFO suicide case. But they're delusions nonetheless, and moreover they're caused by the same mechanism, viz. the chatbot happily doubling down on everything you say—which means at any moment the "mild" psychoses, too, may end up in a feedback loop that escalates them to dangerous places.

    That is, I'm claiming LLMs have a serious issue with hallucinations, and I'm not talking about the LLM hallucinating.


    Notice that this claim is quite independent of the fact that LLMs have no real understanding or human-like cognition, or that they necessarily produce errors and can't be trusted, or that these errors happen to be, by design, the hardest possible type of error to detect—signal-shaped noise. These problems are bad, sure. But the thing where people hooked on LLMs inflate delusions about what the LLM is even actually doing for them—that seems to me an entirely separate mechanism; something that happens when a person has a syntactically very human-like conversation partner that is a perfect slave, always available, always willing to do whatever you want, always zero pushback, who engages in a crack-cocaine version of brown-nosing. That's why I compare it to cult dynamics—the kind of group psychosis in a cult isn't a product of the leader's delusions alone; there's a way that the followers vicariously power-trip along with their guru and constantly inflate his ego to chase the next hit together.

    It is conceivable to me that someone could make a neutral-toned chatbot programmed to never 100% agree with the user, and it wouldn't generate these psychotic effects. Only no company will do that, because these things are really expensive to run and they're already bleeding money; they need every trick in the book to keep users hooked. But I think nobody in the world predicted just how badly you can trip when you have "dr. flattery the always-wrong bot" constantly telling you what a genius you are.

  • Copy-pasting my tentative doomerist theory of generalised "AI" psychosis here:

    I'm getting convinced that in addition to the irreversible pollution of humanity's knowledge commons, and in addition to the massive environmental damage, and the plagiarism/labour issues/concentration of wealth, and other well-discussed problems, there's one insidious damage from LLMs that is still underestimated.

    I will make without argument the following claims:

    Claim 1: Every regular LLM user is undergoing "AI psychosis". Every single one of them, no exceptions.

    The Cloudflare person who blog-posted self-congratulations about their "Matrix implementation" that was mere placeholder comments is one step along a continuum with the people whom the chatbot convinced they're Machine Jesus. The difference is one of degree, not kind.

    Claim 2: That happens because LLMs have tapped by accident into some poorly understood weakness of human psychology, related to the social and iterative construction of reality.

    Claim 3: This LLM exploit is an algorithmic implementation of the feedback loop between a cult leader and their followers, with the chatbot performing the "follower" role.

    Claim 4: Postindustrial capitalist societies are hyper-individualistic, which makes human beings miserable. LLM chatbots deliberately exploit this by artificially replacing having friends. It is not enough for them to generate code; the bots are made to feel like someone you're talking to—they pretend a chatbot is someone. This is a predatory business practice that reinforces rather than solves the loneliness epidemic.

    n.b. while the reality-formation exploit is accidental, the imaginary-friend exploit is by design.

    Corollary #1: Every "legitimate" use of an LLM would be better done by having another human being to talk to (for example, a human coding tutor or trainee dev rather than Claude Code). By "better" I mean: create more quality, more reliably, with prosocial side effects, while making everybody happier. What LLMs do instead is faster, at larger quantities, with more convenience—while atrophying empathy.

    Corollary #2: Capitalism had already created artificial scarcity of friends, so that working communally was artificially hard. LLMs made it much worse, in the same way that an abundance of cheap fast food makes it harder for impoverished folk to reach nutritional self-sufficiency.

    Corollary #3: The combination of claim 4 (we live in individualist loneliness hell) and claim 3 (LLMs are something like a pocket cult follower) will have absolutely devastating sociological effects.

  • OT: today, five days into a respiratory illness, I tested positive for Covid for the first time.

    My symptoms are fairly mild, probably because I got a vaccine booster three months ago. But I'm trying to learn more about these recent "swallowing razors" variants and, dang, the online situation is bad. Finding reliable medical information on the post-slop, post-Trump Internet is a nightmare.

  • this sounds exactly like the sentence right before "they have played us for absolute fools!" in that meme.

  • I gave the new ChatGPT Health access to 29 million steps and 6 million heartbeat measurements ["a decade of my Apple Watch data"]. It drew questionable conclusions that changed each time I asked.

    WaPo. Paywalled but I like how everything I need to know is already in the blurb above.

  • The interesting thing in this case for me is how anyone thought it was a good idea to draw attention to their placeholder code with a blog post. Like, how did they go all the way to vibing out a full post without even cursorily glancing at the slop commits?

    I'm convinced by now that at least mild forms of "AI psychosis" affect all chatbot users; after a period of time interacting with what Angela Collier called "Dr. Flattery the Always Wrong Robot", people will hallucinate fully working projects without even trying to test whether the code compiles.

  • I mean, you don't have to grasp, know of, or care about the consequences when none of them will touch you, and after the bubble pops and the company goes catastrophically bankrupt, you will remain comfortably a billionaire, with several billions more than you had when you started the bubble in the first place. Consequences are for the working class; capitalists fall upwards.

  • Cloudflare just announced in a blog post that they built:

    a serverless, post-quantum Matrix homeserver.

    it's a vibe-coded pile of slop where most of the functions are placeholders like // TODO: check authorization.

    Full thread: https://tech.lgbt/@JadedBlueEyes/115967791152135761

  • TechTakes @awful.systems

    Wireborn husbands, ELIZA effect, Clippy, empathy (ramble)