Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • rook@awful.systems · 7 months ago

    I suspect we all knew it already, but Bruno Dias offers some receipts: the bluesky crackdown on people suggesting that charlie kirk should rest in piss came several days before the government leaned on social media firms.

    September 12th: bluesky mourns kirk: https://aftermath.site/bluesky-charlie-kirk-dead-rest-in-piss

    September 15th: whitehouse nastygram: https://bsky.app/profile/chipnick.com/post/3m2k6va63222m

    (also, bitter lol at “gentlemen”, because running a tech company is a man’s job, don’t you know)

    They complied well in advance, because it’s what they wanted to do anyway.

    • gerikson@awful.systems · 7 months ago (edited)

      “I’d already been ChatGPT-ed into bed at least once. I didn’t want it to happen again.”

      According to a 2024 YouGov poll, for instance, around half of Americans aged 18-34 reported having been, like Holly, in a situationship (a term it defines as “a romantic connection that exists in a gray area, neither strictly platonic nor officially a committed relationship”).

      “Over the course of a week, I realised I was relying on it quite a lot,” she says. “And I was like, you know what, that’s fine – why not outsource my love life to ChatGPT?”

      She describes being on the receiving end of the kinds of techniques that Jamil uses – being drilled with questions, “like you’re answering an HR questionnaire”, then off the back of those answers “having conversations where it feels as if the other person has a tap on my phone because everything they say is so perfectly suited to me”.

  • Soyweiser@awful.systems · 7 months ago

    So for my gaming needs I check reddit every now and then, and on my phone there’s a “related answers” section after the comments end, which used to give related answers, 99% of them from the same sub.

    Now they put some AI-generated shit in between, and the answers are just horrible generic slop.

    Check out this answer for example: https://www.reddit.com/answers/3c67990a-d1a2-4f86-b1e4-c2f3bb54803d/

    Very important context here: I was looking at the starsector subreddit (a 2D arcade-like space shooter). This answer is about a Minecraft-like building game. (Most of the advice is also useless; “how to survive” boils down to “use mods!”.)

  • lagrangeinterpolator@awful.systems · 7 months ago

    Lately I’ve been mildly annoyed when I just want to relax and watch gaming videos on Youtube and I see recommendations for some AI critihype. Out of morbid curiosity, I decided to click on one of them and of course the “original paper” the video is based on is the stupid Anthropic blog post about how the AI blackmailed someone (after it was told to blackmail someone). I was even more annoyed to find out how popular it is, but at least it shows how the general public has such a negative opinion of AI. Some of the comments are thankfully pushing back against the video and focusing on the real harms.

    I thought that by now we would have learned from the tobacco companies to never trust “research” done by a company about their own products.

  • mirrorwitch@awful.systems · 7 months ago

    “In the 21st century, the Antichrist is a Luddite who wants to stop all science. It’s someone like Greta or Eliezer,” [Peter Thiel] said, referring to Thunberg and Eliezer Yudkowsky

    I am going outside to smoke something. I don’t care what, just… something

    • lagrangeinterpolator@awful.systems · 7 months ago

      In the 21st century, the Antichrist is a Luddite who wants to stop all science.

      As opposed to the current administration that is destroying science by cutting the NSF’s funding. An administration that Peter Thiel supports. He might want to look into that.

    • istewart@awful.systems · 7 months ago

      Damn, when you get kicked off the Thiel grifter pipeline, you get straight punted. Not even the callous disregard and abandonment practiced by a certain outer-borough real estate hustler; instead it’s reverse apotheosis.

  • sc_griffith@awful.systems · 7 months ago

    as one of the two non-computer scientists here, every time I check in there seems to be some load bearing open source project I’ve never heard of that’s gone fash. “GreenBlox is refusing to kick out a contributor who said the jewish question should be on the table??” “PipeLinux official account is posting that pronouns don’t exist?” open source people, are you ok?

    • aio@awful.systems · 7 months ago

      CS has a huge number of people who think you can derive the solutions to social problems from first principles. It’s impossible to reason with them.

    • rook@awful.systems · 7 months ago

      For the most part, the sudden rash of people deciding that their bigotry is now publicly acceptable has centred on non-loadbearing things, because aggrieved entitled nerds aren’t great at working in a team (hyprland has definitely suffered from alienating people and missing out on fixes and compatibility work).

      The rubygems stuff was a special case, because it was a hostile takeover of some important infrastructure by a shitty company, but most of the rest are unexciting projects that have found that giving it the old H-H is good publicity and more importantly: there are some rich folk throwing money around. Not a lot of cash in open source under normal circumstances.

  • gerikson@awful.systems · 7 months ago

    An investor runs the numbers of AI capex and is not impressed

    (n.b. I have no idea who this guy is or his track record (or even if he’s a dude) but I think the numbers check out, and the parallels to railroads in the 19th century are interesting too)

    Global Crossing Is Reborn…

    Now, I think AI grows. I think the use-cases grow. I think the revenue grows. I think they eventually charge more for products that I didn’t even know could exist. However, $480 billion is a LOT of revenue for guys like me who don’t even pay a monthly fee today for the product. To put this into perspective, Netflix had $39 billion in revenue in 2024 on roughly 300 million subscribers, or less than 10% of the required revenue, yet having rather fully tapped out the TAM of users who will pay a subscription for a product like this. Microsoft Office 365 got to $95 billion in commercial and consumer spending in 2024, and then even Microsoft ran out of people to sell the product to. $480 billion is just an astronomical number.

    Of course, corporations will adopt AI as they see productivity improvements. Governments have unlimited capital—they love overpaying for stuff. Maybe you can ultimately jam $480 billion of this stuff down their throats. The problem is that $480 billion in revenue isn’t for all of the world’s future AI needs, it’s the revenue simply needed to cover the 2025 capex spend. What if they spend twice as much in 2026?? What if you need almost $1 trillion in revenue to cover the 2026 vintage of spend?? At some point, you outrun even the government’s capacity to waste money (shocking!!)

    An AI Addendum

    As a result, my blog post seems to have elicited a liberating realization that they weren’t alone in questioning the math—they’ve just been too shy to share their findings with their peers in the industry. I’ve elicited a gnosis, if you will. As this unveiling cascaded, and they forwarded my writings to their friends, an industry simultaneously nodded along. Personal self-doubts disappeared, and high-placed individuals reached out to share their epiphanies. “None of this makes sense!!” “We’ll never earn a return on capital!!” “We’ve been wondering the same thing as you!!”

    […]

    Remember, the industry is spending over $30 billion a month (approximately $400 billion for 2025) and only receiving a bit more than a billion a month back in revenue. The mismatch is astonishing, and this ignores that in 2026, hundreds of billions of additional datacenters will get built, all needing additional revenue to justify their existence. Adding the two years together, and using the math from my prior post, you’d need approximately $1 trillion in revenue to hit break even, and many trillions more to earn an acceptable return on this spend. Remember again, that revenue is currently running at around $15 to $20 billion today.
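    The investor’s back-of-envelope math can be sanity-checked in a few lines. This is a minimal sketch using only the round numbers quoted above (monthly capex, monthly revenue, and the $480 billion break-even figure); none of these constants come from the original post’s actual spreadsheet:

    ```python
    # Rough sanity check of the capex-vs-revenue mismatch described above.
    # All figures are the approximate numbers quoted in the post, in USD.

    monthly_capex = 33e9            # "over $30 billion a month"
    annual_capex_2025 = 400e9       # "approximately $400 billion for 2025"
    monthly_revenue = 1.5e9         # "a bit more than a billion a month back"
    annual_revenue = monthly_revenue * 12  # ~$18B/year current run rate

    required_revenue = 480e9        # revenue needed just to cover the 2025 spend

    # How far current revenue is from covering the 2025 capex alone:
    gap_multiple = required_revenue / annual_revenue
    print(f"Revenue would need to grow roughly {gap_multiple:.0f}x")
    ```

    On these numbers, industry revenue would have to grow well over twenty-fold just to justify one year’s spend, which is the mismatch the post is sneering at.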

    • Soyweiser@awful.systems · 7 months ago (edited)

      If you called me a boomer in my mentality, I wouldn’t really disagree. I still believe that things like cash flow and return on capital matter.

      Guess I’m part boomer as well. (Holy shit, we are so fucked if this is a “boomer” thought in the stock market.)

      E: I had hoped this part was a bit and he would reflect more on it later.

      I am not here to belittle AI, it’s the future, and I recognize that we’re just scratching the surface in terms of what it can do.

      But turns out it wasn’t. What if this is it? (And I’m talking about AI as it exists now, not some magical other tech from the future.) The GPT-5 release was meh; we reached the end of the S-curve (or hit our (local) maximum, if non-S curves are more your thing). He even admits the tech doesn’t work that well in his own article.

  • bitofhope@awful.systems · 7 months ago

    Is there a general term for the type of experimental or vaporware tech whose main function is creating FUD and FOMO which slows down the adoption and development of more mature conventional solutions? In the case of public transit these are collectively known as gadgetbahns. Examples from other fields include SMRs, direct air carbon capture, various embrace-extend-extinguish schemes in the software world, extraterrestrial colonies and a host of consumer IoT gadgets.

    • swlabr@awful.systems · 7 months ago

      If there isn’t a term, maybe you get to invent one! Just exploring the concept a bit here to try to generate leads, in case you wanted them.

      To rephrase your concept, you have A) things that are collective attention thieves/time sinks for a particular field or industry, and B) this vaporware appears to have a good profit-to-opportunity cost ratio, but in reality, it does not.

      You could focus on just A), with a direct naming of “collective attention thief”. You can substitute “collective” with “industry” and “attention thief” with “time sink”, etc. Or something like “kleptoware” or “sinkware”, “holetech”, etc.

      Focusing on just B), you might come up with something like “bubbleware”, “bubble” indicating that the vaporware has inflated value.

      Combining the two, you might name it after a scam. Maybe “pigeonware” after the pigeon drop scam, or “fawneyware” or “fiddleware” etc., there are many scams you could use.

      • bitofhope@awful.systems · 7 months ago

        I’d describe it as parasitic disruption. The scam analogies are on point and fine for rhetorical purposes, but they imply a degree of intentionality which is not necessary for some tech to be parasitic.

        Say you invent a new type of electrical power line that’s more durable and power efficient than the existing type. The materials are also ten times more expensive than for the same length of normal power line, and the only factory making this type of power line can only make enough to fill the needs of a few small customers with special needs. Meanwhile local government in Eriador is planning the electrification of the Shire community when the well-meaning councilor Brandybuck mentions this new type of power line he read about in a magazine. Perhaps the council should wait and see how that develops before committing to building power lines that might be obsolete the moment they’re put up.

        Neither you nor the councilor are deliberately using your invention as a tool to stall electrification of the Shire, but the same effect happens anyway.

        Your point about property B is a pretty good one. My hunch is that tech follies like these are related to economic bubbles and share similarities with them. I’ll postulate that most parasitic disruptions go hand in hand with economic bubbles, but not necessarily all of them.

        • bitofhope@awful.systems · 7 months ago

          Another, slightly more snide name I came up with while writing that: “free drinks tomorrow” tech, after a popular sign seen on the walls of bars around the world.

  • Mii@awful.systems · 7 months ago

    After kinda fence-sitting on the topic of AI in general for a while, Hank Green is having a mental breakdown on YouTube over Sora 2, and it’s honestly pretty funny.

    If you’re the kind of motherfucker who will create SlopTok, you are not the kind of motherfucker who should be in charge of OpenAI.

    Not that anyone should be in charge of that shitshow of a company, but hey!

    Bonus sneer from the comment section:

    Sam Altman in Feb 2015: “Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.”

    Sam Altman in Dec 2015, after co-founding OpenAI: “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

    Sam Altman 4 days ago, on his personal blog: “we are going to have to somehow make money for video generation.”

    • BlueMonday1984@awful.systems (OP) · 7 months ago

      After kinda fence-sitting on the topic of AI in general for a while, Hank Green is having a mental breakdown on YouTube over Sora 2, and it’s honestly pretty funny.

      I don’t see much to laugh at here myself. Hank may have been a massive fencesitter on AI, but I still think his reaction to Sora’s completely goddamn justified. This shit is going to enable scams, misinformation and propaganda on a Biblical fucking scale, and undermine the credibility of video evidence for good measure.

      Got another bonus sneer from the comments as well:

      Polluting human knowledge with crap, making internet useless, taking away jobs from creative people by making things that look creative enough. Governments are complicit, politicians are bribed. Like that suck-up youtuber [Two Minute Papers] repeats, “What a time to be alive” right ?

      (Sidenote: It massively fucking sucks how Two Minute Papers drank the AI Kool-Aid, I used to love that channel.)

      • Mii@awful.systems · 7 months ago

        I don’t see much to laugh at here myself. Hank may have been a massive fencesitter on AI, but I still think his reaction to Sora’s completely goddamn justified. This shit is going to enable scams, misinformation and propaganda on a Biblical fucking scale, and undermine the credibility of video evidence for good measure.

        No, it’s absolutely justified and I agree with basically everything he says in the video (esp. the title, there is really no reason for technology like this to exist in the hand of the public, or anyone really, there’s zero upsides to it). It’s just funny to me because the video is just so different from his usual calm stuff.

        But honestly, good for him and (hopefully) his community too.

  • antifuchs@awful.systems · 7 months ago

    Oh my god, The Guardian with the sneer:

    Take a look at Sam Altman. I mean, actually do it. Go to Google images, where you can find countless photos of the OpenAI boss smiling in a kind of wan genius way, the humble lost puppy of Silicon Valley. But I urge you to simply cover the bottom half of his face in any of these pictures, and you will immediately clock that Sam has the sad-psycho eyes of the lost woman’s boyfriend who the police have asked to front the missing person’s appeal. Please come home, Sheila – we’re all worried sick and we just want you back.

      • Soyweiser@awful.systems · 7 months ago

        Stop this bigotry, nothing wrong with people trying to follow the perfection of The Four-Armed Emperor. Muties are servants of His Holiness as well!

  • o7___o7@awful.systems · 7 months ago

    I could never live in a Dyson sphere. I can’t stand how they stop charging after six months.

  • sinedpick@awful.systems · 7 months ago

    I just wanted to lob a sneer at this article fawning over Sora 2: https://spyglass.org/soras-slop-hits-different/

    And again, a lot of this stuff — slop or not — is funny. Really, truly funny. Sora is scaling comedy in a way that we’ve never seen.

    did this motherfucker just try to say “scaling comedy”? If you ever wondered why techbros are so unfunny, here’s something to point at.

  • rook@awful.systems · 7 months ago

    Bluesky going to bat for that poor, downtrodden, victimised and underrepresented demographic, uh, AI slop posters?

    https://bsky.app/profile/carrion.bsky.social/post/3m2kf3rottc2h

    alt text

    A screenshot of an email sent to a bluesky user, reading

    Hi there, Your Bluesky account (@carrion.bsky.social) has created a list called “AI Slop Posters” that may violate our Community Guidelines. We’ve temporarily hidden this list from other users because it contains one or more of these issues.

    • Harmful language such as insults or slurs
    • Unverified claims
    • Appears intended to shame or abuse users