
Posts: 4 · Comments: 98 · Joined: 3 yr. ago

  • oxy is a mastodon user who follows David so they’re most likely telling the AI startups in question to GTFO and lemmy has kindly reinterpreted that as a tag

    however,

    Why are people hating on / blocking dgerard?

    let’s go down the list! depending on the subculture, you’re blocking dgerard because:

    • you’re a Scientologist and interacting with dgerard will cost you your Operating Thetan rank
    • you’re a Rationalist and have read the Lightcone Infrastructure memo about how dgerard is the devil, beating out all other contenders for most evil being on the earth
    • you’re a cryptocurrency gambler and you think dgerard’s critiques lost you money
    • you run an AI startup and you think dgerard’s critiques lost you money
    • you’re on the far right and dgerard won’t let you edit misinformation straight into Wikipedia

    there are more, we can go deeper

  • here’s the summary table that the article pulled its numbers from

    here’s a specific question regarding AI

  • it’s really rude to market a game as a language learning app

  • me telling you to go fuck yourself makes me exactly as unfair and mean as Palantir, a Peter Thiel company specializing in genocide and mass surveillance. yes hmm I see

    you and your friend can both fuck off with the type of truth and transparency where you claim to not be defending fucking Palantir of all things while uncritically parroting their words. nobody fucking needs that in any context. you don’t in fact have to hand it to the fascists on this or any other point.

  • I’m not defending palantir but no need to invent reasons to be mad at them.

    the fuck is wrong with you

  • if you should ever happen to be short on resumes…

    (it feels like a zero AI job board might be a good thing to have, but we’d need a way to vet submissions and handle anonymous submissions and inquiries so people don’t dox themselves)

  • In March 2025, the large language model (LLM) GPT-4.5, developed by OpenAI in San Francisco, California, was judged by humans in a Turing test to be human 73% of the time — more often than actual humans were. Moreover, readers even preferred literary texts generated by LLMs over those written by human experts.

    do you know how hard it is to write something that aged poorly months before it was written? it’s in the public consciousness that LLMs write like absolute shit in ways that are very easy to pick out once you’ve been forced to read a bunch of LLM-extruded text. inb4 some asshole with AI psychosis pulls out “technically ChatGPT’s more human than you are, look at the statistics” regarding the 73% figure I guess. but you know when statistics don’t count!

    A March 2025 survey by the Association for the Advancement of Artificial Intelligence in Washington DC found that 76% of leading researchers thought that scaling up current AI approaches would be ‘unlikely’ or ‘very unlikely’ to yield AGI

    […] What explains this disconnect? We suggest that the problem is part conceptual, because definitions of AGI are ambiguous and inconsistent; part emotional, because AGI raises fear of displacement and disruption; and part practical, as the term is entangled with commercial interests that can distort assessments.

    no you see it’s the leading researchers that are wrong. why are you being so emotional over AGI. we surveyed Some Assholes and they were pretty sure GPT was a human and you were a bot so… so there!

  • uhm @self can you show me where I wrote this? can you show me where I wrote these exact words? no? that’s so irrational of you.

  • I’ll have you know we’re acting like someone who’s rude to “serving staff” right now, where serving staff is defined as a formerly chrome currently mozilla developer relations marketing guy with fucking flatlined vibes

  • The original argument was about worries over Kessler syndrome and then they moved the goal post to space junk not burning up completely, utilizing an article about space junk that wasn’t ever in orbit.

    no, the "original argument" (jacking off motions) was prompted by this shitpost of yours:

    I mean, what’s the worst that could happen?

    nobody moved the fucking goal posts, normal people don't go online to engage in a spot of spacex defense while pretending to hate first-name basis "Elon". that you have a rehearsed set of fucking gotchas over the specific danger presented by spacex is fucking incredible.

    You could make the argument that putting more junk into space has negative and unnecessary outcomes, but that’s a completely different argument that I would agree with.

    but instead you decided to tediously split hairs over Kessler Syndrome as if anyone here other than the resident physicists give a shit. you don't get it. nobody is here to win points. we're not an IRB; nobody gives a shit what specific category of danger is represented by the space junk Musk's generating. we give a shit that there's danger at all.

    fuck me there's nothing more depressing than a spacex fan who swears they hate musk and goes to bat for the fucker's worst, most damaging excesses

  • every time I've used it, it had massive issues with the connection hitching and with delivering anywhere near the promised amount of bandwidth

    my experience is my own, etc etc, but it reminded me a lot of how every time I've been in a self-driving tesla, the only person impressed by (and not terrified of) the thing was the owner

  • no space bois thx

  • anyone else who wants to tell me “for the millionth time” about how it’s super safe to fill low-earth orbit with unprecedented amounts of literal garbage in pursuit of creating a shit-tier ISP that’s sucked hard every time I’ve used it is welcome to take it up with the professor of astronomy that wrote that last article

    “it’s not Kessler syndrome unless it’s from the Kessler region of space, otherwise it’s just sparkling Rods from God” fuck you

  • uh huh. spacex fans are fucking wild

  • Kessler syndrome, and historically starlink’s satellites don’t always burn up in the atmosphere as they should

  • we don’t want your special boi open source plagiarism machine either

  • TechTakes @awful.systems

    Stubsack: weekly thread for sneers not worth an entire post, week ending 16th November 2025

    awful.systems /post/6153095
  • TechTakes @awful.systems

    now that's what I call fashtech, vol. 1

    mas.to /@zzt/115272477801664683
  • TechTakes @awful.systems

    Urbit 2.0 just dropped

    archive.ph /e4jIl