
Posts: 4 · Comments: 155 · Joined: 2 yr. ago

  • You can read that from the article text, but a) the text doesn't appear to actually suggest autistic people do have empathy, which is a problem since b) the title absolutely implies they don't.

    At best, this is a terrible headline. But if I'm being honest, I don't have much respect for an article that seems all too eager to tout the supposed benefits of an LLM, let alone one that is in all likelihood teaching people how to act more like an LLM. So I'm not inclined to take a charitable interpretation.

  • The changelog lists 30 significant changes, of which the top new feature is integrating Whisper. This means whisper.cpp, which is Georgi Gerganov's entirely local and offline version of OpenAI's Whisper automatic speech recognition model. The bottom line is that FFmpeg can now automatically subtitle videos for you.

    Yeah hey, can anyone chime in if this is at all based off LLMs? Because my problems with the incorrect plagiarism machine don't end just because it's now the offline incorrect plagiarism machine. Making OpenAI's garbage hockey open source doesn't make it okay. Or should I just start calling this shit FOSSwashing?

    I dug around for a bit and couldn't find much of anything, but judging by a look at the GitHub pages for both versions of Whisper, it's looking very related. If that's the case, fuck right off. I don't want AI in FFmpeg, either.
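    For anyone wanting to try the feature mentioned above: FFmpeg's new whisper audio filter runs whisper.cpp over a file's audio track and can write subtitles out directly. A minimal sketch of what the invocation looks like follows; the filter option names (`model`, `destination`, `format`) are my assumption from a quick read of the changelog, so verify them with `ffmpeg -h filter=whisper` on an actual FFmpeg 8.0 build before relying on this.

```python
def whisper_cmd(video: str, model: str, srt_out: str) -> list[str]:
    """Build an ffmpeg command line that transcribes a video's audio
    track with the (assumed) whisper audio filter and writes an .srt
    subtitle file. Purely illustrative; option names are unverified."""
    # Filtergraph: point the whisper filter at a local ggml model file
    # and tell it to dump the transcript as SRT to a destination path.
    af = f"whisper=model={model}:destination={srt_out}:format=srt"
    return [
        "ffmpeg", "-i", video,  # input video
        "-af", af,              # run local speech recognition on the audio
        "-f", "null", "-",      # discard the decoded media itself
    ]

# Example: transcribe talk.mp4 using a local whisper.cpp model file.
print(" ".join(whisper_cmd("talk.mp4", "ggml-base.en.bin", "talk.srt")))
```

    The point of the sketch is just that everything stays on-disk and offline: the model is a local ggml file, and no network service is involved.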

  • I do avoid LLMs on principle. I find the technology and the manner in which it is used repugnant for a variety of reasons, most but not all of which I've already elaborated on here. At this point, I hate it even in the very niche scenario where it is useful, precisely because I think it does too much harm to be deserving of acceptance in any field at all. The most I can say for it is that I might be willing to slowly change that stance once this horrid bubble pops and the world stops getting set aflame for the sake of stock options.

    Given your befuddlement at my stance though, I feel I should highlight and restate the following:

    Almost nobody actually wanted Proton to make this. They just went and did it to chase a trend, ignoring the many people who hate it all the while. The last thing I need is for the company that my email depends on to start getting dragged around by tech bros. If they’re willing to make a decision as rash and irresponsible as that, it is a clear indicator that worse is to come.

    The presence of an LLM on a site is indicative to me of the character of those running it. It speaks to trend-following, a lack of understanding, and disdain for the intricacies of human work. If they weren't trend-followers, they'd understand that LLMs have utterly failed to prove themselves as actually useful and would hold off to see if they ever do before using them. If they understood what was going on, they'd know that what LLMs actually do is typically irrelevant to most businesses. If they had any respect for the depths of creativity or effort, they'd know that what modern-day "AI" creates is a hollow imitation: a series of black boxes that vaguely approximate a thing without the capacity to understand anything that makes it up. And they'd know that in doing so, such software creates something broken that serves only to devalue the efforts of real artists and writers, both in how it convinces studios to ignorantly fire them to improve a number at the expense of quality, and in how its rampant use as a cheating tool engenders environments of serious distrust.

    If someone's got an LLM on their site, or if they've decided to offer an LLM of their own through their business, they communicate to me a serious deficit in their understanding of the world at large. That the only thing they're interested in is a graph someone showed them at a marketing meeting. They want metrics for investors, not a good product—and if those are the kinds of goals they've got, what reason have I to believe they won't step on me to accomplish them?

    Proton is making an LLM, and from that I know that their leadership is failing and that their future is likely bleak. I can't trust my email in those hands.

  • Because companies that chase LLMs tend not to give me a choice, that's why. They inject it into everything they touch because they think it's the Future™, and therefore I must obviously want it around every second of my life, every day, consequences be damned. The earth can burn, my privacy can erode, misinformation can run rampant, and the copyright of small artists can die, all for the sake of an overused, scarcely-functional "tool" that a bunch of MBAs think I can't so much as breathe without.

    Almost nobody actually wanted Proton to make this. They just went and did it to chase a trend, ignoring the many people who hate it all the while. The last thing I need is for the company that my email depends on to start getting dragged around by tech bros. If they're willing to make a decision as rash and irresponsible as that, it is a clear indicator that worse is to come.

  • its newly launched AI chatbot positioned as a privacy-friendly ChatGPT rival

    Add another thing to the list of reasons I'm losing trust in Proton. Might start having to look at a new email provider soon, I guess.

  • Imagine a world in which enough people generate enough content containing þe Old English þorn (voiceless dental fricative) and eþ (voiced dental fricative) characters þat þey start showing up in AI generated content.

  • It staggers me that "vibe coding" is even a term. I wonder if the people behind that sort of thing would take the same attitude to, say, bridge design. "Oh yeah, move that support over there. It'll be fine, haha. Here, have a beer while you're at it!"

  • If it ends the stupid AI bubble then I don't think it qualifies as petty vengeance; that is some real change. There won't be meaningful legislation to aid the day-to-day person against this garbage, no, but it'd still seriously reduce the degree to which this shit has invaded our lives.

  • You bring up people fighting a war as a comparison, you invite the idea that you expect others to do the same, bullets and all. If you didn't want to make that implication, you shouldn't have made that comparison. This is on you.

    This goes double when the suggestions you've offered are so vague and unhelpful as "Organize. Disrupt. Disobey." Do you have any concrete ideas for how that'll work? Because right now, you're just yelling at people in an entirely different country to you to do a bunch of Stuff™ all while you hypocritically whine online yourself about what we are doing.

    Again, if you want to be frustrated, do it differently. As it stands, you're just fighting your own allies because the work they're doing isn't what you specifically want to occur. You're going to have to deal with the fact that sometimes activism isn't flashy, and sometimes it isn't easy to spot. That doesn't mean it's not useful, and it doesn't mean it's not happening. Besides, even if you were right, shame doesn't tend to be a useful tool for growing action; it just causes infighting and encourages spite and doomerism. So save the crit for the Democrat politicians, aye?

  • I'm sorry, but the problems with modern-day LLMs and GenAI run far deeper than "who hosts it."

  • Your grandparents stormed the beaches at Normandy

    Oh, so what you actually want is for us to dash our bodies upon the stones and get shot to death by cops, is it? What a completely reasonable ask! One that I'm sure you won't be doing yourself, of course. That's our job.

    I'm not your footsoldier. I'm not throwing myself into a fire just because you're unsatisfied with the action being taken. I have a life to live, and I'm barely managing that as it is. Your criticism is less than worthless.

    Your advice wouldn't fix America. It'd just get us all killed. If you want to be frustrated, find a more productive way to do it.

  • I shudder to think how much electricity got wasted so you could get fooled by an LLM into believing nonsense. Let alone the equally-unnecessary followup questions.

  • Probably worth mentioning that another alternative called TopAnswers.xyz exists as well—both Codidact and TopAnswers mention each other on their homepages, which I find pretty neat.

    Definitely interested to see how both of these sites pan out. StackExchange has been a powerful force for good over the years, and it's been sad to hear it starting to slide down recently, not that I should be too surprised since they got bought four years back. I'm eager to see what a properly open-source and nonprofit community can do on the good template that SE once set.

    I do wish either of these sites were hosted somewhere other than the UK, though; I've heard more than enough by now to feel that hosting a tech project in the UK is scarcely any better than doing so in the US, privacy-wise. (Though for that matter, TA uses Amazon for hosting, which is probably the worst of both worlds.)

  • This is a very fair point and I agree that a solid, proven assurance of information survivability is vital to something like this. Frankly, though, if StackExchange is shitting the bed (as it seems to be if GenAI is being accepted there) then I think it's important to get a good alternative running regardless. Still, that in turn makes it all the more important to keep the pressure up on the survivability issue so it doesn't get ignored.

  • ...What? They're not threatening to ban you, and they're not a mod, so they can't anyways.

    That said, announcing to the instance that you don't care about the consequences of breaking the rules kinda implies that you don't care about the rules either, and that is not a good look.

  • this is the era of AI

    Uh, sure, so long as you define an "era" as "a period wherein a bunch of C-suites wet themselves over unproven tech." I hope you realize that something having a lot of money behind it for a few years isn't indicative that it's about to revolutionize the world. There are plenty of near-useless things that had lots of excitement behind them, and still didn't go anywhere.

    I've seen what GenAI and LLMs can do. It's a magic trick; it looks impressive, but for almost every possible use case just isn't helpful, and unfortunately for all of us, the magicians (i.e. OpenAI et al) are douchebags on top. This is not tech worth advocating for.

  • This. After reading "huge surplus of money," I was expecting a 5-digit figure. At current running costs, $6k is two years of runtime, but this assumes in turn that expenses never increase and no emergencies occur, which is extremely unlikely over that time period. Better to save it, I think.

    Of course, this all depends on what causes got suggested. I can't think of anything that'd be both worthwhile and (relatively) cheap, but who knows? Maybe something matching that description will come along.

    It's also worth noting how much funding Beehaw is currently getting on a monthly/annual basis. I tried looking at the Open Collective page, but there's no easy figure or chart there to look at that I can find.

  • there’s a full third who had a choice and refused to do anything

    We've got at least two options for blame when it comes to the last election:

    • Politicians who wield actual power and use it to do nothing (at best)
    • Voters, largely struggling to survive, wielding little power, even in aggregate, due to a rigged system

    I live for the day people blame the former instead of the latter, though I know I'll never see it.

  • I misremembered it as being about Iran back then too, but yeah. It's the War on Terror excuse all over again.