Posts 23 · Comments 2895 · Joined 3 yr. ago

  • Everybody starts somewhere. Few come out the gate being Depeche Mode. That doesn't mean it's not worth the struggle to get better.

  • If you want to fight for something, please learn what you stand behind.

    And how do you know I haven't? Do you have insight into my mind?

    Here's my stance: Fuck Google. When have they ever done anything for the benefit of humanity? If this turns out to do exactly what it says on the tin, I'll be happy to eat my words, but pardon me if I don't believe that Google is suddenly interested in clean energy.

  • Great. So we'll waste energy capturing and compressing a useless gas, then we'll just release that into the atmosphere when it's capitalistically convenient? Brilliant. Great work, Google. You've really gone green. /s

  • Human digital interfaces aren't a secret, but other things like remote viewing have been known about for a long time, and they were failures. There's even a whole movie about it, The Men Who Stare at Goats. Pointing to a few examples of actual conspiracies or weird projects doesn't mean every claim has validity. It just means the government is generally untrustworthy, and that means you need to take each claim on its own merits. You can't just generalize and say, "the government is untrustworthy, therefore believe the opposite of anything it says." That's being reactive, not skeptical.

    That's not to say that there's no scary tech out there (it's been demonstrated that movement can be seen, and even conversations heard, through walls by analyzing disturbances in Wi-Fi signals), but it's all very much within the realm of science, not the paranormal.

  • Oof, that's rough.

  • It's not just the money. It's the knowledge and expertise needed to use the algorithms at all... Not everyone has the time, energy, and attention to learn that stuff.

    I agree. That does not mean that LLMs are leveling the playing field with people who can't or won't get an education in computer science (and let's not forget that most algorithms don't just appear; they're crafted over time). LLMs are easy, but they are not better or even remotely equivalent. It's like saying, "Finally, the masses can tell a robot to build them a table," and claiming the expertise of those "elite" woodworkers is no longer needed.

    ...damn if I am tired of having to rely on "Zillow and a prayer" if I want to get a house or apartment.

    And this isn't a problem LLMs can solve. I feel for you, I do. We're all feeling this shit, but this is a capitalism problem. Until the ultracapitalists who are making these LLMs (OpenAI, Google, Meta, xAI, Anthropic, Palantir, etc.) are no longer the drivers of machine learning, and until the ultracapitalist companies stop using AI or algorithms to decide who gets what prices/loans/rental rates/healthcare/etc., we will not see any kind of level playing field you or the author are wishing for.

    You're looking at AI, ascribing it features and achievements it doesn't deserve, then wishing against all the evidence that it's solving capitalism. It's very much not, and if anything, it's only exacerbating the problems caused by it.

    I applaud your optimism—I was optimistic about it once, too—but it has shown, time and again, that it won't lead to a society not governed by the endless chasing of profits at the expense of everyone else; it won't lead to a society where the billionaires and the rest of us compete on equal footing. What we regular folk have gotten from them will not be their undoing.

    If you want a better society where you don't have to claw the most meager of scraps from the hand of the wealthy, it won't be found here.

  • Okay. Claims are not evidence. "I read it somewhere" is not even close to substantial, because anyone can write anything they want on the Internet. Without evidence or even consensus amongst experts, it just sounds like a conspiracy theory.

    The CIA is often the bogeyman, because they do lots in secret, and the government is inherently untrustworthy. That doesn't mean they have wireless brain interfaces, however.

  • The main maintainer of curl recently encountered a similar thing. Some users had used their own models to find and report hundreds of potential errors (and were open about using those tools when asked). After review, the maintainers incorporated around 40% of the suggested fixes, some addressing actual bugs and some being semantic quality-of-life improvements. He was surprised that an AI might actually be useful for something like that.

    But in the whole process, there was a human reviewing and checking the work. At no point were these fixes just taken as gospel, and even the reporters were using their own specialized models for this task. I think introducing AI-powered analysis isn't necessarily a bad thing, but relying upon public models and cutting out humans anywhere in the review and application process is a recipe for disaster.

  • No, they're not. That's just the claptrap the billionaire Tech Bros want you to believe. "Ooo, AGI is just around the corner! Buy in now to get it first! Ooo!"

    They just have access to militarized versions through specialized LoRAs and no restraints. It's nothing beyond what's possible for regular people right now; it's just that regular people will never get access to the kind of training data needed to achieve the same results (not that the government should be able to, either).

  • Then along comes the language model. Suddenly, you just talk to the computer the way you'd talk to another human, and you get what you ask for.

    That's not at all how LLMs work, and that's why people are saying this whole premise is a bad take. Not only do LLMs get things wrong, they sometimes fabricate answers outright; they do this because they're pattern-generation engines, not database parsers. Algorithms don't do that, because they digest a set of information and return a subset of it.

    Also, so what if algorithms cost a lot of money? That's not really an argument for why LLMs level the playing field. They're not analogous to each other, and the LLMs being foisted on the unassuming public by the billionaires are certainly not some kind of power leveler.

    Furthermore, it takes a fuckton more processing resources to run an LLM than it does an algorithm, and that's just counting cycles. Beyond cycles, the relative power needed to solve the same problem with an LLM versus an algorithm isn't even close. There's an entire branch of mathematics dedicated to algorithm analysis and optimization (complexity theory), but you'll find no such thing for LLMs, because they're not remotely the same.

    No, all we have are fancy chatbots at the end of the day that hallucinate basic facts, not especially different from the annoying Virtual Assistants of a few years ago.
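    The cost gap above is easy to put in back-of-the-envelope numbers. A minimal sketch, where the model size, answer length, and FLOPs-per-token rule of thumb are all assumptions for illustration, not measurements:

```python
# Back-of-the-envelope comparison (assumed numbers, not a benchmark):
# a classic algorithm's cost can be counted exactly, while an LLM's cost
# is dominated by model size no matter how simple the question is.
import math

def binary_search_comparisons(n: int) -> int:
    """Worst-case comparisons to find an item in a sorted list of n entries."""
    return math.ceil(math.log2(n)) if n > 1 else 1

# Looking up one record among a million sorted entries:
lookup_ops = binary_search_comparisons(1_000_000)

# Rough transformer rule of thumb: ~2 FLOPs per parameter per generated
# token. For an assumed 7-billion-parameter model answering in 50 tokens:
params = 7e9
tokens = 50
llm_flops = 2 * params * tokens

print(lookup_ops)          # 20
print(f"{llm_flops:.1e}")  # 7.0e+11
```

    Roughly twenty comparisons versus hundreds of billions of floating-point operations for the same one-record question; the exact figures vary by model, but the orders of magnitude are the point.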

  • It very much depends on your local laws. Despite the current administration, the law in the US, for example, is that you do not have to divulge passwords (under the Fifth Amendment right against self-incrimination). You can hand over your entire encrypted database intact, no destruction needed, and unless the authorities can decrypt it, it's useless as evidence in court. Prosecutors may still try to build a case without that evidence (as you pointed out, by obtaining decrypted correspondence from an accomplice), but it's not illegal to hand over encrypted data, even if they demand that you decrypt it; you are under no legal obligation to help incriminate yourself.

    That right may not exist in other countries, so as always, one should know their individual rights and threat model.

  • Right. The point is that they're not going to do you any favors with regard to the law. They have zero incentive to fight the law on your behalf, because your relationship is purely transactional.

    Another way to say it is, "No company is going to break the law for you."

  • For all questions: your own.

    Every company has to comply with the laws of the country in which it operates, and no company is going to go to jail for you. There are other encrypted email providers, but they still have to abide by their local laws. The best you can hope for is that they hold minimal data on you and that anything potentially incriminating is encrypted so that only you can decrypt it.

  • Something to do over Christmas!

  • Bruh, he's right there.

  • I'll add that if you want a more bleeding-edge experience within the Debian world, PikaOS is the Debian-based (not Ubuntu-based) analog of CachyOS.

    Edit: clarification

  • Google and other megacorps with AI slopbots: AI bots should be free to slurp up as much data as they want. It doesn't break copyright!

    Also those companies: Wait, AI isn't allowed to steal from us!

  • Evolution and genetics were distorted to prop up racism, chattel slavery, and colonialist missionaries, yet we don't dismiss them as pseudoscience.

    True, and in fact, bigotry was kind of a part of the origin story, not just a distortion. As it turns out, though, they're actually useful ways to describe contemporary biology. They started as a means for racism and bigotry, but they are not that any longer (excepting where bigots try to revive the racist elements every so often).

    This is because the domination of nature and our ecology by humanity has its ultimate roots in the domination of humanity by humans. Therefore, the solutions to our ecological problems are found by addressing our social and ecological problems simultaneously.

    This is not a valid syllogism. The premises are not necessarily interrelated; the authors are appealing to our intuition by saying that we dominate the one and we dominate the other, therefore they must be related. It does not follow that humans dominating each other is the root cause of dominating nature, so the final conclusion is rendered invalid.

    are you calling me the dimwit, the article's author, or the cultures from which the concept of wetiko comes?

    The article's authors. Apologies if you felt personally attacked.

    If you have specific references for "the facts of reality [that] are sufficient" I'm all ears.

    Happy to oblige! This is a good jumping off point, and as you'll notice, there's no need for "othering" in this system. As far as I understand it, it's actually still a work in progress (i.e. it's still a growing movement in Kurdistan), but it looks to be functional and scalable.

    https://en.wikipedia.org/wiki/Democratic_confederalism

    But with regard to the article, the biggest issue I have is that it's founded upon the "othering" of people who disagree. "It's not our fault or something that we should fix together. It's their fault, and we should try to eradicate their disease." That's not to say that we have to "tolerate their intolerance," but they aren't diseased for having bad paradigms any more than someone is diseased for liking pineapple on pizza or believing in a different god. Ideas aren't diseases.

    If we hope to have a socially and ecologically responsible society, it can't be founded upon othering, because that's the very division the authors are supposedly trying to reject.