• 6 Posts
  • 1.03K Comments
Joined 2 years ago
Cake day: July 14, 2023


  • I don’t believe the common refrain that AI is only a problem because of capitalism. People already disinform, make mistakes, take irresponsible shortcuts, and spam even when there is no monetary incentive to do so.

    I also don’t believe that AI is “just a tool”, fundamentally neutral and void of any political predisposition. This has been discussed at length academically. But it’s also something we know well in our idiom: “When you have a hammer, everything looks like a nail.” When you have AI, genuine communication looks like raw material. And the ability to place generated output alongside the original… looks like a goal.

    Culture — the ability to have a very long-term ongoing conversation that continues across many generations, about how we ought to live — is by far the defining feature of our species. It’s not only the source of our abilities, but also the source of our morality.

    Despite a very long series of authors warning us, we have allowed a pocket of our society to adopt the belief that ability is morality. “The fact that we can, means we should.”

    We’re witnessing the early stages of the information equivalent of Kessler Syndrome. It’s not that some bad actors who were always present will be using a new tool. It’s that any public conversation broad enough to be culturally significant will be so full of AI debris that it will be almost impossible for humans to find each other.

    The worst part is that this will be (or is) largely invisible. We won’t know that we’re wasting hours of our lives reading and replying to bots, tugging on a steering wheel, trying to guide humanity’s future, not realizing the autopilot is discarding our inputs. It’s not a dead internet that worries me, but an undead internet. A shambling corpse that moves in vain, unaware of its own demise.



  • Basically:

    Intel, AMD, and Microsoft are all going down a dead-end road called x86_64, especially on portable devices.

    Apple and Google took a turn ages ago, towards an alternative called aarch64. Originally just for phones, but now for everything.

    VR headsets, Raspberry Pis, IoT devices, etc. also tend to run aarch32 or aarch64.

    Microsoft has been trying to follow suit, but it hasn’t gone well so far. Windows on ARM (the aarch64 version of Windows) is supremely unpopular, for a lot of (mostly good) reasons.

    So people avoid the devices or ditch them because none of their apps run natively. But Microsoft basically has no choice but to keep pushing.

    So the end result is, Microsoft is subsidizing tons of excellent hardware that will never be used for Windows cuz it’s just not ready yet.

    But Linux is!

    Edit:

    Funny thing is, ARM (company behind aarch64) keeps shooting themselves in the foot, to the point where lots of companies are hedging their bets with a dark horse called RISC-V that never had a snowball’s chance in Hell before, but now could possibly win.

    And if Microsoft still hasn’t built a new home on aarch64 by the time that happens, they may accidentally be in the best position to capitalize on it.
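    As a quick illustration of the architectures being discussed, here’s one way to check what a given Linux (or macOS) machine reports. This is a minimal sketch; the exact strings `uname -m` returns vary by OS, kernel, and distro:

    ```shell
    #!/bin/sh
    # Print a friendly name for the CPU architecture this machine reports.
    arch="$(uname -m)"
    case "$arch" in
      x86_64|amd64)   echo "x86_64 (Intel/AMD)" ;;
      aarch64|arm64)  echo "aarch64 (64-bit ARM)" ;;
      armv7l|armv6l)  echo "32-bit ARM" ;;
      riscv64)        echo "riscv64 (RISC-V)" ;;
      *)              echo "other: $arch" ;;
    esac
    ```

    On a typical PC this prints the x86_64 line; on Apple Silicon or an ARM laptop, the aarch64 line.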

    1. Fuck AI
    2. This judge’s point is absolutely true:

    “You have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products,” Chhabria said. “You are dramatically changing, you might even say obliterating, the market for that person’s work, and you’re saying that you don’t even have to pay a license to that person.”

    3. AI apologists’ response to that will invariably be “but it’s sampling from millions of people at once, not just that one person”, which always sounds like the fractions-of-a-penny scene
    4. Fuck copyright
    5. A ruling against fair use for AI will almost certainly deal collateral damage to perfectly innocuous scraping projects like linguistic analysis. Even despite the Copyright Office’s acknowledgement of the issue:

    To prevent both harms, the Copyright Office expects that some AI training will be deemed fair use, such as training viewed as transformative, because resulting models don’t compete with creative works. Those uses threaten no market harm but rather solve a societal need, such as language models translating texts, moderating content, or correcting grammar. Or in the case of audio models, technology that helps producers clean up unwanted distortion might be fair use, where models that generate songs in the style of popular artists might not, the office opined.

    6. We really need to regulate against AI — right now — but doing it through copyright might be worse than not doing it at all

  • No but you don’t understand.

    Capitalism works because it pits everyone against each other and so even though every single person is greedy and unethical, they begrudgingly improve society overall because of reasons.

    All we have to do is make sure we teach every child that all humans are fundamentally greedy and evil and the only ethical response is to out-greedy and out-evil them.

    And then we’ll have a prosperous society!

    • Adam Smith basically