Hello everyone,

I’ve been wondering: why has no one built an entirely free (as in freedom) computer yet? Computers are one of the most important technologies society has ever created, yet we can’t freely share the knowledge of how they’re built. How is it that we still don’t have full knowledge of how our own systems operate?

I get that companies are largely the ones to blame, and I know there are alternatives like the Talos II from Raptor Computing, but still: how do we not have publicly available, complete schematics for even one modern computer? I’m talking all the way down to the firmware level: proprietary embedded controllers (ECs), microcode, hard drive/SSD firmware, network controllers, and so on. How do we not have a fully open system yet?

  • OneCardboardBox@lemmy.sdf.org · 7 points · 9 months ago

    I think it’s because such an undertaking would require a wide breadth of extremely specialized knowledge. It would require many experts coordinating intensely over many years, all to design something that:

    1. Will be obsolete within a few years
    2. Is outside the realm of replicability for individuals (I’ve never heard of anyone with a nanometer-scale photolithography room in their house)

    Item 1 is OK for hobbyists, who might value open source over newness, but item 2 all but guarantees that only big corporations can actually get involved. They don’t care about free and open source; they just want a computing platform that their engineers can develop a product for. As long as there’s enough documentation for their goals, open source is irrelevant.

    The power of modern computing comes partly from how it enables abstraction. You don’t need to understand the physics of electrons flowing through a transistor to write a video game. Overall, the open source community has generally converged on the idea that abstracting away the really hard stuff is an acceptable tradeoff.
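
    For a concrete (purely illustrative) example of what that abstraction buys you, here’s a tiny Python sketch: saving a game file is one call, even though the language runtime, the kernel, the filesystem, the bus driver, and the SSD’s proprietary firmware all do work underneath it.

        # Purely illustrative sketch: the high-level call never sees the layers below it.
        with open("savegame.dat", "wb") as f:
            f.write(b"player_hp=100;level=3")  # one line from the programmer's point of view
        # Underneath: the runtime, the kernel's VFS, the filesystem, the NVMe/SATA
        # driver, and the SSD controller's closed firmware (wear levelling, caching)
        # all run without the game developer needing to know they exist.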

    • eltimablo@kbin.social · 0 points · 9 months ago

      I actually disagree with point 1 to an extent. The startup work for such a machine would indeed require a lot of effort, but once that groundwork is in place, wouldn’t that make it easier to maintain momentum and release a successor?

      • OneCardboardBox@lemmy.sdf.org · 3 points · 9 months ago

        I guess it would depend on whether the project spawns a dedicated community that lasts a long time. Without a wide pool of knowledgeable contributors, I think it would be hard for the original team to support one design while also developing the next iteration.

        Not to bring it up as a whipping boy, but let’s take the case of Wayland, which is “just” a software protocol. It was started back in 2008 and is still under active development. As more projects support it, more edge cases keep coming up, which is why new features are added to the protocol all the time. In those 15 years, the developers have had to adjust to technologies that didn’t exist back in 2008, like Vulkan or the widespread adoption of 4K HDR displays.

        Now imagine that, but with every aspect of a computer. In 2008, DDR3 RAM was just a year old. Today we’re on DDR5, and you (probably) can’t buy a new machine that takes DDR3. PCIe 2.0 was the latest shit in 2007; now I see that PCIe 7.0 is planned for next year.

        A global corporation can support old products while also developing new technologies because it has unfathomable amounts of labor and capital at its beck and call.

        I think free software can keep up with proprietary offerings because the barrier to entry is relatively low: you just need free time and a source control client. It would be a different story if your project’s toolchain involved physical tools that cost millions of dollars.