
Posts: 24 · Comments: 409 · Joined: 2 yr. ago

  • That's how they're trying to sell it. But why did Elastic and Redis drop the SSPL if it was so good, and why did the OSI not accept it as open source? The answers are in the link below, but the TL;DR is that the SSPL is vague and, as a consequence, makes it risky to offer a service built on the product unless you are large enough to strike a big, lucrative deal with the product's owner.

    This stifles competition and innovation.

    Case in point: Mongo DBAs are nearly non-existent outside California and managed MongoDB is much more expensive than managed PostgreSQL/MariaDB services, because it is only offered by 3 providers.

    https://www.ssplisbad.com/

  • Saying you are "MongoDB compatible" is an IP violation?

    Meanwhile they are still actively opposing the creation of an open document database standard, which would make it unnecessary to use their brand name to indicate compatibility.

    They also sent Peter a cease and desist for saying MongoDB is not open source. They themselves withdrew the SSPL from OSI review when it became clear it would be rejected for not being open source.

    I wonder how much 💩 is in their heads for not realizing that everyone gave up on the SSPL, and that Postgres is thriving because of its permissive license: even the tiniest local managed-services providers offer a PostgreSQL service, there's tons of DBA talent available, and thanks to the competition in managed services, managed Postgres is much cheaper than managed MongoDB.

    They'll keep shooting themselves in the foot until someone else puts a lead shoe on it.

  • Shoutout to FerretDB doing God's work.

    Putting data from apps that were built for MongoDB into Postgres (a minimal connection sketch follows at the end of this comment).

    https://github.com/FerretDB/FerretDB

    And here is their first-hand account of trying to help the MongoDB ecosystem by building an open standard for document databases:

    In 2021, we founded FerretDB with a bold vision: to return the document database market to its open source roots by creating the leading open source alternative to MongoDB, built on Postgres.

    For years, we tirelessly advocated for an open standard. We built a popular product, collaborated with Microsoft to open source DocumentDB, and held high-level meetings with cloud providers and stakeholders to make the case for a standard that is similar to SQL, but for document databases.

    In 2023, a MongoDB VP reached out to me. On a Zoom call, we were threatened with a lawsuit for building a compatible product. Being called a thief by a leader of a (then) $35B company was a moment of stark clarity on MongoDB's opinion about our work and the need for a standard. At the end of that call, I told them the industry would inevitably come together to create the open standard they refused to provide.

    Their response? "They would never do that. They are our trusted partners."

    Today, the market has spoken. The Linux Foundation has announced the adoption of the DocumentDB project [1] to create an open standard with MongoDB compatibility, the exact thing we were sued for earlier this year. [2]

    This is a monumental win for developers and enterprises everywhere. It validates the years of work we've poured into this mission.

    It is also telling that MongoDB's SSPL license has been abandoned by Elastic and Redis, the two prominent companies who were initially in favor of MongoDB's attempt to redefine open source. All clear signs that MongoDB's behavior is not appreciated by developers. [...]

    https://www.linkedin.com/posts/farkasp_in-2021-we-founded-ferretdb-with-a-bold-activity-7365677216912859136-jNNJ
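
    Because FerretDB speaks the MongoDB wire protocol, a stock MongoDB driver works unchanged. A minimal sketch, assuming a local FerretDB instance listening on the default port 27017 with authentication disabled; the database and collection names are placeholders:

    ```python
    # A stock MongoDB driver talking to FerretDB instead of MongoDB.
    # Assumes FerretDB on localhost:27017 with auth disabled; "appdb" and
    # "todos" are placeholder names.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017/")
    db = client["appdb"]

    # The app issues ordinary MongoDB operations...
    db.todos.insert_one({"title": "try FerretDB", "done": False})
    print(db.todos.find_one({"done": False}))

    # ...while FerretDB translates them into SQL and stores the documents in Postgres.
    ```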

  • It is written in Kuchython and federated.

  • Disclaimer: I haven't tried it in a while, so I can't speak to the current quality of COSMIC and Pop!_OS 24.04.

    But no, I am not surprised it's taken them this long. They started almost from scratch: they built an entire desktop environment using only the bare-bones Iced toolkit for rendering and writing their own equivalent of GTK/Qt on top of it. libcosmic is a massive undertaking and I have been worried about them.

    But it has enormous potential: they know how to do tiling and styling very well, and Rust makes it hard not to write secure performant code.

    I admire their bravery and perseverance and have faith that COSMIC will eventually be amazing.

    And I'll buy my next laptop from them to support them.

  • Cause systemd is pretty amazing 😎

    <Jumps behind cover>

  • CPU requests were filling up on my setup. Got a dirty, cracked, used IdeaPad with 4C/8T (i5-8265U) and an NVMe SSD to reinforce my Talos Kubernetes cluster. Cost €65.

    Upgraded it from 4GB soldered + 4GB stick RAM to 20GB RAM total. 16GB DDR4 sticks only cost €20 on the used market nowadays :)

    RAM upgrade done, still need to add it to the cluster.

    Then I'll install a nice observability stack: VictoriaMetrics, VictoriaLogs, and Grafana, and finally set up alerting. Afterwards, I'm thinking of adding Karakeep.
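
    To check how full the CPU requests actually are before and after adding the node, something like this works. A rough sketch, assuming the official kubernetes Python client and a kubeconfig that can reach the cluster; the to_millicores helper is mine and ignores exotic quantity suffixes:

    ```python
    # Sum CPU requests per node and compare with each node's allocatable CPU.
    from collections import defaultdict
    from kubernetes import client, config

    def to_millicores(qty: str) -> int:
        # Handles "250m" and plain core counts like "4" or "0.5".
        return int(qty[:-1]) if qty.endswith("m") else int(float(qty) * 1000)

    config.load_kube_config()
    v1 = client.CoreV1Api()

    requested = defaultdict(int)
    for pod in v1.list_pod_for_all_namespaces().items:
        if pod.status.phase in ("Succeeded", "Failed") or not pod.spec.node_name:
            continue  # finished or unscheduled pods don't count against a node
        for c in pod.spec.containers:
            reqs = c.resources.requests if c.resources and c.resources.requests else {}
            if "cpu" in reqs:
                requested[pod.spec.node_name] += to_millicores(reqs["cpu"])

    for node in v1.list_node().items:
        name = node.metadata.name
        alloc = to_millicores(node.status.allocatable["cpu"])
        print(f"{name}: {requested[name]}m requested of {alloc}m allocatable")
    ```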

  • You sounded too believable 😅

  • OK this may come across as weirdly radical: I don't actually consider doing X to non-human animals as bad as doing X to human animals. But this is vegancirclejerk so here we go:

    What if I reframe it a bit?

    Some owners love their slaves and really take good care of them before sending them off to a slaughterhouse at the end of their youth.

  • Yeah I'm all for being consistent in how the EU deals with war crimes, ongoing genocide and sanctions, and being less dependent on the USA.

    But Sanchez seems very close to China: I wouldn't want him to decide on these things

  • VirtualBox is VMs. This is containers.

    Containers are better, especially for a desktop: they are smaller, faster in every sense, and they don't permanently reserve a fixed slice of resources; they scale dynamically, just like any other process on the host.
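
    You can see the "just another process" part for yourself. A small sketch, assuming podman is installed and docker.io/library/alpine can be pulled; it starts a container, then looks up its PID in the host's process table:

    ```python
    # Shows that a running container is an ordinary process on the host.
    import subprocess

    def run(*args: str) -> str:
        return subprocess.run(args, capture_output=True, text=True, check=True).stdout.strip()

    # Start a short-lived container in the background.
    cid = run("podman", "run", "-d", "--rm", "docker.io/library/alpine:latest", "sleep", "60")

    # Ask podman for the container's PID as the host kernel sees it.
    pid = run("podman", "inspect", "--format", "{{.State.Pid}}", cid)

    # /proc/<pid> exists on the host: the container shares the host kernel and is
    # scheduled like any other process, with no fixed RAM/CPU reservation.
    with open(f"/proc/{pid}/comm") as f:
        print(f"container {cid[:12]} is host PID {pid} ({f.read().strip()})")

    run("podman", "stop", cid)
    ```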

  • And Alpine, the one @Sxan started with.

    Alpine has apk and is (or at least should be) the most used base for container images. It is very small, much smaller than Debian, so images built on it have a smaller attack surface and are quick to pull and start.

    If you've never worked with Docker/Podman/OCI containers, you've been missing out on a lot of good stuff, but you may still have heard of Alpine via the amazing "I use Linux as my operating system" copypasta:


    "I use Linux as my operating system," I state proudly to the unkempt, bearded man. He swivels around in his desk chair with a devilish gleam in his eyes, ready to mansplain with extreme precision. "Actually", he says with a grin, "Linux is just the kernel. You use GNU+Linux!' I don't miss a beat and reply with a smirk, "I use Alpine, a distro that doesn't include the GNU Coreutils, or any other GNU code. It's Linux, but it's not GNU+Linux." The smile quickly drops from the man's face. His body begins convulsing and he foams at the mouth and drops to the floor with a sickly thud. As he writhes around he screams "I-IT WAS COMPILED WITH GCC! THAT MEANS IT'S STILL GNU!" Coolly, I reply "If windows were compiled with GCC, would that make it GNU?" I interrupt his response with "-and work is being made on the kernel to make it more compiler-agnostic. Even if you were correct, you won't be for long." With a sickly wheeze, the last of the man's life is ejected from his body. He lies on the floor, cold and limp. I've womansplained him to death.

  • You are absolutely right. It baffles me why they'd put 128GB of RAM in there and use an SoC architecture where RAM is shared with the GPU, to the detriment of upgradability, if not for AI.

    Any gamer would prefer upgradable RAM and upgradable GPU, especially from Framework.

    How else would you explain this decision to compromise their brand values and overspend on RAM, if not for AI?

  • What are the European alternatives?

  • For gaming, yes.

    But the Framework Desktop seems to be made for a different use case:

    128GB of unified RAM

    Unified means both the CPU and the GPU have access to it. Why would you need that much RAM, and why would you care whether the GPU has direct access?

    Neural networks. You can fit a pretty serious LLM into 128GB of RAM, and if the GPU has direct access, you can still run inference at reasonable speed.

    I love my 9070XT, but you can't run anything approaching what Claude or ChatGPT gives you in just 16GB of VRAM.
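
    Rough back-of-the-envelope numbers (mine, not Framework's), counting only the model weights and ignoring the KV cache and runtime overhead:

    ```python
    # Memory needed just for the weights at a few sizes and quantization levels.
    # Parameter counts and bit widths are illustrative examples.
    def weight_gb(params_billion: float, bits_per_param: int) -> float:
        return params_billion * 1e9 * bits_per_param / 8 / 1e9  # decimal GB

    for params, bits in [(7, 16), (70, 8), (70, 4), (123, 8)]:
        print(f"{params:>4}B @ {bits:>2}-bit ≈ {weight_gb(params, bits):6.1f} GB")

    # A 16GB card tops out around 7B at 16-bit or ~30B at 4-bit, while 128GB of
    # unified memory holds a 70B model at 8-bit with room left for the KV cache.
    ```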

  • Yeah, it's been great for a few years, but it's slowly falling apart. I've been putting off getting a replacement, as it looks like the only options are downgrading or paying triple the cost (or more).

  • I use a Reverb G2, which is driven directly by the host PC through OpenXR, with no compression or onboard processing at all.

    I only use it for flight sims.

  • Understood. I'm wired, so no compression. It works well for me.

  • Valid. For any non-local AI, your data will need to be decrypted at some point on some server. That is inherently less secure than Proton's traditional services, which are built around the idea that the server never sees your decrypted data.

    Still, this is a smaller privacy nightmare than most AI services.