
  • You can put it in the dishwasher to clean it. Just make sure to dry it and oil it a bit afterwards, otherwise it will rust. In most countries, this is covered by structured teaching in chemistry, contained within the concept of ”school”.

  • You can probably pay for a dishwasher.

  • For music production on a hobby level? Linux is not what you want.

    The VST availability is abysmal. For a DAW, you can choose between Reaper and Ardour. Both are reasonably good, but without decent third party VSTs you’ll suffer. You won’t get iLok working, you won’t get any commercial plugins working. Your old project files won’t open.

    Now, if you are exclusively working with Airwindows plugins (look it up!) in Reaper, you could get away with a Linux migration. Cakewalk and Ableton? Not a chance in hell.

    Go buy a cheap used 16GB M1 Mac Mini. Music production stuff ”just works”. Given your config, looks like that could be within budget. Or upgrade your old machine to Windows 11, pick your poison.

  • Fine, take the structured approach to ”Linux”:

    • 3-5 years of university studies with a well designed curriculum, including operating systems basics, networking, security, data structures and compilers. This will get you the basic stuff you need to know to further delve into ”Linux”.
    • Add MIT’s ”Missing Semester” online course. This will get you more proficient in practice.
    • Go grab a RedHat certification (or don’t, it’s not worth the paper it’s printed on). This will ensure you have a paper certifying you are sufficiently indoctrinated. It’s also a structured course in Linux.
    • Go do stuff with your newly acquired knowledge and gradually build up your competences.

    If that investment seems a bit steep, take only the last step, build a homelab and take a structured approach to any interesting subjects you encounter doing that.

  • Structured approach to what? You don’t take a structured approach to a hammer, you use it as a tool to accomplish something.

    ”The Linux Programming Interface” is an excellent book, if you are interested in interacting with the Linux kernel directly, but somehow I doubt that’s what OP wants to do. I doubt OP knows what he wants to do.

    Besides, please note that I did encourage taking a structured approach to stuff discovered on the way. But taking a structured approach to ”Linux” is just a bad idea, it’s far too broad a topic.

    Edit: RedHat has their certification programs. These are certainly structured. You’ll get to know RedHat and the RedHat™ certified way of doing things. That’s probably the closest thing to what OP wants. You even get a paper at the end if you pay up. This is not the most efficient way to get proficient.

  • You are probably approaching this from the wrong angle. Linux, and computers in general, are tools. Figure out what you want to use it for, and then do it. One example would be to build a homelab with jellyfin and nextcloud.

    On the path to that goal, you’ll find problems and tasks for which there exists very nice structured resources. For example, you might want some security, a perfect opportunity to read a book on networking and firewalls.
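    For a concrete starting point, that kind of homelab can be sketched as a Docker Compose file. This is a minimal sketch, not a hardened setup: the image names are the official Docker Hub ones, but the ports and host paths are placeholder assumptions you’d adapt.

```yaml
# docker-compose.yml: minimal homelab sketch.
# No TLS, no reverse proxy; /srv/media is an example path only.
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"            # Jellyfin web UI
    volumes:
      - /srv/media:/media      # your media library (adjust)
      - jellyfin-config:/config
    restart: unless-stopped

  nextcloud:
    image: nextcloud
    ports:
      - "8080:80"              # Nextcloud web UI
    volumes:
      - nextcloud-data:/var/www/html
    restart: unless-stopped

volumes:
  jellyfin-config:
  nextcloud-data:
```

    Getting from this to something safe to expose is exactly where the structured side quests (firewalls, reverse proxies, backups) come in.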

  • Every time someone says something positive about BTRFS, I’m compelled to verify whether RAID6 is usable.

    The RAID 5 and RAID 6 modes of Btrfs are fatally flawed, and should not be used for "anything but testing with throw-away data."

    Alas, no. The Arch wiki still contains the same quote, and friends don’t let friends store data without parity.

    So in the end, the best BTRFS can do right now is running RAID10 for a storage efficiency of 50%. Running dedup on that feels a bit wasteful…

    (Sidenote: actually, ZFS runs dedup after per-block compression, so it can only dedup blocks that are identical. Still works though, unlike when people do user-level .tar.gz-style compression. Then it’s game over.)

  • Yup. Apparently it got much better last year, but don’t turn it on unless you know what you are doing.

  • No idea how flatpak or snap works here (I want my RPMs, dammit), but I bet someone started adding compression to something at some point.

    You can’t deduplicate already compressed data, except in theory. If you want deduplication, do that first, then compress the data. (i.e. use ZFS. Friends don’t let friends use subpar filesystems.)
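    A quick way to convince yourself of that ordering argument, in plain Python, with zlib standing in for a filesystem’s per-block compressor (block contents and sizes are made up for illustration):

```python
import hashlib
import zlib

# Filesystem-style: compress each block independently (the ZFS approach),
# then dedup by content hash. Identical input blocks compress to
# identical bytes, so the duplicate is still detectable.
blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096]
compressed = [zlib.compress(b) for b in blocks]
unique = {hashlib.sha256(c).digest() for c in compressed}
print(len(unique))  # 2 -> the duplicate block still dedups

# User-level .tar.gz-style: one compression stream over everything.
# The same blocks in a different order become different opaque blobs,
# so block-level dedup has nothing to match on.
stream1 = zlib.compress(blocks[0] + blocks[1])
stream2 = zlib.compress(blocks[1] + blocks[0])
print(stream1 == stream2)  # False -> game over for dedup
```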

  • Without that, the UN would be even more useless. It’s a discussion forum, not a federation. You need to have ”evil” at the table. And nuclear capable evil won’t sit at the table unless they get a big veto to swing around.

    But they are in fact at the table. Communicating. Most likely in bad faith, but still communicating and talking.

  • Lustre 2.16 got released recently, so in a year or so you may actually be able to run commercially supported Lustre with IPv6 support. Yay!

    After that, it’s only a matter of time before it’s finally possible to start testing supercomputers with IPv6! (And finally building a production system with IPv6 a few more years after that, when all the bugs have been squashed)

    Look at the Top500 list. Fucking everyone runs Lustre somewhere, and usually old versions. The US strategic nuclear weapons research is practically all on Lustre. My guess is most weather forecasting globally runs on Lustre. (Oh, and a shitton of AI of course.)

    Up until now, you were stuck with mounting your filesystem over IPv4 (well, kinda IPv4 over RDMA, ish). If you want commercial support for your hundreds of petabytes (you do), you still can’t migrate. And this isn’t a small indie project without testers, it’s commercially supported with billions in revenue, supporting compute hardware for even more money.

    My point with this rambling is that an open-source project this widely deployed, depended upon and well funded still failed to roll out IPv6 support until now. The long tail of migrating the world to IPv6 hasn’t even begun yet, we are still in the early days. Soon someone will start looking at the widely deployed, depended-upon and badly funded stuff.

    And maybe, if IPv6 hadn’t tried to change a bunch of extra stuff, we’d be further along. (Though, in the specific case of Lustre, I’ll gladly accuse DDN and Whamcloud of being incompetent…)

  • In the real world, addresses are an abstraction to provide knowledge needed to move something from point A to point B. We could use coordinates or refer to the exact office the recipient sits in, but we don’t. Actually, we usually try to keep it at a fairly high level of abstraction.

    The analogy is broken, because in the real world, we don’t want extremely exact addressing and transport without middlemen. We want abstract addresses, with transport routing partially to fully decoupled from the addressing scheme. GP provides a nice argument for IPv4.

    I know how NAT works, but we are working within the constraints of a very broken analogy here. Also yes, internal logistics can and will be the harbinger of unnecessary bureaucracy, especially when implemented correctly.

  • And yet, in the real world we actually use distribution centers and loading docks, we don’t go sending delivery boys point to point. At the receiving company’s loading docks, we can have staff specialise in internal delivery, and also maybe figure out if the package should go to someone’s office or a temporary warehouse or something. The receiver might be on vacation, and internal logistics will know how to figure out that issue.

    Meanwhile, the point-to-point delivery boy will fail to enter the building, then fail to find the correct office, then get rerouted to a private residence of someone on vacation (they need to sign personally of course), and finally we need another delivery boy to move the package to the loading dock where it should have gone in the first place.

    I get the ”let’s slaughter NAT” arguments, but this is an argument in favour of NAT. And in reality, we still need to have routing and firewalls. The exact same distribution network is still in use, but with fewer allowances for the recipient to manage internal delivery.

    Personal opinion: IPv6 should have been almost exactly the same as IPv4, but with more numbers and a clear path to do transparent IPv6 to IPv4 traffic without running dual stack (maybe a NAT?). IPv6 is too complex, error prone and unsupported to deploy without shooting yourself in the foot, even now, a few decades after introduction.

  • Most arch users are casuals that finally figured out how to read a manual. Then you have the 1% of arch users who are writing the manual…

    It’s the Gentoo and BSD users we should fear and respect, walking quietly with a big stick of competence.

  • Yeah, that’s the thing.

    The gaming market only barely exists at this point. That’s why Nvidia can ignore the gaming market for as long as they want to.

  • ~~Peasants~~ Gamers buy cheap ~~inference cards~~ gaming cards.

    The vast majority of Nvidia’s sales globally are top-of-the-line AI SKUs. Gaming cards are just a way of letting data scientists and developers have cheap CUDA hardware at home (while allowing some Cyberpunk), so they keep buying NVL clusters at work.

    Nvidia’s networking division is probably a greater revenue stream than gaming GPUs.

  • I have fucked around enough with R’s package management. It makes Python’s look like a goddamn dream. Wrapping containers around it is just polishing a turd. I still have nightmares from building containers with R in automated pipelines, ending up at something like 8 GB per container.

    Also, good luck getting reproducible container builds.

    Regarding locales - yes, I mentioned that. That’s a shitty design decision if I ever saw one. But within a locale, most Excel documents from last century onwards should work reasonably well. (Well, normal Excel files. Macros and VB really shouldn’t work…) And it works on normal office machines, you can email the files, and you can give it to your boss. And your boss can actually do something with it.

    I also think Excel should be replaced by something. But not R.

  • R, the language where dependency resolution is built upon thoughts and prayers.

    Say what you want about Excel, but compatibility is kinda decent (ignoring locales and DNA sequences). Meanwhile, good luck replicating your R installation on another machine.