Posts: 18 · Comments: 455 · Joined: 2 yr. ago

  • I'll say it again and again: the problem is neither Linus nor Kent, but the lack of resources for independent developers to do the kind of testing that is expected of the big corporations.

    Like, one of the issues that Linus yelled at Kent about was that bcachefs would fail on big-endian machines. You could spend your limited time and energy setting up an emulator of the PowerPC architecture, or you could buy the real hardware at pretty absurd prices — I checked eBay, and it was $2000 for 8 GB of RAM...

    But the big corpos are different. They have these massive CI/CD systems, which automatically build and test Linux on every architecture under the sun. Then they have an extra internal review process for these patches. And then they push.

    But Linux isn't like that for independent developers. What they do is just compile the software on their own machine, boot into the kernel, and if it works, it works. This is how some of the Asahi developers would do it, just booting into their new kernel on their Macs, and it's how I'm assuming Overstreet is doing it. Maybe there is some minimal testing involved.

    So Overstreet gets confused when he's yelled at for not having tested on big-endian architectures, because where is he supposed to get a big-endian machine he can afford that can actually compile the Linux kernel in less than 10 years? And even if you do buy or emulate a big-endian CPU, then you'll just get hit with "yeah, your patch has issues on machines with 2 terabytes or more of RAM," and so on.

    One option is to drop standards. The Asahi developers were allowed to just merge code without being subjected to the scrutiny that Overstreet has faced. This was in part due to having stuff in Rust, under the Rust subsystem — they had a lot more control over the parts of Linux they could merge into. The other reason was being specific to MacBooks: there's no point testing MacBook-specific patches on non-Mac CPUs.

    But a better option is to make the testing resources that these corporations use available to everybody. I think the Linux Foundation should spin up a CI/CD service, so people like Kent Overstreet can test their patches on architectures and setups they don't have at home, and get them reviewed before they are dumped on the mailing list — exactly like what happens at the corporations that contribute to the Linux kernel.

  • There is uksmd for RAM dedupe.

  • It wouldn't, I don't think. Secure Boot is the BIOS/UEFI verifying the rest of the boot chain, but this replaces the BIOS/UEFI with something malicious.

  • Deleted

    Permanently Deleted

  • No, the DuckStation dev obtained the consent of contributors and/or rewrote all GPL code.

    https://www.gamingonlinux.com/2024/09/playstation-1-emulator-duckstation-changes-license-for-no-commercial-use-and-no-derivatives/

    I have the approval of prior contributors, and if I did somehow miss you, then please advise me so I can rewrite that code. I didn't spend several weekends rewriting various parts for no reason. I do not have, nor want a CLA, because I do not agree with taking away contributor's copyright.

  • At just a glance, I have some theories:

    1. The project was named after the creator. Maybe they wanted it to seem more community-organized.
    2. Food reference. FOSS developers often name stuff after food, idk why. Maybe cuz they like mangos (I do too).
    3. Dodging copyright or trademark issues. Certain things, like town names (Wayland is a town in the US), are essentially impossible to copyright or trademark, so by naming your project after one you eliminate a whole host of potential legal issues.

    These are just theories, though. No reason is actually given, at least not one that I could find in 30s of searching.

  • This worked for me. Thankfully, I didn't have a hard crash during an update, so my system proceeded to boot normally.

    The craziest part is that I didn't google this. My computer crashed, I rebooted it via magic sysrq keys, and then booted to an error.

    I went on Lemmy on my phone out of frustration and by sheer chance one of the first things I saw was a solution.

  • I think I'm getting the same error as OP, and booting from a snapshot sadly does not work.

  • No, because it doesn't deviate enough from Arch to avoid breakages on updates. Just recently on Lemmy, someone was wondering why all their VLC plugins were uninstalled. That's an easy fix for someone who knows how to use pacman, but incidents like it make CachyOS not really a "just works" system.

  • Many Helm charts, like Authentik's or Forgejo's, integrate Bitnami Helm charts for their databases. That's why this is concerning to me.

    But I was planning to switch to operators like CloudNativePG for my databases anyway, and to disable the built-in Bitnami images. When using the built-in Bitnami images, automatic migration between major releases is not supported; you have to do it yourself manually, and that disappointed me.

  • Does that happen often? I had, apparently incorrectly, assumed those things were more or less fire and forget.

    Bootloaders are also software, affected by vulnerabilities (CVEs). But this comment did make me curious: would a person with threat model/usecase 1 from my comment above care about the CVEs that affect GRUB?

    Many of them do indeed seem to be non-issues, going from the list here.

    GRUB CVEs requiring the config file to be malicious, like this one, are pretty much non-issues. The config file is encrypted, in my setup at least (but again, that's not the default; also, idk if the config file is signed/verified).

    I think this one is somewhat concerning: USB devices plugged in could corrupt GRUB.

    Someone could possibly do something similar with hard drives, replacing the one in the system. The big theoretical vulnerability I'm worried about is someone crafting a partition in such a way that it achieves RCE through GRUB. Or maybe it's already happened; my research isn't that deep. With such a vulnerability, someone could shrink the EFI partition, put another partition there that GRUB reads, and then the code-execution exploit happens.

    But honestly, if someone can replace/modify hard drives, or add/remove USB devices, what if they just replace your entire motherboard with a malicious one? This is very difficult to defend against, but you could check for it by password-protecting your motherboard's firmware setup and logging into it on every boot to make sure the password is the same. (Although perhaps someone could copy the hashes from one motherboard to another; at least, I'm assuming the passwords are hashed.)

    But if something like that is in your threat model, it's important to note that Ethernet and much other device firmware is proprietary (meaning you cannot audit or modify the code), and those devices also have what's called "DMA" — direct memory access. They can read and write the Linux kernel's memory with permissions higher than root. So if I have access to your device, I could replace your wifi card with a malicious one that modifies stuff after you boot, or does any number of other things.

    What you are supposed to do is prevent tampering in the first place or, at a much lower cost, have "tamper-evident protection": things that inform you if the system was tampered with. Stickers over the screws are an easy and cheap example.

    But DEF CON has a village dedicated to breaking tamper-evident protection. Lol.

    I think if your adversary is a nation-state, Secure Boot usecase 1 is simply broken and doesn't work. It's too easy for them to replace any of the physical components with malicious ones, because there is no verification of those. I think Secure Boot usecase 1 is for protecting against corporate espionage at mid- to high-tier corpos. Corporations also tend to issue people devices, and they can ensure those devices have tamper evidence/tamper resistance on top of Secure Boot. Of course, I think a nation-state can get through that too, but I don't think that's included in the threat model.

    Nation-states can easily break Secure Boot, and probably have methods in addition to or separate from Secure Boot for protecting themselves.

    Wait what, that just seems like home directory encryption with extra steps 🤦 I guess I’ll go back to Veracrypt then.

    Performance on LUKS might be better since LUKS is a first-class citizen, but maybe performance with VeraCrypt is better since only the home directory is encrypted. I tried DuckDuckGo, but the top results were AI slop with no benchmarks, so I'm not gonna bother doing further research.

  • I'm on my phone rn and can't write a longer post. This comment is to remind me to write an essay later. I've been using authentik heavily for my cybersecurity club and have a LOT of thoughts about it.

    The TLDR about Authentik's risk of enshittification is that Authentik follows a pattern I call "supportware": extremely (intentionally/accidentally) complex software whose docs (intentionally/accidentally) lack the edge cases, because you are supposed to pay for support.

    I think this is a sustainable business model, and I think Keycloak (and other Red Hat software) has some similar patterns.

    The TLDR about Authentik itself is that it has a lot of features, but not all of them are relevant to your usecase or worth the complexity. I picked up Authentik for invites (which afaik are rare; also, the official docs about setting up invites were wrong, see supportware), but invites may not be something you care about.

    Anyway, longer essay/rant later. Despite my problems, I still think Authentik is the best for my usecase (cybersecurity club), and the other options I've looked at, like Zitadel (seems more developer-focused) or LDAP + an SSO service (no invites afaik), fall short of it.

    Sidenote: Microsoft Entra offers similar features to what I want from Authentik, but I wanted to self-host everything.

  • but wouldn’t only the bootloader need to be signed

    So the bootloader also gets updated, and new versions of the bootloader need to get signed. If the BIOS is responsible for verifying the bootloader's signature, then how does the operating system update the bootloader?

    To my understanding a tamper-proof system already assumes full disk-encryption anyway

    Kinda. The problem here, IMO, is that Secure Boot conflates two usecases/threat models into one:

    1. I am a laptop owner who wants to prevent tampering with the software on my system by someone with physical access to my device
    2. I am a server operator who wants to enforce usage of only signed drivers and kernels. This locks down modification/insertion of drivers and kernels as a method of obtaining a rootkit on my servers.

    The second person does not use full disk encryption, or care about physical security at all, really (because they physically lock up the server racks).

    What happens in this setup is that the bootloader checks the kernel's signature, and the kernel checks the drivers' signatures... and they enable this feature depending on whether the Secure Boot EFI motherboard variable is enabled. So this feature isn't actually tied to the motherboard's ability to verify the bootloader. For example, GRUB has its own signature verification that can be enabled separately.
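    For reference, that GRUB-level verification is driven by its `check_signatures` variable and a trusted GPG key; a sketch based on the GRUB manual (the key path is a made-up example):

```
# grub.cfg fragment: GRUB's own GPG-based verification, independent of
# the motherboard's Secure Boot state.
trust (memdisk)/boot/grub/boot.pub
set check_signatures=enforce
# From here on, GRUB refuses to load any kernel, initrd, or module that
# lacks a valid detached .sig made with the trusted key.
```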

    The first person's threat model doesn't include malware already in their system. So they can enable full disk encryption, and then they shouldn't need to care about the kernel and drivers being signed.

    EXCEPT THEY ACTUALLY DO, BECAUSE NOBODY DOES THE SETUP WHERE THE KERNELS AND DRIVERS ARE ENCRYPTED BY DEFAULT.

    You must explicitly ask for this setup in Linux distro installers (at least, all the ones I've used). By default, /boot, where the kernel and drivers are stored, sits unencrypted on a separate external partition, not in the LUKS-encrypted partition.

    What I do is have /boot/efi be the external EFI partition. /boot/efi is where the bootloader is installed, and the kernels are stored in /boot, which is located on my encrypted BTRFS partition. The GRUB bootloader is the only unencrypted part of my system, like the setup you suggested. But I had to ask for this by changing the partitioning scheme on CachyOS, and on other distros I used before it.
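    Concretely, the layout is something like this (device names and sizes are made up for illustration):

```
NAME          FSTYPE        MOUNTPOINT
nvme0n1
├─nvme0n1p1   vfat          /boot/efi   <- unencrypted EFI partition; GRUB lives here
└─nvme0n1p2   crypto_LUKS
  └─root      btrfs         /           <- everything else, including /boot and the kernels
```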

    The most interesting part of this setup is that GRUB cannot see the config it needs to boot. It guesses at which disk it should decrypt, and if I have a USB drive plugged in, it guesses wrong and my system won't boot.

    Continuing: the problem with setups like this is that in order to verify the bootloader, you must have Secure Boot enabled. GRUB will then read this EFI configuration and attempt to verify the kernels and drivers. As far as I can tell, there is no way to disable this other than changing the source code or binary-patching GRUB.

    I have a blog post where I explored this: https://moonpiedumplings.github.io/playground/arch-secureboot/index.html

    So this means that even in setups where everything is encrypted except GRUB, you still have to sign the kernels and drivers in order to have a bootable system (unless you patch GRUB).
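    For reference, "signing the kernels" with your own keys looks roughly like this (a sketch: `sbsign` comes from the sbsigntools package, you'd also need to enroll the certificate in your firmware with something like sbctl or KeyTool, and all file names here are examples):

```shell
# Generate a self-made Secure Boot signing key and certificate:
openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=my secure boot key/" -keyout db.key -out db.crt

# Sign the kernel image so the verified chain will accept it (commented
# out because it needs a real kernel image and sbsigntools installed):
# sbsign --key db.key --cert db.crt --output vmlinuz.signed /boot/vmlinuz
```

    And every kernel update means re-signing, usually automated with a pacman/apt hook, which is part of the maintenance cost of this setup.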

    I eventually decided that this wasn't worth it, and gave up on secure boot for now.

  • So Signal does not have reproducible builds, which is very concerning security-wise. I talk about it in this comment: https://programming.dev/post/33557941/18030327 . The TLDR is that no reproducible builds = impossible to verify that you are getting an unmodified version of the client.
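    To illustrate why that matters: with reproducible builds, anyone can rebuild the client from the pinned source and compare digests against what's being distributed. A toy sketch with stand-in files:

```shell
# Stand-ins for "my rebuild" and "the distributed binary"; with a
# deterministic build, these would be bit-identical:
printf 'pretend this is a deterministic build\n' > build-mine.bin
printf 'pretend this is a deterministic build\n' > build-theirs.bin

# Verification then reduces to comparing digests:
sha256sum build-mine.bin build-theirs.bin
# Matching digests mean the distributed binary really is that source.
# Without reproducibility, digests differ even for honest builds, so a
# subtly backdoored client is indistinguishable from build noise.
```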

    Centralized servers compound these security issues. If the client is vulnerable to some form of replacement attack, then an attacker could use a much more subtle, difficult-to-detect backdoor, like a weaker crypto implementation that leaks meta/userdata.

    With decentralized/federated services, if a client is using servers other than the "main" one, you either have to compromise both the client and the server, or compromise the client in a very obvious way that causes it to send extra data to servers it shouldn't be sending data to.

    A big part of the problem comes from what GitHub calls "bugdoors": "accidental" bugs that are backdoors. With a centralized service, it becomes much easier to introduce bugdoors, because all the data routes through one service, which could silently take advantage of the bug on its own servers.

    This is my concern with Signal being centralized. But mostly I'd say don't worry about it, threat model and all that.

    I'm just gonna @ everybody who was in the conversation. I posted this top level for visibility.

    @Ulrich@feddit.org @rottingleaf@lemmy.world @jet@hackertalks.com @eleitl@lemmy.world @Damage@feddit.it

    EDIT: elsewhere in the thread there is discussion of what is probably a nation-state wiretapping attempt on an XMPP service: https://www.devever.net/~hl/xmpp-incident

    For a similar threat model, Signal is simply not adequate for the reasons I mentioned above, and that's probably what poqVoq was referring to when he mentioned how it was discussed here.

    The only timestamps shared are when they signed up and when they last connected. This is well established by court documents that Signal themselves share publicly.

    This, of course, assumes I trust the courts. But if I am seeking maximum privacy/security, I should not have to do that.

  • https://www.devever.net/~hl/xmpp-incident

    This article discusses some mitigations.

    You can also use a platform like SimpleX or the Tor-routed ones, but they aren't going to offer the features of XMPP. It's better to just not worry about it: this kind of attack is so difficult to defend against that it should be out of the threat model of the vast majority of users.

  • Ubuntu used to have a tool that did something similar, but that tool is dead now.

    I'm very happy to see a successor.

  • Straying away from utilities, games are always fun to host. I got started with self-hosting by hosting a Minecraft server, but there are plenty of options.

  • So instead you decided to go with Canonical's Snap and its proprietary backend, a non-standard deployment tool that was forced on the community.

    Do you avoid all containers because they weren't the standard way of deploying software for "decades" as well? (I know people who actually do that, though.) And many of my issues with developers and vendoring, which I mentioned in the other thread I linked earlier, apply to containers as well.

    In fact, they also apply to Snap, or even to custom packages distributed by the developer. Arch packages are little more than shell scripts; deb packages have pre/post hooks which run arbitrary bash or Python code; RPM is similar. These "hooks" are almost always used for install-time tasks. It's hypocritical to be against curl | bash but in favor of any form of packages distributed by the developers themselves, because all the issues and problems with curl | bash apply to any form of non-distro-distributed packages — including snaps.
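    To make that concrete, here's a minimal, hypothetical deb maintainer script: the DEBIAN/postinst that dpkg executes as root right after unpacking a package. Nothing constrains it any more than a curl | bash one-liner:

```shell
#!/bin/sh
# Hypothetical DEBIAN/postinst -- dpkg runs this as root on install.
set -e
echo "postinst running as: $(id -un)"
# From here, the script can do anything root can: write to any path,
# start services, fetch and run more code -- the same trust you extend
# to a curl | bash installer.
```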

    You are willing to criticize bash scripts because you can't immediately know what they do to your machine, and I recognize those problems, but guess what Snap is doing under the hood to install software: a bash script. Did you read that bash script before installing the microk8s snap? Did you read the tens of others in the repo, used for tertiary tasks, that the snap installer also calls?

    # Try to symlink /var/lib/calico so that the Calico CNI plugin picks up the mtu configuration.

    The bash script used for installation doesn't seem to be sandboxed, either, and it runs as root. I struggle to see any difference between this and a generic bash script used to install software.

    Although, almost all package managers have commonly-used pre/during/post install hooks (Nix/Guix being the exceptions), so it's not really valid to put, say, deb on a pedestal while dogging on other package managers for using arbitrary bash (or Python) hooks.

    But back on topic: in addition to all this, you can't even verify that the bash script in the repo is the one you're getting, because the Snap backend is proprietary. Snap is literally a bash installer, but worse in every way.