
  • The difference is rolling vs stable release.

    Debian 13 is out, and it will stay exactly the same Debian 13 it was at release, even 5 years from now. The only changes are bugfixes, security patches, etc. No new features. This means you can basically do unattended sudo apt update && sudo apt upgrade with no problems. By the time Debian 14 comes out, there will have been a ton of changes to upstream software. Updating from 13 to 14 might be a one-click fix, or it might take effort fixing configs and ensuring the new software works.
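    For stable Debian, automated updates are even a supported feature. A minimal sketch using Debian's standard unattended-upgrades package (the exact debconf/config steps may vary by release, so double-check on your system):

    ```shell
    # Install the automation package
    sudo apt install unattended-upgrades

    # Enable it via the debconf prompt...
    sudo dpkg-reconfigure -plow unattended-upgrades

    # ...or write the periodic-update config directly:
    printf '%s\n' \
      'APT::Periodic::Update-Package-Lists "1";' \
      'APT::Periodic::Unattended-Upgrade "1";' \
      | sudo tee /etc/apt/apt.conf.d/20auto-upgrades
    ```

    By default this only pulls in security updates, which is exactly the "no new features" guarantee that makes it safe on stable.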

    Arch Linux is rolling release: it has no version numbers, and does not hold back a major package update just because it changes things. This means basically every update might change things, and that can require manual intervention. When the Arch team knows intervention is required, it gets posted on Arch News, and the fix is often just one or two commands. The possibility of required intervention means unattended upgrades are a no-go on Arch, but that's pretty much it.

    If you don't update your system for, say, a year, everything that changed in that time changes all at once. That's often still just a few commands to fix, but could be more depending on what exactly updated. Updating regularly is recommended, because it's easier to tell what changed between updates, and thus easier to track down where a problem originates.

  • Depends entirely on the device. On most desktops, you should be able to. On a lot of laptops, this may leave them in an unbootable state (due to GPU option ROMs).

    Check for your specific hardware before removing factory default secure boot keys.

  • This is heavily sensationalized. UEFI "secure boot" has never been "secure" if you (the end user) trust vendor or Microsoft signatures. Alongside that, this ""backdoor"" (diagnostic/troubleshooting tool) requires physical access, at which point there are plenty of other things you can do with the same result.

    Yes, the impact is theoretically high, but it's the same for all the other vulnerable EFI applications MS and vendors sign willy-nilly. In order to get a properly locked-down secure boot, you need to trust only yourself.

    When you trust Microsoft's secure boot keys, all it takes is one signed EFI application with an exploit to make your machine vulnerable to this type of attack.

    Another important part is persistence, especially for UEFI malware. The only reason it's so easy is because Windows' built-in "factory reset" is so terrible. A fresh install from a USB drive easily avoids that.

  • I'm running postmarketOS. SailfishOS includes significant proprietary components beyond firmware, like the user interface. My Android daily driver already runs a strict FOSS-only ROM and apps (with an exception for firmware), so there's no reason for me to switch to something proprietary.

  • Yes, it's called Sober. It is not official, and may lose functionality at any time due to updates to client-side anti-cheat.

  • Nobody is helping Google do anything; phone OEMs develop their own private spin on Android (for example, Samsung's One UI). They make sure their device works in their OS, nothing else.

  • Depending on your bank, you may be able to use their website.

    The "no apps" thing isn't that big of an issue (at least for me): there's Waydroid, and it's just standard Linux with all the desktop apps right from Flathub. There's also plenty of webapps available.

    There's tons of other issues with Linux mobile, like general usability, battery life, responsiveness (especially when receiving calls or notifications), and hardware support.

    The biggest one I'm running into is sleep states. I can either have 4-ish hours of battery life with notifications, alarms, and calls coming in immediately, or about a day of idle battery life where I have to check my phone before any of that stuff comes through.

    There's also the fact I use my phone for media a lot (Jellyfin, Lemmy), and the experience isn't great on Linux mobile. "Apps" integrate less with each other, and video playback is kind of a mess. (For example, I can't "share" a photo from Lemmy to send it to a friend on Matrix.)

  • While Mint is an Ubuntu-based distro, it tries to un-fuck the worst of Canonical. Other Ubuntu spins with a different desktop environment, like Xubuntu and Kubuntu, don't do this. They end up as just Ubuntu on a different DE, with all the decisions made by Canonical.

    Base Debian might work, but afaik it's not as beginner-friendly as Mint.

  • Movies like Terminator have "AGI", or artificial general intelligence. We had to come up with a new term for it after LLM companies kept claiming they had "AI". Technically speaking, large language models fall under machine learning, but they are limited to predicting language and text, and will never be able to "think" in concepts or adapt in real time to new situations.

    Take chess, for example. We have Stockfish (and other engines) that far outperform any human. Can these chess engines "think"? Can they reason? Adapt to new situations? Clearly not: adding a new piece with different rules would require Stockfish to re-train from scratch, while humans can take their existing knowledge and adapt it to the new situation. Also look at LLMs attempting to play chess. They can "predict the next token" as they were designed to, but nothing more. They have been trained on enough chess notation that the output is likely valid notation, but they have no concept of what chess even is, so they spit out nearly random moves, often without following the rules.

    LLMs are effectively the same concept as chess engines. We just put googly eyes on the software, and now tons of people are worried about AI taking over the world. Current LLMs and generative AI do pose risks: overwhelming amounts of slop and misinformation, which could affect human cultural development, and humans deciding to give an LLM external influence over anything, which could have major impact. But it's nowhere near Terminator-style AGI. For that to happen, humans would have to figure out a new way of thinking about machine learning, and there would have to be several orders of magnitude more computing resources available.

    Since the classification for "AI" will probably include "AGI", there will (hopefully) be legal barriers in place by the time anyone develops actual AGI. The computing resources problem is also gradual: in the real world, an AGI does not simply "transfer itself onto a smartphone" (or an airplane, a car, you name it). It will exist in a massive datacenter, and can have its power shut off. If an AGI does get created and causes a massive incident, it will likely be during this time, which would force whatever real-world entity created it to realize there should be safeguards.

    So to answer your question: No, the movies did not "get it right". They are exaggerated fantasies of what someone thinks could happen by changing some rules of our current reality. Artwork like that can pose some interesting questions, but when it tries to "predict the future", it often gets things wrong that change the answer to any question asked about the future it predicts.

  • Firefox is able to do this for basic PDF annotations. It's not very extensive, but it's very simple to use (and you probably already have it installed).

  • Corporate social media requires making a profit to keep running. No matter how good it looks at the start, the main goal of a corporate social media is never to provide the best possible service to end users. The things you get to see and how you interact are not driven by interests and real friends, but by what gets the platform the most profit.

    Obligatory "AI bad". You should post what you spent effort writing, instead of letting a large language model subtly change its meaning.

  • It is only a partial upgrade if you update your databases without upgrading the rest of your system. If you try pacman -S firefox and get a 404, you have to both update your pacman databases and upgrade your packages. You will only get a 404 if you cleaned your package cache and your package is out of date; usually, -S on an already installed package will reinstall it from cache, which does not cause a partial upgrade.

    If you run pacman -Sy, everything you install is now considered a partial upgrade, and will break if you don't know exactly what you're doing. In order to avoid a partial upgrade, you should never update databases (-Sy) without upgrading packages (-Su). This is usually combined in pacman -Syu.
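    As a quick do/don't sketch of the above (standard pacman usage, nothing distro-specific assumed):

    ```shell
    # DON'T: refreshing databases alone means every later -S
    # installs against newer databases than your system -> partial upgrade
    # sudo pacman -Sy

    # DO: always refresh databases and upgrade packages together
    sudo pacman -Syu

    # DO: installing something new? Upgrade in the same transaction
    sudo pacman -Syu firefox
    ```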

  • and had to delete, update, and then rebuild half my system just to update the OS because the libraries were out of sync.

    This does not just happen with proper use of pacman. The most common situation where it does happen is called a "partial upgrade", which is avoidable by simply never running pacman -Sy. (The one exception is archlinux-keyring, though that requires you to run pacman -Syu afterwards.)

    Arch is definitely intended for a certain audience. If you don't intend to configure your system at the level Arch allows, a different distro might be a better option. That's not a hard requirement though: you can install KDE, update once a month, and almost never have to worry about system maintenance (besides what's posted on the Arch Linux news, once or twice a year, usually a single command).

  • If you want to learn, go for it! Although if you're running anything important, be sure you've got backups, and can restore your system if needed. I wouldn't personally worry about the future of NixOS. If the project "goes the wrong way", it's FOSS, someone will fork it.

    I've considered Proxmox, but immediately dismissed it (after light testing) due to the lack of control over the host OS. It's just Debian with a bunch of convenience scripts and config for an easy libvirt experience. That's amazing for a "click install and have it work" solution, but can be annoying when doing something not supported by the project, as you have to work around Proxmox tooling.

    After that, I checked my options again, keeping in mind that the only things the host OS needs are KVM/libvirt and a relatively modern kernel. Since it's not intended to run any actual software besides libvirt, stability matters more than quick releases. I ended up going with Alpine Linux, as it's extremely lightweight (no systemd, intended for IoT), and has both stable and rolling release channels.

    Using libvirt directly does take significantly more setup. Proxmox lets you get going immediately after installation, while setting up libvirt yourself requires effort. I personally use "Virtual Machine Manager" as a GUI to manage my VMs, though I frequently use the included virsh CLI too.
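    For a taste of the virsh side, here are the day-to-day commands I mean (the VM name "debian-vm" is just a placeholder):

    ```shell
    virsh list --all          # show all defined VMs and their state
    virsh start debian-vm     # boot a VM
    virsh shutdown debian-vm  # graceful ACPI shutdown
    virsh dominfo debian-vm   # CPU/memory overview for the VM
    virsh edit debian-vm      # edit the VM's XML definition in $EDITOR
    ```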

  • Not bluetooth though. That's now a menu of your recently paired and currently connected devices.

    Not to mention the Android 12+ increase in spacing and button size. Without that, you could easily fit 2x the buttons in the same area.

  • Is there anything stopping viruses from doing virus things?

    Usually that's called sandboxing. AUR packages do not have any; if you install random AUR packages without reading them, you run the risk of installing malware. Using Flatpaks from Flathub, and keeping their permissions in check with a tool like Flatseal, can help guard against this.
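    You can also do the permission-tightening from the CLI with flatpak itself; Flatseal is essentially a GUI over the same override mechanism (the app ID here is just an example):

    ```shell
    # See what the app is allowed to do out of the box
    flatpak info --show-permissions org.mozilla.firefox

    # Revoke home-directory access for your user's installs
    flatpak override --user --nofilesystem=home org.mozilla.firefox

    # Review the overrides you've applied
    flatpak override --user --show org.mozilla.firefox
    ```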

    The main difference is that even though the AUR is completely user-submitted content, it's a centralized repository, unlike random websites. Malware on the AUR is significantly less common, though not impossible. Sticking to packages with a better reputation will avoid some malware, simply because other people have looked at the same package.


    There is no good FOSS Linux antivirus (that also targets Linux malware). ClamAV is the closest, though it won't help much.

  • After GRUB unlocks /boot and boots into Linux proper, is there any way to access /boot without unlocking again?

    No. The "unlocking" of an encrypted partition is nothing more than setting up decryption. GRUB performs this for itself, loads the files it needs, and then runs the kernel. Since GRUB is not Linux, the decryption process is implemented differently, and there is no way to "hand over" the "unlocked" partition.

    Are the keys discarded when initramfs hands off to the main Linux system?

    As the "fs" in initramfs suggests, it is a separate filesystem, loaded into RAM while initializing the system. It might contain key files, which the kernel can use to decrypt partitions during boot. After booting (pivoting root), the key files are unloaded along with the rest of the initramfs (afaik; I can't find a direct source on this right now). (Simplified explanation.) The actual keys are actively used by the kernel for decryption and are not unloaded or "discarded"; those are kept in memory.

    If GRUB supports encrypted /boot, was there a 'correct' way to set it up?

    Besides where you source your rootfs key from (in your case a file in /boot), the process you described is effectively how encrypted /boot setups work with GRUB.

    Encryption is only as strong as the weakest link in the chain. If you want to encrypt your drive solely so a stolen laptop doesn't leak any data, the setup you have is perfectly acceptable (though for that, encrypted /boot is not necessary). For other threat models, having your rootfs key (presumably LUKS2) inside your encrypted /boot could significantly decrease security, as GRUB (afaik) only supports LUKS1.

    Or am I left with mounting /boot manually for kernel updates if I want to avoid steps 3 and 4?

    Yes, although you could create a hook for your package manager to mount /boot on kernel or initramfs regeneration. Generally, this is less reliable than automounting on startup, as automounting ensures any change to /boot always lands on the boot partition, and never accidentally in a directory on your rootfs, even outside the package manager.
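    On Arch that hook would be a pacman hook. A sketch of what it could look like (the file path follows pacman conventions, but the Target list is illustrative; adjust it to the kernel packages you actually have installed):

    ```shell
    # Install a hook that mounts /boot before any kernel transaction
    sudo tee /etc/pacman.d/hooks/mount-boot.hook >/dev/null <<'EOF'
    [Trigger]
    Operation = Install
    Operation = Upgrade
    Type = Package
    Target = linux
    Target = linux-lts

    [Action]
    Description = Mounting /boot before kernel update
    When = PreTransaction
    Exec = /usr/bin/mount /boot
    AbortOnFail
    EOF
    ```

    AbortOnFail makes the transaction stop if the mount fails, so the kernel never gets written to the wrong place.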


    If you require it, there are "more secure" ways of booting than GRUB with encrypted /boot, like UKIs with secure boot (custom keys). If you only want to ensure a stolen laptop doesn't leak data, encrypted /boot is a hassle not worth setting up (besides the learning process itself).

  • The main oversimplification is that where browsers "just visit websites", SSH can be really powerful. You can send and receive files with scp, or even forward ports with the right flags on ssh. If you stick to ssh user@host without extra flags, the only thing you're telling SSH to do is set up a text connection where your keyboard input gets sent and some text is received (usually command output, like from a shell).

    As long as you understand what you're asking SSH to do, there's little risk in connecting to a random server. If you scp a private document from your computer to another server, you've willingly sent it. If you ssh -R to forward a port, you've initiated that. The server cannot simply tell your client to do whatever it wants; you have to initiate these things yourself.
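    Concretely, each capability is its own explicit command or flag (user@host and the paths/ports are placeholders):

    ```shell
    # Plain interactive session: keyboard in, text out, nothing else
    ssh user@host

    # Explicitly opting in to more: copy a file TO the server
    scp ./notes.txt user@host:/tmp/

    # Remote forward: expose your local port 8080 as 9090 on the server
    ssh -R 9090:localhost:8080 user@host

    # Local forward: reach a service on the server from your machine
    ssh -L 5432:localhost:5432 user@host
    ```

    None of these happen unless you type the flag yourself, which is the whole point.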