Posts: 8 · Comments: 212 · Joined: 2 mo. ago

  • Just a small number of base images (ubuntu:, alpine:, debian:) are routinely synced, and anything else is built in CI from Containerfiles. Those are backed up. So as long as the backups are intact, I can recover from loss of the image store even without internet.

    I also have two-tier container image storage anyway, which gives redundancy for the built images, but that's more of a side effect of workarounds. Anyway, the "source of truth" docker-registry which is pushed to is only exposed internally: to the one host that needs to do authenticated pushes, and to the second layer of pull-through caches which the internal servers actually pull from. So backups aside, images that are in active use already have at least three copies (push registry, pull registry, and whoever's running it). The mirrored public images are a separate chain altogether.

    This has been running for a while, so it's all hand-wired from component services. A dedicated Forgejo deployment looks like it could cover a large part of the above in one package today. Plus it conveniently syncs external git dependencies.
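
    The second tier above can be sketched in the config format of the CNCF Distribution registry. This is a hedged sketch, not my actual config; the upstream host name and paths are placeholders:

    ```yaml
    # config.yml for a second-tier pull-through cache; host names/paths are placeholders
    version: 0.1
    proxy:
      # upstream "source of truth" registry that this cache pulls from
      remoteurl: https://push-registry.internal:5000
    storage:
      filesystem:
        rootdirectory: /var/lib/registry
    http:
      addr: :5000
    ```

    The proxy section is what turns a plain registry into a read-through mirror: clients pull from it, and it fetches and caches anything it doesn't already hold from the upstream.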

  • If not for political reasons, then why limit the first version to Google/GitHub rather than starting with generic OIDC (which should cover those two anyway)?

    We also took your feedback seriously and we are now implementing proper sign-in options like: Google GitHub (and more coming later)

  • Sounds like you have a stable life and stable infra needs, and are either very lucky or really good with backups and keeping secondaries around. Good on you.

  • The advantage of using something like Terraform is repeatability, reliability across environments, and rollbacks.

    Very valuable things for a stress-free life, especially if this is for more than just entertainment and gimmicks.

    I'd rather stare at the terminal screen for many hours of my choosing than suddenly have to do it at a bad time for one.. 2... 3... (oh god damn, the networking was relying on having changed that weird undocumented parameter I forgot about years ago, wasn't it) hours. Oh, and a 0-day just dropped for that service you're running on the net. That you built from source (or worse, got from an upstream that is now MIA). Better upgrade fast and reboot for that new kern.. She won't boot again. The boot drive really had to crap out right now, didn't it? Do we install everything from scratch, start Frankensteining, or just bring out the scotch at this point?

    I've also been at this for a while. I never regretted putting anything into infra-as-code or config management; plenty of times I wished I had. But yeah, complexity can be insidious. Going for high availability and a container-cluster service mesh across the board was probably a mistake, on the other hand...

  • Hyprland and Niri aren't even DEs. That's up to the user to sort out, if they want one. So yeah, not the best first picks for a beginner who just wants their damn desktop experience now, please.

  • NFS works great for media files and the like, but be careful and know what you are doing before you put database storage on it.
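
    For the media-share case, a hedged /etc/fstab sketch (server name and paths are made up). The hard option matters: a soft mount can return I/O errors after retries time out, which media players shrug off but databases cannot:

    ```
    # NFS share for media; host name and paths are placeholders
    nas.lan:/export/media  /mnt/media  nfs  defaults,hard,noatime,nofail  0  0
    ```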

  • OotL: What's the state of the drama with Mr. Mullenweg and WPEngine regarding this plugin? Wasn't there a hostile takeover and change of hands during the last year? I tuned out at some point. Is this hacked plugin maintained by Matt's folks, WPEngine folks, or someone actually unrelated?

  • Chimera Linux is very interesting. Has anyone here tried running it?

  • My guess is some firmware or modules just make it that big, and if you want room for snapshots you need to resize (or uninstall some variants if not needed). The OS installer might have too small a default size for a setup like this.

    300 MB-ish per kernel is totally normal, and you have three variants installed.
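
    To see what's actually eating the space, you can sum the images. A toy demo against a throwaway directory standing in for /boot (names and sizes are made up):

    ```shell
    # Toy demo: a throwaway directory standing in for /boot (names/sizes are made up).
    # On a real system, just run: du -ch /boot/* | tail -n1
    mkdir -p demo-boot
    head -c 14000000 /dev/zero > demo-boot/vmlinuz-6.1.0-demo     # kernel image
    head -c 80000000 /dev/zero > demo-boot/initrd.img-6.1.0-demo  # initramfs, usually the big one
    du -ch demo-boot/* | tail -n1
    ```

    The initramfs is typically the largest piece, since it bundles firmware and modules per kernel variant.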

  • One way to go about the network security aspect:

    Make a separate LAN (optionally a VLAN) for your internally hosted services, separate from the one you use with your main computer to access the internet. At the start this LAN will probably only have two machines (three if you bring the NAS into the picture separately from Jellyfin):

    • The server running Jellyfin. Not connected to your main network or internet.
    • A "bastion host" which has at least two network interfaces: one connected outwards and one inwards. This is not a router (no IP forwarding) and should be separate from your main router. This is the bridge. Here you can run an (optional) VPN gateway, an SSH server, and also an HTTP reverse proxy to expose Jellyfin to the outside world. If you have things on the inside that need to reach out (like package updates), you can run an HTTP forward proxy for that.

    When it's just two machines you can connect them directly with a LAN cable; when you have more, you add a cheap network switch.

    If you don't have enough hardware to split machines up like this, you can do similar things with VMs on one box, but that's a lot of extra complexity for a beginner, and you probably have enough new things to familiarize yourself with as it is. Separating physically instead of virtually is a lot simpler to understand and also more secure.

    I recommend firewalld for system firewall.
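
    As a concrete sketch of the reverse-proxy piece, assuming nginx on the bastion host and Jellyfin's default port 8096 (host name, internal address, and cert paths are all placeholders):

    ```nginx
    # /etc/nginx/conf.d/jellyfin.conf -- sketch; names, addresses, paths are placeholders
    server {
        listen 443 ssl;
        server_name jellyfin.example.com;

        ssl_certificate     /etc/ssl/certs/jellyfin.example.com.pem;
        ssl_certificate_key /etc/ssl/private/jellyfin.example.com.key;

        location / {
            proxy_pass http://10.0.10.2:8096;   # the Jellyfin box on the internal LAN
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # Jellyfin uses WebSockets for live updates
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
    ```

    Only the bastion's outward interface ever accepts connections from the main network; the proxy_pass target sits on the internal LAN.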

  • Here, you dropped this: /*

    BTW ncdu -x /boot

  • Bluetooth hacking is quite rare.

    I wouldn't be so sure.

    Watch this.

  • Partitioning in the Debian installer being half-broken is something nobody talks about, but IME it's still a thing.

    What I do is step through the installer to the point where you're at, hit Ctrl+F* to get a shell, set everything up manually using fdisk/mdadm/lvm/cryptsetup/mkfs, and then go back, rescan, and just assign the mounts and filesystems.

    I think I still have a half-written guide for exactly this in my drafts somewhere, actually. If you get stuck you can DM me and maybe I can dig something up.
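
    As a rough sketch of that manual shell step: device names, sizes, and the exact stack (here LUKS + LVM, no RAID) are placeholders; adapt to your layout:

    ```
    # In the installer shell; /dev/sda and all sizes are placeholders.
    fdisk /dev/sda                        # partition interactively: ESP, /boot, data
    cryptsetup luksFormat /dev/sda3       # encrypt the data partition
    cryptsetup open /dev/sda3 cryptroot
    pvcreate /dev/mapper/cryptroot        # LVM on top of LUKS
    vgcreate vg0 /dev/mapper/cryptroot
    lvcreate -L 30G -n root vg0
    mkfs.ext4 /dev/vg0/root
    # then back in the installer: rescan and assign the mount points
    ```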

  • oversimplifying a bit:

    TLS (HTTPS) provides transport security: it ensures that whatever is served by the mirror really comes from the host behind that mirror's domain name. The TLS handshake is performed live, so the private key must be "hot", loaded in memory on the mirror (or its reverse proxy).

    PGP signatures provide integrity and authentication: the package files themselves have been signed by the repo signing key. This signing can be done once per package, and the private key can stay offline.

    HTTPS is not a replacement for PGP signatures; they are for different things. HTTPS provides a bit better privacy (and now that I think of it, some package manager could theoretically be vulnerable to a downgrade MITM, where an attacker substitutes a package with a legit, signed, but older vulnerable version, or to other bugs).

    PGP on the other hand is such a mess that even some cryptographers don’t like it.

    I've seen plenty of critiques of PGP for email encryption, but that's not relevant here.

    sq (Sequoia) is a great alternative implementation you can use instead of GnuPG.
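
    A toy shell demo of the integrity half (file names are made up): a checksum proves the bits are intact, and apt-style repos get authenticity on top by PGP-signing the file that carries the checksums:

    ```shell
    # Toy integrity check; file names are made up.
    printf 'pretend package contents\n' > pkg.deb
    sha256sum pkg.deb > SHA256SUMS       # repo publishes checksums of each package
    sha256sum -c SHA256SUMS              # client verifies what it downloaded
    # In a real apt repo, the checksum-carrying Release file is what gets PGP-signed,
    # so the offline signing key vouches for every package transitively.
    ```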

  • X.Org Server May Create A New Selective Git Branch With Hopes Of A New Release This Year

  • I think it depends a lot on what you are building.

    For bigger projects and apps leveraging the mobile platform I'm 100% with you.

    These kinds of frameworks can still be a good fit for a quick MVP demo, as a stepping stone for porting an existing web app, or if all you really want is a glorified web view (or are PWAs enough for that last one these days?)

    Specifically, RN is in terrible shape and IMO something to avoid, though.

  • Everything in there is relevant and applies to flatpaks too. Being aware of the risks is important when using alternative distribution methods. With power, responsibility.

  • Tricking users into using Snap without realizing it, making them unknowingly vulnerable to exploits like this, would be really really bad and unethical on Canonical’s part.

    That is not what is happening at all.

    Just so nobody is confused or gets afraid of their install: getting the Firefox snap installed via Ubuntu's apt package does not make users vulnerable to what is talked about here, and it is just as safe as the apt package version. For Firefox, snaps might even be safer, since you will probably get security patches earlier than with apt upgrades, and you get some sandboxing. In both cases you are pulling signed binaries from Canonical's servers.

    The post is about third-party fake snaps. If you run a snap install command from a random web site or an LLM without checking it, or make a typo, then you are at risk. If Ubuntu didn't have snaps, this would be malicious flatpaks. If Ubuntu didn't have flatpaks, it would be malicious PPAs. And so on. Whatever hosted resource gets widely popular and lets users blindly install and run software from third parties will be abused for malware, phishing, typosquatting, and so on. This is not the fault of the host. You can have access to every app out there you may ever want, or you can safely install all your apps from one trusted source. But it's an illusion that you can have both.

    People have opinions about whether snaps are a good idea or not, and that's fine, but there shouldn't be FUD. If you are using Canonical's official snaps and are happy with them, you don't have to switch.

  • Cybersecurity @sh.itjust.works

    Malware peddlers are now hijacking Snap publisher domains

    blog.popey.com/2026/01/malware-purveyors-taking-over-published-snap-email-domains/
  • Linux @lemmy.world

    Rubenerd: Xfce is great

    rubenerd.com/xfce-is-great/
  • Linux @sh.itjust.works

    Keeping persistent history in bash

    eli.thegreenplace.net/2013/06/11/keeping-persistent-history-in-bash
  • Linux @programming.dev

    Keeping persistent history in bash

    eli.thegreenplace.net/2013/06/11/keeping-persistent-history-in-bash
  • Linux @sh.itjust.works

    Dealing with faulty RAM modules in 2026

    blog.kumio.org/posts/2026/01/memtest86plus.html
  • Linux @programming.dev

    Dealing with faulty RAM modules in 2026

    blog.kumio.org/posts/2026/01/memtest86plus.html
  • Selfhosted @lemmy.world

    Dealing with faulty RAM modules in 2026

    blog.kumio.org/posts/2026/01/memtest86plus.html
  • Linux @lemmy.ml

    Dealing with faulty RAM modules in 2026

    blog.kumio.org/posts/2026/01/memtest86plus.html