I routinely take 1-4TB images of SSDs before making major changes to a disk. Run fstrim on all partitions first and pipe the dd output through zstd before writing it out, and the image shrinks to roughly the space actually in use, sometimes a bit less. My largest ever backup was probably ~20TB cloned from one array to another over 40/56GbE; the deltas after that were tiny by comparison.
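If you want to replicate that, it's basically this (device names and paths are just examples, adjust for your own disks):

    # discard unused blocks so they read back as zeros and compress to almost nothing
    sudo fstrim -av
    # image the whole disk, compressing on the fly with all cores
    sudo dd if=/dev/nvme0n1 bs=4M status=progress | zstd -T0 -o nvme0n1.img.zst
    # restore later by reversing the pipe
    zstdcat nvme0n1.img.zst | sudo dd of=/dev/nvme0n1 bs=4M status=progress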
Cold War 1 never ended; the West got a kick out of telling everyone that they won and took their eye off the ball while Russia continued playing.
There are more Russian agents in the US now than ever before and I guarantee that Europe is getting the same treatment. Small amounts of spending can yield outsized results for Russia; they learned this 50+ years ago and are still acting on it.
Because you're relying on compatibility between older Debian software (systemd, etc) and newer versions installed in the chroot. Things get weird quickly.
Consider a nested privileged container instead (LXC or similar) and cross your fingers that Debian systemd and Arch systemd play nice.
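Roughly what that looks like with plain LXC, assuming the download template is available on your Debian install (container name is just an example; run as root to get a privileged container):

    # pull a current Arch rootfs with the download template
    sudo lxc-create -n archbox -t download -- --dist archlinux --release current --arch amd64
    # start it and get a shell inside
    sudo lxc-start -n archbox
    sudo lxc-attach -n archbox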
If the above fails, just make a VM and pass a slice of the GPU through with GVT-g (or, if GVT-g isn't an option, pass through the entire GPU).
If all of that fails install Arch to a USB attached SSD or something.
If you're using an Intel chip look into GVT-g and consider running Arch from a VM, that'll be the closest thing to native.
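Very rough sketch of the host-side GVT-g setup, assuming a supported Intel iGPU and kernel; the PCI address and profile names vary by machine, so check what your system actually exposes:

    # add i915.enable_gvt=1 intel_iommu=on to the kernel cmdline, reboot, then:
    sudo modprobe kvmgt
    # see which virtual GPU profiles the iGPU offers
    ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/
    # create a virtual GPU instance to hand to the VM (profile name is an example)
    echo "$(uuidgen)" | sudo tee /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create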
The unfortunate thing about running an Arch container from a Debian host is that you're relying on an older kernel and an older systemd on the host side, and I've found that often causes compatibility problems inside the Arch container. If you are very, very lucky Arch will just work inside the container, but IME that's fairly rare, as systemd often has breaking changes across releases (and Arch tends to be at least several releases ahead of Debian).
Congress should be enrolled in the base tier Medicare program and barred from buying any private insurance on top. Health insurance reform would follow within 6mo.
99% of breakages on a rolling-release distro are solved by downgrading the broken package until a fix lands, so it's not much worse than Windows. You want to be chasing recent releases of pretty much everything if you want the best performance for gaming. You could run Debian, but you'll be waiting 18 months for any new performance improvements to land.
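Downgrading is usually just reinstalling the previous build out of the package cache (the filename here is only an example; look in /var/cache/pacman/pkg/ to see what you have):

    # reinstall the previous build of a broken package from the local cache
    sudo pacman -U /var/cache/pacman/pkg/mesa-24.1.1-1-x86_64.pkg.tar.zst
    # optionally hold it back until a fix lands by adding it to IgnorePkg in /etc/pacman.conf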
Install the ProtonPlus flatpak if you need custom Proton versions for some games. I usually just add the latest Proton-GE and don't have to bother with anything else.
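If I'm remembering the Flathub ID right it's com.vysp3r.ProtonPlus, so installing it is a one-liner (verify with flatpak search first):

    # find and install ProtonPlus from Flathub
    flatpak search protonplus
    flatpak install flathub com.vysp3r.ProtonPlus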
Fedora, Arch, EndeavourOS, Nobara and Bazzite are all pretty good bases for a gaming setup. They all have their pain points so I'd boot a couple and see how you like them before making a decision.
Many US food companies won't ship delicate foods during the hot parts of the year. I have a bunch of my staple groceries shipped to me (go food deserts!) and I can't get chocolate or other heat sensitive products from them between late May and mid September.
Buy external drives. Don't run them in RAID; use one to store backups and plug it in once or twice a week to copy data to it.
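The weekly copy can be as simple as one rsync command when the drive is plugged in (paths here are examples, point them at your own data and mount point):

    # mirror the home directory to the external drive
    # drop --delete if you want the backup to keep files you've removed locally
    rsync -aHAX --delete --info=progress2 /home/ /mnt/backup/home/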
The secret to RAID is that it doesn't buy you data protection, it buys you uptime to access data while a device in the array is failed. This is most valuable to businesses that can't afford the downtime that recovery from a backup incurs. The most paranoid RAID will still fail sooner or later, due to hardware or software failure, and as a home user with a limited budget you're far better off having one offline backup that you can use to recover data from once that happens.
Back up only data you can't afford to lose (e.g. don't back up downloaded data that can be replaced easily, like a game or movie collection); your backups will be much more manageably sized and you won't need to spend as much on your backup drive. If a backup disk is too much for your budget you can always exploit cloud backup plans: Backblaze's PC backup has no limit on the size of your backups and only charges something like ~$60/yr.
Edit: It's also worth thinking about what kind of data you're storing and splitting that data across multiple devices if possible. If you're storing bulk data where performance isn't critical, like backups from other machines or a movie collection, you can pay a much lower price per TB by buying a hard drive instead of flash. Even if some of your data requires fast flash, you can still use a cheaper HDD for bulk data and buy a smaller flash drive for performance-sensitive tasks. When I build a NAS I split my data into two pools: one bulk pool of HDDs and one much smaller fast pool of flash storage. Put performance-critical data on flash and bulk storage on HDDs; this lets you spend less on bulk capacity while still getting fast storage for the tasks that need it. A 512GB or 1TB SSD alongside a 4TB, 6TB or 8TB HDD is significantly cheaper than a 4TB or 8TB SSD.
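Purely as an illustration, if you went the ZFS route the two-pool layout might look like this (device names and dataset names are made up; the same idea works with other filesystems or mdadm):

    # bulk pool: big, cheap, mirrored HDDs for backups and media
    sudo zpool create bulk mirror /dev/sda /dev/sdb
    sudo zfs create bulk/media
    # fast pool: a small mirrored pair of SSDs for VMs and other hot data
    sudo zpool create fast mirror /dev/nvme0n1 /dev/nvme1n1
    sudo zfs create fast/vms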
Shop eBay for refurbished storage, it'll be significantly cheaper than spending on brand new drives.
It's pretty good at proving digital chain of custody. You could, for example, handle public records on a blockchain.
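The underlying trick is just a hash chain, which a couple of lines of shell can illustrate (ledger.txt and the record text are made up for the example; a real system would also sign and replicate entries):

    # append a record that embeds the sha256 of the previous line;
    # altering any earlier record breaks every hash that follows it
    prev=$(tail -n 1 ledger.txt 2>/dev/null | sha256sum | cut -d' ' -f1)
    echo "$(date -u +%FT%TZ) prev=$prev transferred custody of record 42" >> ledger.txt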
I've been hoping for a game platform that tokenizes game licenses so that we can sell or gift them to others when we're done with them - basically Steam, but you own your copy of the game and can sell it on. This is incredibly unlikely to happen though; a secondary market for digital licenses would eviscerate profits.