Snapshots are great for solving accidental oopses, and for getting consistent backups that reflect a single point in time. But they aren't backups. If the HDD or SSD dies, snapshots don't help. If the file system gets corrupted, snapshots don't help. If the file data becomes corrupt, snapshots don't help (since they only store a single copy of any version of a file).
So snapshots are no substitute for backups. I run btrfs and I do backups, and I sync those backups to a remote location (look up 3-2-1 backup).
I strongly disagree. I'm going to quote myself from reddit here:
Why would you expect to be able to use an old compiler and new crates? Shouldn't you just pin everything to the old versions? The MSRV-aware resolver (which has been stable for a year now) makes that seamless. I don't see why they expect to be able to eat their cake and have it too.
This comes up again and again from LTS fans, be it for safety-critical work or Debian packages. Yet no one has managed to explain why they can't use a new compiler but can use new crates. Their behaviour lacks consistency.
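Roughly what that looks like in practice (a sketch with a made-up crate name; resolver v3 is the MSRV-aware one, stabilised in Cargo 1.84, so the machine doing the resolution needs a Cargo at least that new, even though the code targets the old toolchain):

```toml
# Hypothetical manifest for a project stuck on an old compiler.
[package]
name = "msrv-demo"      # made-up name for illustration
version = "0.1.0"
edition = "2021"
rust-version = "1.75"   # the old toolchain you are pinned to
resolver = "3"          # opt in to MSRV-aware resolution (Cargo 1.84+)

[dependencies]
# The resolver now prefers the newest release whose own rust-version
# is compatible with 1.75, instead of picking one that fails to build.
serde = "1"
```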
And from a later reply:
Now if they want to fund the maintenance in question, that is an entirely different matter (and would be a net benefit for everyone). But that tends to be quite rare in open source. https://xkcd.com/2347/ very much applies.
I think some fixed-size collections and stuff like that would be super nice in core.
If you don't mind using a crate: take a look at the well-regarded https://lib.rs/crates/heapless (having it in core would be nice, but it might be too niche).
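For a taste of the API, a small sketch using heapless::Vec (push fails instead of allocating when the fixed capacity runs out):

```rust
// A fixed-capacity vector from the heapless crate: no allocator
// needed, the storage lives inline (e.g. on the stack).
use heapless::Vec;

fn main() {
    let mut xs: Vec<u32, 8> = Vec::new(); // capacity 8, a const generic
    for i in 0..5 {
        // push returns Err(item) rather than reallocating when full
        xs.push(i).expect("within capacity");
    }
    assert_eq!(xs.len(), 5);
}
```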
It isn't open source, but DaVinci Resolve is available for Linux, with limited features if you don't pay. It might be overkill for what you do, and I understand it can be finicky to get working (needs Nvidia, poor support for AMD, very limited format support unless you get the paid version, ...).
I don't really do video stuff, but I did play around with it a few years ago, and it seemed very comprehensive.
I don't use Gnome, but it seems they hate to expose settings in general and like to dumb down everything (and that is why I don't use it). The issue here is that you need KDE, Sway, Niri, Xfce, etc. to all implement a setting for this. Middle mouse paste is useful and has been standard on Unix-likes for decades. There is literally no reason to remove it.
Yeah, the Flint 3 seems like a worse overall router when it comes to computational power and chipset. The only things it has going for it are WiFi 7 (instead of 6) and 2.5G Ethernet on all ports. The Flint 3 is also more power hungry, which isn't great given the high energy costs in Europe.
Most people don't benefit from WiFi 7 (WiFi 6 is already good enough for almost everything) and if you want more than 2x 2.5G ports, consider getting a (managed) switch to extend the router with.
No, it was your assertion that the wheel is too fiddly. It seemed quite broad (stating it as a universal truth). It might be for you, but not for most people (yet you wrote it as an unqualified statement).
While that works, it will use more electricity than an all-in-one ARM based router. Depending on prices and renewable/fossil mixture where you live, this may or may not be a concern.
GL.Inet products that use Mediatek chipsets are great since you can usually flash standard OpenWRT on them. I would avoid routers with different chipsets since they are unlikely to get proper support.
(Though I can't say that my MT-6000 is cheap, but it is an extremely capable router. That is top of the line though, they have cheaper stuff.)
Not true, if there is no user-visible setting for it. Changing a hidden gsetting via the command line is essentially removing it, since it will likely bitrot and then be fully removed in a few years.
For many people this is a non-issue. I think this is a case of just accepting that we are different and not forcing our view on everyone else.
Maybe 15 years ago I had a mouse with a tilting scroll wheel (for side scrolling); on that one I did have issues with middle click, for about a month until I got used to clicking straight down.
So maybe it is just a question of practice? Maybe not. But since both options exist there is no need to get upset.
V2 is roughly Nehalem. V3 is approximately Haswell (IIRC it corresponds to some least common denominator of AMD and Intel CPUs from around that time). V4 needs AVX-512 (that is really the only difference in enabled instructions compared to V3).
Both my daily driver computers can do v3, but not v4. (I like retro computing, so I also have far older computers that can't even do 64-bit at all, but I don't run modern software on those for the most part.)
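If you're curious what your own CPU supports, here's a minimal Rust sketch using std's runtime feature detection (it checks a representative subset of each level's feature list, not the complete set, and only compiles on x86-64):

```rust
// Rough runtime check of the x86-64-v3 and v4 feature levels.
use std::arch::is_x86_feature_detected;

fn main() {
    // A representative subset of the v3 feature set (AVX2 era).
    let v3 = is_x86_feature_detected!("avx")
        && is_x86_feature_detected!("avx2")
        && is_x86_feature_detected!("bmi1")
        && is_x86_feature_detected!("bmi2")
        && is_x86_feature_detected!("fma")
        && is_x86_feature_detected!("lzcnt");
    // v4 is v3 plus a specific AVX-512 subset.
    let v4 = v3
        && is_x86_feature_detected!("avx512f")
        && is_x86_feature_detected!("avx512bw")
        && is_x86_feature_detected!("avx512cd")
        && is_x86_feature_detected!("avx512dq")
        && is_x86_feature_detected!("avx512vl");
    println!("x86-64-v3: {v3}, x86-64-v4: {v4}");
}
```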
I think a lot of modern software is bloated. I remember when GUI programs used to fit on a floppy or two. Nowadays we have bloated Electron programs taking hundreds of MB of RAM just to show a simple text editor, because each one drags a whole browser along with it.
I love snappy software, and while I don't think we need to go back to programs fitting on a single floppy and using hundreds of KB of RAM, the pendulum does need to swing back a fair bit. I rewrote some CLI programs in the last few years that I found slow (one of my own, previously written in Python; the other written in C++ but not properly designed for speed). I used Rust, which certainly helped compared to Python, but the real key was thinking carefully about the data structures up front and designing for performance. And lots of profiling and benchmarking as I went along.
The results? The Python program was sped up by 50x, the C++ program by 320x. In both cases it changed these from "irritating delay" to "functionally instant for human perception".
And I also rewrote a program I used to manage Arch Linux configs (written in bash) in Rust. I also added features I wanted so it was never directly comparable (and I don't have numbers), but it made "apply configs to system" take seconds instead of minutes, with several additional features as well. (https://github.com/VorpalBlade/paketkoll/tree/main/crates/konfigkoll)
Oh and want a faster way to check file integrity vs the package manager on your Linux distro? Did that too.
Now what was the point I was making again? Maybe I'm just sensitive to slow software. I disable all animations in GUIs after all; all those milliseconds of waiting add up over the years. Computers are amazingly fast these days, and we shouldn't make them slower than they have to be. So I think far more software should count as performance critical. Anything a human has to wait for should be.
Faster software is more efficient as well, using less electricity and making your phone/laptop battery last longer (since the CPU can go back to sleep sooner). And it saves you money in the cloud. Imagine saving 30-50% on your cloud bill by renting fewer resources. Over the last few years I have seen multiple reports of this happening when companies rewrite in Rust (C++ would also do this, but why would you want to move to C++ these days?). And hyperscalers save millions in electricity by optimising their logging library by just a few percent.
Most modern software on modern CPUs is bottlenecked on memory bandwidth, so it makes sense to spend effort on data representation. Sure, start with some basic profiling to find obvious stupid things (all non-trivial software that hasn't been optimised has stupid things), but once you have exhausted that, you need to look at memory layout.
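As a concrete illustration of what looking at memory layout means (a hypothetical sketch, names invented): scanning one field of an array-of-structs drags every other field through the cache with it, while a struct-of-arrays keeps each field contiguous.

```rust
// Array-of-structs: summing `health` also pulls `pos` and `vel`
// through the cache, wasting memory bandwidth.
#[allow(dead_code)]
struct MonsterAos {
    pos: [f32; 3],
    vel: [f32; 3],
    health: f32,
}

// Struct-of-arrays: each field is contiguous, so a scan over
// `health` touches only the bytes it needs and vectorises easily.
#[allow(dead_code)]
struct MonstersSoa {
    pos: Vec<[f32; 3]>,
    vel: Vec<[f32; 3]>,
    health: Vec<f32>,
}

fn main() {
    let aos: Vec<MonsterAos> = (0..1000)
        .map(|i| MonsterAos { pos: [0.0; 3], vel: [0.0; 3], health: i as f32 })
        .collect();
    let soa = MonstersSoa {
        pos: vec![[0.0; 3]; 1000],
        vel: vec![[0.0; 3]; 1000],
        health: (0..1000).map(|i| i as f32).collect(),
    };
    // The same reduction, with very different cache behaviour at scale:
    let total_aos: f32 = aos.iter().map(|m| m.health).sum();
    let total_soa: f32 = soa.health.iter().sum();
    assert_eq!(total_aos, total_soa);
}
```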
(My dayjob involves hard realtime embedded software. No, I swear that is unrelated to this.)
As far as I know they do a few things (though it is hard to find a comprehensive list), including building packages for newer microarchitectures such as the aforementioned x86-64-v3. The default on x86-64 Linux is still to build programs that work on the original AMD Athlon 64 from the early 2000s. That really doesn't make sense any more, and v3 is a good default that still covers the last several years of CPUs.
There are many interesting added instructions, and for some programs they can make a large difference, but that will vary wildly from program to program. Phoronix has also done some benchmarks of Arch vs Cachy, and since the Phoronix Test Suite mostly uses its own binaries, what that shows is the difference that the kernel, glibc and system tuning alone make. And those results do look promising.
I don't want to spill some meme-worthy Arch elitism here, but I just doubt the Arch derivatives crowd knows what this x86-64-v3 thing is. Truth be told, I barely understand it myself.
I think you just did show a lot of elitism and arrogance there. I expect software developers working on any distro to know about this, but not necessarily the users of said distros. (For me, knowing about low level optimisation is part of my dayjob.)
Also, for Cachy in particular they do seem to have some decent developers. One of their devs is the guy who maintains the legacy nvidia drivers on AUR, which involves a fair bit of kernel programming to adapt to changes in new kernel releases (nvidia themselves no longer do so after the first year of drivers becoming legacy).
On paper they are efficient. In practice, all pointer-based data structures (linked lists, binary trees, etc.) are slow on modern hardware, and this effect matters more than asymptotic complexity for most practical high-performance code.
You are far better off with linear access where possible (e.g. vectors, open-addressing hash maps) or, if you must have a tree, make the fan-out factor as large as possible (e.g. B-trees rather than binary trees).
Now, I don't know if Haskell etc affords you such control, I mainly code in Rust (and C++ in the past).
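A rough illustration (not a rigorous benchmark; exact numbers depend on the allocator and cache sizes, but the shape of the result is robust):

```rust
// Summing the same values through a Vec versus a LinkedList. The Vec
// is contiguous so the prefetcher can stream it; the list chases a
// pointer per node.
use std::collections::LinkedList;
use std::time::Instant;

fn main() {
    let n = 10_000_000u64;
    let vec: Vec<u64> = (0..n).collect();
    let list: LinkedList<u64> = (0..n).collect();

    let t = Instant::now();
    let s1: u64 = vec.iter().sum();
    println!("vec:  {s1} in {:?}", t.elapsed());

    let t = Instant::now();
    let s2: u64 = list.iter().sum();
    println!("list: {s2} in {:?}", t.elapsed());
}
```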
XOR lists are obscure and cursed but cool. They are not useful on modern hardware though, as the CPU can't predict the access patterns. They date from a time when every byte of memory counted and CPUs didn't have pipelines.
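For the curious, here is what the trick looks like (a toy Rust sketch; it plays fast and loose with pointer provenance, so treat it as C-style pseudocode):

```rust
// An XOR list node stores prev ^ next in a single field, halving the
// per-node link overhead at the cost of needing the previous address
// to traverse.
struct Node {
    value: u32,
    link: usize, // addr(prev) XOR addr(next); 0 stands in for null
}

fn traverse(mut prev: usize, mut cur: *const Node) {
    while !cur.is_null() {
        // SAFETY: main wires up three valid boxed nodes below.
        let node = unsafe { &*cur };
        print!("{} ", node.value);
        let next = (node.link ^ prev) as *const Node;
        prev = cur as usize;
        cur = next;
    }
    println!();
}

fn main() {
    // Boxed so the node addresses stay stable while we wire them up.
    let mut a = Box::new(Node { value: 1, link: 0 });
    let mut b = Box::new(Node { value: 2, link: 0 });
    let mut c = Box::new(Node { value: 3, link: 0 });
    let (pa, pb, pc) = (
        &*a as *const Node as usize,
        &*b as *const Node as usize,
        &*c as *const Node as usize,
    );
    a.link = pb;      // prev is null (0), so link is just next
    b.link = pa ^ pc; // one field encodes both neighbours
    c.link = pb;      // next is null (0), so link is just prev
    traverse(0, &*a); // prints: 1 2 3
}
```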
(In general, all linked lists or trees are terrible for performance on modern CPUs. Prefer vectors or B-trees with large fan-out factors. There are still some niche use cases for linked lists, in kernels for example, but unless you know exactly what you are doing you shouldn't use linked data structures.)
I would go back to your CAD model and tweak it for better printability. If it was a model you downloaded without a source CAD model, I would just remodel it myself to make it more printable.