
It's a double win for them: more money for their war machine, and it disadvantages the undesirables. Because they are fucking fascist Nazis.
I don’t understand why they are still in power.



Depends on which classification system you use. Botanically it is a fruit, but culinarily it is a vegetable.
I think it forgot the wheels. Those are just wheel-shaped stands, no tires at all. The first image looks like it might just have shit clearance for the road, but the later ones don't have any tires at all…


I would want a smaller device and a larger screen, aka reduce the bezels. The Deck has comically large bezels.
Also, the size of the device does not have much to do with its weight. I would rather have a larger, comfortable device than a tiny one that gives me cramps to use. Lighter weight is always a plus, but not always worth the tradeoffs. I would not want a lightweight device that only lasts 30 minutes of usage.


Yeah… Ubuntu packages are never up to date on release day. They freeze them months before release so they can iron out any bugs with the versions they picked. You don't pick Ubuntu or any point-release distro to get up-to-date packages.


If you can drop a .env file on a server, you can drop a well-formed .ini file instead. I don't see any reason to ever parse an env file as an ini file.


The ownership model is Rust’s core innovation. Every heap allocation has exactly one owner — the variable binding that “holds” it. Ownership can be moved to another binding, at which point the original is invalidated. It can never be silently copied (unless the type implements Copy).
While each of these claims is not wrong in isolation, together they are misleading. If we are talking about data stored on the heap, that last bit is backwards: types that own a heap allocation (like String or Vec) cannot be made Copy. Only simple types can be Copy - ones that don't own any indirect data and so can be stored entirely on the stack and simply memcpyed to get a copy.
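To make the distinction concrete, here is a minimal sketch (the types are all std; the behavior is standard Rust):

```rust
fn main() {
    // String owns a heap allocation, so assignment moves it:
    // the original binding is invalidated at compile time.
    let a = String::from("hello");
    let b = a; // ownership moves to `b`
    // println!("{a}"); // compile error: borrow of moved value `a`

    // u64 implements Copy: assignment duplicates the bits, both stay usable.
    let x: u64 = 42;
    let y = x;
    println!("{b} {x} {y}"); // prints "hello 42 42"
}

// And this is the rule in action - deriving Copy on a heap-owning type
// is rejected by the compiler:
// #[derive(Copy, Clone)]
// struct Broken { s: String } // error: field `s` does not implement `Copy`
```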


The problem is - users pay for Windows only once
That is not true in the slightest. They pay once per computer, and people go through multiple computers in their lifetime. So it is not at all tied to birthrate.
Very few people buy licenses directly. Most people buy it pre-installed with an OEM license that is tied to that computer.
LFS is not a distro. It is an instruction manual and teaching aid. Don't use it as a base for your main OS. And IMO Gentoo does not really teach you more than Arch does. It gives a bit of flexibility that not many care about (how things are compiled) at a very big cost (having to compile everything yourself). I would not use either unless compiling things is your hobby.
Sure, try them in a VM if you really want to. But I would not really consider that moving on from your current distro, nor do you really need to do that.


I mean, all entertainment is really there to kill time. That's really the main point of having fun. I would say things are worse than that: they (at least triple-A games) are designed to extract as much cash as they can and get you addicted.


GTK does not mean libadwaita. That is an addition on top of GTK. Many GTK programs and environments don't use libadwaita; it is mostly only the GNOME stuff that does.


From anyone that has the pkgstats package installed:
https://pkgstats.archlinux.de/fun/Desktop Environments/current


Debian has two main versions: stable, which is released every two years and supported for a long time, and unstable, which is basically a rolling release that constantly changes, adopting things to test them before the next stable release. There is also testing, but that is just a place to put things before promoting them to stable, so it has the same release cadence as stable.
Two years of fixed versions on a desktop is a very long time to be stuck on some packages - especially ones you use regularly. Most people want to use things that are newer than that, either new applications or new features for apps they have used in the past two years.
Ubuntu also has two release versions (that's not really the right term though). They have an LTS version which is released every two years, much like Debian. But they also have interim releases every 6 months. This gives users access to much newer versions of software that has been released more recently. Note that the LTS versions are just the same as the interim versions; it's just that LTS versions are supported for a longer period of time, so you can use them for longer.
For each Ubuntu release they basically take a snapshot of Debian unstable, and from that point on they maintain their own security patches for the versions they picked. They can share some of this work with Debian's patches and backports, but since Debian stable and Ubuntu are based off different versions, Ubuntu still needs to do a lot of work figuring out which patches apply to their versions as well as ensuring things work with the versions they picked. Both distros do a lot of work in this regard and work with each other where it makes sense.
Ubuntu also adds a few things on top of Debian: some extra packages, a few things that make the distro a bit more user friendly, etc.
Any other distro that wants to base off one of these has to make that same choice.
For a lot of distro maintainers, basing off Ubuntu gives them a newer set of packages to work with while doing a lot less of that work themselves. They can then focus on the value adds they want to put on top of the distro rather than redoing the work Ubuntu already does or sticking with much older versions.
The value-add work that needs to be done on either base is, I think, not hugely different. You can take the core packages you want and change a few settings, or remake a few meta packages you don't want from Ubuntu. This is really all stuff you will be doing whichever one you pick. It is a lot more work keeping up with security patches for everything.
The query language is deliberately less expressive than jq’s. jsongrep is a search tool, not a transformation tool-- it finds values but doesn’t compute new ones. There are no filters, no arithmetic, no string interpolation.
This does make it distinctly less useful. I find quite a lot of the time I need filtering or transformations when doing complex stuff, which, when dealing with larger documents, becomes almost always. So its main benefit, speed, does not really matter if I cannot use it for a task. And if the only tasks I can use it on are simple ones, then I don't need its speed and it is not worth the effort to learn or use.
TBH I don't use jq much these days either. I switched to nushell a while back and it has native support for everything jq (and so this tool) can do. But I find it far more intuitive to use. Every time I touch jq for anything more than just lookups I need to reread the docs to remember the syntax. In a much shorter time with nushell I don't need to do that anywhere near as often. Plus it works with yaml, toml and most other formats.
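For anyone curious, here is roughly what that difference looks like. The pkgs.json file and its contents are made up for the demo, and it assumes jq is installed:

```shell
# Made-up test data for the demo.
echo '[{"name":"jq","stars":25000},{"name":"nushell","stars":31000}]' > pkgs.json

# jq: a filter mini-language I always have to look up again.
jq -r '.[] | select(.stars > 30000) | .name' pkgs.json

# nushell: the same query reads like a normal shell pipeline (run inside nu):
# open pkgs.json | where stars > 30000 | get name
```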
I am not sure that is fully true, or at least not fully explained. The Steam Deck has a full KDE environment installed and uses it when in desktop mode. But Steam is not running in Big Picture mode in front of it.
KDE is not running at all in the Steam Deck's game mode. In that mode it uses a compositor written by Valve called gamescope. Switching between the two is effectively logging out and back in again to switch the compositor.
Also, it now has a way to run the desktop as a nested session in game mode, but that is running kwin inside gamescope.
You cannot eliminate X11/Wayland overhead. You need a display server of some sort. I suspect most games/Proton will require X11, or at least XWayland on top of a Wayland compositor. You probably do want a window manager of some sort as well, or you lose out on a lot of controls like window placement and sizing. Some games might do weird things if they don't launch directly in fullscreen mode. And Steam itself would probably want to run in Big Picture mode to make it go fullscreen. If you want something designed for gaming, you might try gamescope, which is what the Steam Deck uses as its compositor in game mode.
There are probably other areas with a higher impact that you can optimize before really worrying about the lack of a window manager though.

That just makes your writes to the disk more efficient because of block alignment and caching nonsense.
This is not true. The reason to use dd is to be able to write a fixed amount from any location in the source to any location in the destination; you have lots of control over how this happens. But the way everyone uses it - writing a whole file to another whole file - it offers no benefit. If anything, you have to tune the params to get decent performance out of it. Any other copy tool uses a better block size by default, so the best dd can do is match the performance of other copy commands like shell redirection and cp.
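A quick sketch of that equivalence for whole-file copies (file names are made up for the demo; GNU coreutils assumed):

```shell
# 1 MiB of test data.
head -c 1M /dev/urandom > src.img

# All three produce byte-identical copies; dd is the only one that needs
# an explicit block size (bs=) to avoid its slow 512-byte default.
dd if=src.img of=dst-dd.img bs=4M status=none
cp src.img dst-cp.img
cat src.img > dst-cat.img

cmp src.img dst-dd.img && cmp src.img dst-cp.img && cmp src.img dst-cat.img
```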
Brakes and steering do not need to be.
parse_oui_database takes in a file path as a &String that is used to open the file in a parsing function. IMO there are a number of problems here.
First, you should almost never take a &String as a function argument. It basically means you have a reference to an owned object. It forces the caller to allocate a full String only for the function to take a reference to it, and it excludes values that are already &strs, forcing the caller to convert them to a full String - which involves an allocation. The function should just take a &str, as it is cheap to convert a String to a &str (and a &str is cheaper to use than a &String as well, since &String involves a double indirection).
Sometimes it might be even better to take an impl AsRef<str>, which means the function can accept anything that can be viewed as a &str without the caller needing to convert it directly. Though for larger functions like this that might not always be the best idea, as it makes them generic and so they will be monomorphized for every type you pass in. This can bloat a binary if you do it on lots of large functions with lots of different input types. You can also get the best of both worlds with a generic wrapper around a concrete implementation - the large function takes a concrete &str, and a small wrapper takes an impl AsRef<str> and calls the inner function. Though in this case it is probably easier to just take a &str and manually convert at the one call site.
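The wrapper pattern looks something like this. The function names and the body are made-up stand-ins for the real parsing code:

```rust
// Small generic shim: monomorphized once per input type, but trivially cheap.
fn parse_oui(input: impl AsRef<str>) -> usize {
    parse_oui_inner(input.as_ref())
}

// One concrete implementation: compiled exactly once, no matter how many
// string-like types callers pass in.
fn parse_oui_inner(input: &str) -> usize {
    input.lines().count() // placeholder for the real parsing logic
}

fn main() {
    let owned = String::from("line one\nline two");
    println!("{}", parse_oui(&owned));    // &String works
    println!("{}", parse_oui(owned));     // String works
    println!("{}", parse_oui("a\nb\nc")); // &str works
}
```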
Second: String/&str are not the right types for paths. Those would be PathBuf and &Path, which work like String and &str (so all of the above applies to them as well). These are generally better to use for paths, as paths in most OSs don't have to be Unicode, which means there are file names (though very rare ones) that cannot be represented as a String. This is why File::open takes an impl AsRef<Path>, which your function could too.
Lastly, I would not conflate opening a file with parsing it. These should be two different functions. That makes the code a bit more flexible - you can get the data to parse from other sources. One big advantage would be testing, where you can just have the test data as strings in the test. It also makes the returned error type simpler, as one function can deal with IO errors and the other with parsing errors. And in this case you could parse the data directly from the internet request rather than saving it to a file first (though there are other reasons you may or may not want to do that).
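Putting both points together, a sketch of the split might look like this. The names and the record shape are hypothetical; the real parser would return real OUI entries:

```rust
use std::fs;
use std::io;
use std::path::Path;

// Parsing works on a plain &str: trivial to test, and it can only ever
// fail with a parse problem, never an IO one.
fn parse_oui_database(data: &str) -> Vec<(String, String)> {
    data.lines()
        .filter_map(|line| line.split_once('\t'))
        .map(|(oui, vendor)| (oui.to_string(), vendor.to_string()))
        .collect()
}

// File loading is a thin wrapper that deals only with IO, taking
// impl AsRef<Path> just like File::open does.
fn load_oui_database(path: impl AsRef<Path>) -> io::Result<Vec<(String, String)>> {
    let data = fs::read_to_string(path)?;
    Ok(parse_oui_database(&data))
}

fn main() {
    // Tests (or callers with in-memory data) never touch the filesystem.
    let parsed = parse_oui_database("00:11:22\tExample Corp\nAA:BB:CC\tAnother Inc");
    assert_eq!(parsed.len(), 2);
    assert_eq!(parsed[0].1, "Example Corp");
    println!("{parsed:?}");
}
```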
I have switched to using helix, so no matter which distro I am on I need to change it to be my default by setting the EDITOR env var.