. . . until something in the stack requires a significant kernel upgrade, and then you're stuck.
What exactly is the point of a stable release? I don't need everything pinned to specific versions—I'm not running a major corporate web service that needs a 99.9999% uptime guarantee—and Internet security is a moving target that requires constant updates.
Security and bug fixes—especially bug fixes, in my experience—are a good enough reason to go rolling-release even if you don't usually need bleeding-edge features in your software.
Take your time with the install process. It's possible that you'll breeze through it. It's also possible that you'll discover that, say, there's something wrong with the EFI implementation of the system you're installing to, and that you'll need to do some research to resolve it. I've had both experiences.
Once installed, Gentoo is pretty much rock-solid, and almost any issue you have can be fixed if you're willing to put the effort in. Portage is a remarkably capable piece of software and it's worth learning about its more esoteric abilities, like automatic user patch application.
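As an example of the user-patch mechanism: Portage applies any patches it finds under /etc/portage/patches automatically when it rebuilds the matching package. A minimal sketch (the package and patch names below are placeholders, not real ones):

```
# Patches under /etc/portage/patches/<category>/<package>/ are applied
# automatically during the prepare phase on any reasonably modern ebuild.
mkdir -p /etc/portage/patches/app-editors/some-editor    # hypothetical package
cp my-fix.patch /etc/portage/patches/app-editors/some-editor/
emerge --ask --oneshot app-editors/some-editor            # rebuild so the patch is picked up
```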
Do take the time to set up a binary package host. This will allow you to install precompiled versions of packages where you've kept the default USE flags. Do everything you possibly can to avoid changing the flags on webkit-gtk, because it is quite possibly the worst monster compile in the tree at the moment and will take hours even on a capable eight-core processor. (Seriously, it takes an order of magnitude more time than compiling the kernel does.)
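The client side of that is only a couple of files; a rough sketch follows (the sync-uri is a placeholder, so check the Gentoo wiki or handbook for the one matching your architecture and profile):

```
# /etc/portage/binrepos.conf -- where to fetch prebuilt packages from
[gentoobinhost]
priority = 1
sync-uri = https://distfiles.gentoo.org/releases/amd64/binpackages/23.0/x86_64/

# /etc/portage/make.conf -- prefer binary packages when USE flags match
FEATURES="${FEATURES} getbinpkg binpkg-request-signature"
```

With that in place, emerge pulls a prebuilt package whenever one exists for your exact USE flag combination and falls back to compiling locally otherwise.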
Install the gentoolkit package—equery is a very useful command. If you find config file management with etc-update difficult to deal with, install and configure cfg-update—it's more friendly.
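A few equery subcommands, for illustration (the package names are arbitrary examples):

```
equery belongs /usr/bin/ssh       # which installed package owns this file?
equery uses app-editors/vim       # USE flags for a package and their current state
equery depends dev-libs/openssl   # which installed packages depend on this one?
equery list '*'                   # list everything currently installed
```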
If you're not gung-ho about Free Software, setting ACCEPT_LICENSE="* -@EULA" (which used to be the default up until a few years ago) in make.conf may make your life easier. Currently, the default is to accept only explicitly certified Free Software licenses (@FREE); the version I've given accepts everything except corporate EULAs. It's really a matter of taste and convenience.

Lastly, it's often worthwhile to run major system upgrades overnight (make sure you --pretend first to sort out any potential issues). If you do want to run updates while you're at the computer, reduce the value of -j and other relevant compiler and linker options to leave a core free—it'll slow down the compile a bit, but it'll also vastly improve your experience in using the computer.

(I've been a happy Gentoo user for ~20 years.)
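To make those knobs concrete, here's roughly what they look like in make.conf for a hypothetical 8-thread machine (the numbers are examples to adapt, not recommendations):

```
# /etc/portage/make.conf
ACCEPT_LICENSE="* -@EULA"   # everything except corporate EULAs
MAKEOPTS="-j7"              # one less than the thread count keeps the desktop responsive
EMERGE_DEFAULT_OPTS="--jobs=2 --load-average=7"   # load-average stops parallel emerges piling up
```

The overnight run itself is the usual emerge --pretend --update --deep --newuse @world dry run, followed by the same command without --pretend.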
You get whatever drivers you checked off in the config. That might be only what you need for your machine, or you can build some extras, into the kernel or as modules (I've done make modules_install separate from updating the kernel more than once, because I needed support for a new peripheral). In order to boot the machine you only need a minimal set of drivers: CPU, video, keyboard (+ port), and hard drive. Anything else you can fix later if you need to.

My experience in moving a system with a custom kernel from an Athlon64 to a Phenom II more than a decade ago was that the CPU, video, and keyboard drivers were either the same for both or easy to figure out (CPU might have been a bit more difficult if I'd been switching between AMD and Intel, but not much), but I ended up building pretty much every possible hard drive controller driver directly into the kernel until I figured out which one the new board was using. The new system booted without issue, but I had to futz around a bit to get ALSA and other nonessentials back on track.
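For reference, the manual workflow that makes that separate modules step possible looks roughly like this (a sketch; run from the kernel source tree, typically /usr/src/linux):

```
make menuconfig        # enable the new driver, built in or as a module
make -j$(nproc)        # build the kernel image and modules
make modules_install   # can be rerun on its own after adding a module
make install           # copy the new kernel image to /boot (then update the bootloader)
```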
Gentoo—depends on your CFLAGS, specifically -march. You may have to change it to a more generic setting and rebuild the system set, plus build additional drivers into your kernel if you have a custom one, before you can safely proceed with the move.

In other words, you can get away without reinstalling, but it's a bit more involved because you may need to undo some customization first.
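A sketch of what that means in make.conf terms (the CPU-specific value here is just an example):

```
# /etc/portage/make.conf
# Before moving the disk: swap a CPU-specific -march for a generic baseline.
#CFLAGS="-O2 -pipe -march=znver3"   # old, machine-specific setting (example)
CFLAGS="-O2 -pipe -march=x86-64"    # conservative setting any amd64 CPU can run
CXXFLAGS="${CFLAGS}"
```

Then rebuild the core system with something like emerge --ask --emptytree @system before shutting the machine down for the hardware swap.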
(Observing Gentoo user is puzzled by the notion of a package manager that can't handle downgrades correctly without third-party assistance.)
That page only lists browser engines it thinks are "notable", which is not the same as viable. Microsoft stopped developing its own engines when it moved Edge to Blink.
Currently there are four viable browser engines (still being developed and capable of displaying enough sites with enough accuracy to make a plausible daily driver) in two families: WebKit and its fork Blink, and Gecko and its fork Goanna. Goanna is not corporate. In addition, there are some experimental engines, like Ladybird's.
I won't deny that the situation is dire, but it isn't quite as bad as you've painted it. Yet.
I think part of what you're missing may be a set of very old assumptions about where the danger is coming from.
Linux was modeled after UNIX, and much of its core software was ported from other UNIX versions, or at least written in imitation of their utilities. UNIX was designed to be installed on large pre-Internet multi-user mainframe+dumb terminal systems in industry or post-secondary education.

So there's an underlying assumption that a system is likely to have multiple human users, most of whom are not involved in maintaining the system, some of whom may be hostile to each other or to the owner of the system (think student pranks or disgruntled employees), and they all log in at once. Under those circumstances, users need to be protected from each other, and the system needs to be protected from malicious users. That's where the system of user and root passwords is coming from: it's trying to deal with an internal threat model, although separating some software into its own accounts also allows the system to be deployed against external threats.

Over the years, other things have been layered on top of the base model, but if you scratch the paint off, you'll find it there underneath.
Windows, on the other hand, was built for PCs, and more or less assumes that only one user can be logged in to a machine at a time. Windows security is concerned almost entirely with external threats: viruses and other malware, remote access, etc. User-versus-user situations are a very minor concern. It's also a much more recent creation—Windows had essentially no security until the Internet had become well-established and Microsoft's poor early choices about macros and scripts came back to bite them on the buttocks.
So it isn't so much that one is more secure than the other as that they started with different threat models and come from different periods of computing history.
Well, Apple hardware is a bit of a footgun in general. It isn't in their best interests for people to repurpose their old hardware (they want you to buy shiny new hardware and make them money), so they don't exactly go out of their way to make it easy. Driver availability outside their software ecosystem simply doesn't enter into consideration when they're designing a product. Nor does the ability to swap out a component for something with a different chipset. They've been that way since the mid-1980s.
I think you're using obsolete closed-source drivers that require external patches for each new kernel version. So the situation is entirely Broadcom's fault.
From the Gentoo ebuild for broadcom-sta:
If you are stuck using this unmaintained driver (likely in a MacBook), you may be interested to know that a newer compatible wireless card is supported by the in-tree brcmfmac driver. It has a model number BCM943602CS and is for sale on the second hand market for less than 20 USD.
So would I. I once ran a kernel a couple of years in arrears for a while, with everything else up-to-date, and had no userspace issues whatsoever. Granted, this was not on a Macbook.
It looks like the simplest method is to use rEFInd + shim or PreLoader—see The rEFInd Boot Manager: Managing Secure Boot.
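If you go that route, rEFInd's install script does most of the work; something along these lines (the shim path varies by distro and package, so treat it as a placeholder):

```
# Install rEFInd chainloaded through shim and generate local keys for signing
refind-install --shim /usr/share/shim/shimx64.efi --localkeys
```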
If you need a last-ditch option or want to get your hands dirty, the Gentoo wiki has recipes for enrolling new EFI keys by hand. This is a rather tedious procedure, but should work for any distro that comes with efitools, sbsigntools (for signing the kernel and modules, if not already signed), and openssl (for key generation).
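Condensed to its core, the by-hand recipe looks something like this (a sketch only; it assumes the PK and KEK pairs are generated the same way as the db pair and that the firmware is in setup mode):

```
# Generate a signing key and certificate for the signature database (db)
openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=my kernel signing key/" -keyout db.key -out db.crt
# Wrap it in EFI signature-list form and sign it with the KEK (efitools)
cert-to-efi-sig-list -g "$(uuidgen)" db.crt db.esl
sign-efi-sig-list -k KEK.key -c KEK.crt db db.esl db.auth
# Enroll it in the firmware (efitools)
efi-updatevar -f db.auth db
# Sign the kernel image so the firmware will boot it (sbsigntools)
sbsign --key db.key --cert db.crt --output /boot/vmlinuz.signed /boot/vmlinuz
```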
Any halfway decent desktop email client will do the job—people have already listed several. I use claws-mail, but getting it to work with GMail involves the computer equivalent of doing a triple backflip through a hoop, so you may want to go with something more common.
GitHub is only used to mirror the main repo (which is on gitweb.gentoo.org). I assume that was done to attract drive-by patches and to take the load of Portage git syncs off the Gentoo servers.
It broke a bash script that's going to be gone within a month. The continuous integration stack in Gentoo (which probably doesn't do quite what you think it does) is basically a stack of bash hacks that causes as many problems as it solves, so it's being retired. (relevant gentoo-dev ML thread)
When I first installed Gentoo, it was because it was one of only around three distros that supported x86_64 at the time. Yes, that was a long time ago.
I've kept it as a daily driver for a number of reasons. First, because I'm a control freak, and Gentoo goes out of its way to allow me to select exactly the packages I want, and gives me access to all the knobs and switches that other distros may hide in the name of user-friendliness.
Second, because once installed it's surprisingly solid and trouble-free—Portage is an excellent (if slow) package manager that, judging from what I've heard from people running other distros, is better than the average at preventing breakage, and since it's rolling-release there are no whole-distro upgrades to complicate things. I ran one system on rolling updates for 17 years without reinstalling, and it was still pretty much up-to-date on all packages when I retired it back in March—try that with Ubuntu. (The replacement system also runs Gentoo.)
Third, I've been with Gentoo for so long that I know how to create packages, unbork a system that I've messed up by doing something really stupid, and various other tricks. If I went to another distro, I'd have to relearn much of that from scratch.
(A fourth reason for some might be that it supports a wider range of CPU architectures than any other distro except possibly Debian.)
Writing a custom GTK3 theme for my own use a couple of years back was an extremely painful process. There's no list anywhere of the possible themable element types (I had to go through the actual source code to compile one) or the possible nonstandard options (never did manage to compile that list). I haven't had to look at GTK4 yet, but I doubt it's any better.
(As for how people use dark themes: put borders on things if you need to, and/or use hover options to distinguish what the active element is.)
The X3D CPUs are very sensitive to overvolting, and some mobo manufacturers had their boards set to overvolt out of the box before this was discovered. Result: fried CPUs.
"There is only one X server implementation"
That isn't quite true. There have been several proprietary implementations for non-Linux systems—Apple's XQuartz was still being maintained as of a couple of years ago, although I don't know about its current status. Standards documents exist, and anyone can code to them.
I'd just roll back the problem package to the last acceptable version until I have the time to address whatever the issue is (or block the new version of just that package if I have advance notification). That way, I get the fixes for everything else without breaking my workflow. If a rolling-release distro has a package manager that doesn't allow that, I'd contend that said package manager is broken.
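On a Portage system that's a couple of one-liners (the package name and version here are invented, purely for illustration):

```
# Roll back to the last version that worked for you...
emerge --ask "=app-misc/foo-1.2.3"
# ...and keep anything newer from being pulled in until you have time to deal with it
echo ">app-misc/foo-1.2.3" >> /etc/portage/package.mask/foo
```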