
  • As far as I understand, in this case opaque binary test data was gradually added to the repository. Also, the built binaries did not correspond 1:1 to the code in the repo for build-chain reasons. Stuff like this makes it difficult to spot deliberately placed bugs or backdoors.

    I think some measures can be:

    • establish reproducible builds in CI/CD pipelines
    • ban opaque data from the repository. I have read people justifying why this test data needs to be opaque, but that is nonsense. There's no reason why you couldn't compress and decompress a lengthy Creative Commons text, or, for binary data, encrypt that text with a public password, use a sequence from a pseudo-random number generator with a known seed (see the sketch after this list), use a past compiled binary of this very software, or ... or ... or ...
    • establish technologies that make it hard to sneak in integer overflows or deliberate out-of-bounds array accesses. That would make it a lot harder to plant misbehaviour in the code without it being obvious enough for others to notice. Rust, linters, Valgrind etc. would be useful for that.
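
    For the second point, a minimal sketch of how a test fixture could be generated deterministically from a known seed instead of being checked in as an opaque blob (file name and size are made up):

    ```python
    import random

    def make_test_blob(seed: int = 42, size: int = 1 << 20) -> bytes:
        """Deterministically generate pseudo-random test data from a known seed."""
        rng = random.Random(seed)
        return bytes(rng.getrandbits(8) for _ in range(size))

    # Anyone can regenerate and diff the fixture instead of trusting a checked-in blob.
    with open("fixture.bin", "wb") as f:  # hypothetical fixture name
        f.write(make_test_blob())
    ```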

    So I think from a technical perspective there are ways to at least give attackers a hard time when trying to place covert backdoors. The larger problem is likely who does the work, because scalability is just such a hard problem with open source. Ultimately I think we need to come together globally and carry this work on many shoulders. For example, the Prossimo project by the Internet Security Research Group (the organisation behind Let's Encrypt) is working on bringing memory safety to critical projects: https://www.memorysafety.org/ I also sincerely hope the German Sovereign Tech Fund ( https://www.sovereigntechfund.de/ ) takes this incident as a new angle on the outstanding work they're doing. In the end, we need many more such organisations and initiatives, from both private companies and the public sector, to jointly protect the technology that runs our societies.

  • Well, you must have either set up a port forward (IPv4) or opened the port for external traffic (IPv6) yourself. It is not reachable by default: home routers put a NAT between the internet and your devices, or in the case of IPv6 they block incoming requests. So (unless you have a very exotic and unsafe router) just, uhhh, don't 😅 To serve websites it is enough to open 443 for HTTPS, and possibly 80 for HTTP if you want to serve an automatic redirect to HTTPS.

  • A colleague of mine had a (non-externally-reachable) Raspberry Pi with default credentials hijacked into a botnet by an infected Windows computer in the home network. I guess you'll always have people come over with devices whose security condition you do not know. So I've started to consider the home network insecure too, and one of the things I want to set up is an internal SSH honeypot with notifications, so that I get informed about devices trying to hijack others. For this purpose that tool seems a possibility; hopefully it is possible to set up some monitoring and notification via Uptime Kuma.
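
    Just to illustrate the concept (not how that tool works): a tiny listener that logs connection attempts on an SSH-looking port and fires a notification at a push/webhook URL, e.g. an Uptime Kuma push monitor. Port and URL below are made up:

    ```python
    import socket
    import urllib.request

    LISTEN_PORT = 2222                                   # pretend-SSH port, pick your own
    PUSH_URL = "http://kuma.lan/api/push/XXXX?msg="      # hypothetical Uptime Kuma push URL

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", LISTEN_PORT))
        srv.listen()
        while True:
            conn, (addr, _) = srv.accept()
            conn.sendall(b"SSH-2.0-OpenSSH_9.6\r\n")     # look like sshd, then hang up
            conn.close()
            print(f"connection attempt from {addr}")
            try:                                         # fire-and-forget notification
                urllib.request.urlopen(PUSH_URL + addr, timeout=5)
            except OSError:
                pass
    ```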

  • Ah thank you, that wasn't obvious to me from its website

  • Why do you prefer it over syncthing?

  • You do not want OctoPrint on a machine that is busy. Load spikes can keep OctoPrint from sending the move commands (G-code) as fast as the printer executes the movements, and the printer then stutters because it has to take small breaks waiting for the next command. This problem is pronounced with faster printers and with slicers that break arcs up into small straight lines (which is practically all slicers) - see the rough numbers below.
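
    A back-of-the-envelope calculation with made-up but typical numbers shows how high the command rate gets:

    ```python
    # How many G-code moves per second must the host deliver?
    print_speed_mm_s = 150     # outer-wall speed of a reasonably fast printer (assumed)
    segment_length_mm = 0.5    # typical segment length when a slicer splits arcs into lines (assumed)

    moves_per_second = print_speed_mm_s / segment_length_mm
    print(f"{moves_per_second:.0f} moves/s")   # 300 moves/s

    # If a load spike stalls the host for 100 ms, the printer's small command
    # buffer runs dry and the print visibly stutters.
    ```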

  • What privacy concerns do you have? I'm all for privacy, but I don't really see where registrars are a delicate topic in that regard. The most that comes to mind is that some (most?) offer a service where they do not give out your name and address for WHOIS requests, but instead the details of the registrar (Namecheap has that, for example).

  • True words. The sustained effort to keep something in decent shape over years is not to be underestimated. When life changes and one is no longer able or willing to invest that amount of time, ill-timed issues can become quite the burden. At one point I decided to cut down on that with a better-founded setup that does backups with easy rollback automatically, and updates semi-automatically. I rely on my server(s), and everything from having that idea to having it decently implemented took me a number of months - simply because time for such activities is limited, and getting a complex, intertwined system like this automated and monitored in a reliable, fault-tolerant way is something else entirely than spinning up a one-off service.

  • And they believe all employees actually remember so many wildly different and long passwords, and regularly change them to wildly different ones? All this leads to is a single password that barely clears the minimum requirements, plus a suffix for the stage (like 1 for boot, 2 for BitLocker etc.), and then another suffix for the month they changed it. All of that then on sticky notes on the screen.

  • I ordered some parts from them a couple weeks ago to build my own custom laptop, and they're finally on their way and I'm super excited! The article doesn't mention this, but you can order hinges, the keyboard (with or without case), the trackball/trackpad and all these parts individually from them and use them for your own purposes.

    It is just mind-boggling how much MNT encourages hacking on their stuff. They even made a dedicated logo you can put on things that are built to work with the Reform ecosystem / derivatives: https://source.mnt.re/reform/reform/-/blob/master/symbol-for-derived-works/mnt-based-reform.svg

    You can also search for the founder Lukas F. Hartmann and find a couple interviews out there.

  • Since you run everything in docker, I guess you have experienced the benefits of containerization. So why not leverage that for your host too?

    Fedora IoT is a container-based host that runs on your hardware, with a focus on edge device deployment.

    https://fedoraproject.org/iot/

    I have it running on two servers as well, and it works great. The only thing I changed is that I layered Docker on it instead of using Podman, because at the time I had trouble getting my reverse proxy working properly over IPv6.

  • I don't get your second paragraph. There are many Markdown editors, and you can use their built-in export functions or pandoc to convert to EPUB/PDF/whatever. What features are missing from those editors?
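
    The pandoc route, for example, is a one-liner per output format (file names made up; wrapped in Python here only to keep the examples in one language):

    ```python
    import subprocess

    # pandoc infers the input/output formats from the file extensions
    subprocess.run(["pandoc", "notes.md", "-o", "notes.epub"], check=True)
    subprocess.run(["pandoc", "notes.md", "-o", "notes.pdf"], check=True)   # PDF output needs a LaTeX engine installed
    ```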

  • Those are symptoms of sitting at that operating point permanently, and they are of course a concern. What I'm after is that people think energy gets put into the battery, i.e. it gets charged, as long as a "charger" is connected to the device (hence terms like "overcharged"). But that is not true, because what is commonly referred to as a "charger" is no charger at all. It is just a power supply and has literally zero say in if, how and when the battery gets charged. The battery only gets charged if the charge controller in the device decides to do so, and if the protection circuit allows it. And that is designed to only happen when the battery is not full. When it is full, nothing more happens; no current flows into or out of the battery anymore. There's no damage from being "charged all the time", because no device keeps pumping energy into a full cell.

    There is however damage from sitting (!) at 100% charge with medium to high heat. That happens independently of whether a power supply is connected to the device or not. You can just as well damage your cells by charging them to 100% and storing them in a warm place while topping them off once in a while. This is why you want to keep them at lower room temperature and at ~60%, no matter if a device/"charger" is connected or not.

    (Of course keeping a battery at 60% all the time defeats the purpose of the battery. So just try to keep it cool and charged to >20% and <80% most of the time, and you're fine.)
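
    To make the "the power brick has no say" point concrete, here is a toy model of the decision the charge controller inside the device makes (grossly simplified and purely illustrative; real controllers do CC/CV charging and much more):

    ```python
    # Toy model only: the "charger" merely offers power; whether any of it
    # reaches the cell is decided here, inside the device.
    def charge_current_a(cell_voltage_v: float, power_supply_connected: bool) -> float:
        FULL_VOLTAGE_V = 4.2      # typical Li-ion full-charge voltage
        if not power_supply_connected:
            return 0.0            # nothing to charge from
        if cell_voltage_v >= FULL_VOLTAGE_V:
            return 0.0            # cell is full: no current, hence no "overcharging"
        return 1.5                # made-up charge current in amps

    # Plugged in around the clock with a full cell: the controller simply does nothing.
    print(charge_current_a(4.2, power_supply_connected=True))   # 0.0
    ```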

  • "Overcharging" doesn't exist. There are two circuits preventing the battery from being charged beyond 100%: the usual charge controller, and normally another protection circuit in the battery cell itself. Sitting at 100% and being warm all the time is enough for a significant hit on the cell's longevity, though. An easy measure that is possible on many laptops (like ThinkPads) is to set a threshold at which charging stops. Ideal for longevity is around 60%. Also ensure good cooling.
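
    On a ThinkPad running Linux, setting that threshold is a one-line write to sysfs (the attribute is provided by the thinkpad_acpi driver on reasonably recent kernels; battery name and exact path may differ on other models, so treat them as assumptions):

    ```python
    from pathlib import Path

    # Stop charging at 60% for longevity; needs root.
    Path("/sys/class/power_supply/BAT0/charge_control_end_threshold").write_text("60")
    ```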

    Sorry for being pedantic, but as an electrical engineer it annoys me that there is more wrong information out there about Li-Po/Li-Ion batteries, chargers, and even USB wall warts and USB Power Delivery than there is correct information.

  • It is not that easy to understand what you want. To me it reads like you want something like Nextcloud, i.e. your own little cloud where you can put all your stuff and view it through the web browser or the Nextcloud apps, and also keep selected parts of your stuff in sync with your devices (or automatically upload photos taken with your smartphone, for example).

    Backup of Nextcloud (or whatever you end up using) is a separate topic. Any incremental backup tool would do, though, so there's much to choose from. I personally use btrbk, which uses btrfs send/receive to push incremental snapshots to an offsite server.
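
    Conceptually btrbk automates something like the following (paths and host are placeholders; this is a sketch of plain btrfs send/receive, not btrbk's actual invocation):

    ```python
    import subprocess

    # Take a new read-only snapshot, then send only the delta against the
    # previous snapshot to the offsite machine.
    subprocess.run(["btrfs", "subvolume", "snapshot", "-r", "/data", "/data/.snapshots/new"], check=True)

    send = subprocess.Popen(
        ["btrfs", "send", "-p", "/data/.snapshots/old", "/data/.snapshots/new"],
        stdout=subprocess.PIPE,
    )
    subprocess.run(["ssh", "backup@offsite", "btrfs", "receive", "/backup/data"],
                   stdin=send.stdout, check=True)
    send.wait()
    ```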

  • Partly yes, but just installing a package without running into conflicts does not yet guarantee a working system. You have to cater for the right configuration too, for example in a corporate setting with all kinds of networking woes (shares, VPNs and such). I think you could get this to work with Nix somehow, but you want to test these things beforehand, and if you do so using images, then you already have the thing to ship to the machines in your hands; there's no need to compose the OS and configuration over and over again on every machine.

    Another aspect of non-atomic OS composition on the target is that you have to deal with the transient phase from one state to the next. In this phase all kinds of things can happen; for example, an update of the NVIDIA drivers renders CUDA non-functional until the next reboot, because the userspace and kernelspace parts no longer fit together. With any of the Fedora Atomic variants, transient phases with essentially undefined behaviour do not exist, and the time the system is not guaranteed to be in working order is reduced to just the reboot.

    Nix is cool and definitely better than any traditional package manager. But it is not the ultimate solution; to be honest, so far it seems to me like it lives in a niche of enthusiasts who are smart enough to put up with its unique declaration language. Below that niche you have ordinary Linux users who may just be happy with Silverblue without any modifications, and above that niche you have corporations doing their own images in CI/CD, CoreOS and all that jazz.

  • "that doesn't require I keep a full local copy of all the data"

    If you don't do that, the place that you call "backup" is the only place where the data is stored - and that is not a backup. A backup is an additional place where the data is stored, for the case that your primary storage gets destroyed.

  • /dev/fb is mostly one thing: deprecated. It is also not really an interface to your graphics card; it is a legacy way, kindly still provided, to push fullscreen pixels to your monitor in an unaccelerated fashion, for things that have not made it to KMS/DRM (which at this point is pretty much only the console emulation on the TTYs). It is not an interface to the graphics card because it doesn't expose any of the capabilities a graphics card has (like shaders etc.). In fact, for just pushing pixels you can leave the graphics card out of your computer entirely if you connect your screen by other means (think of SPI, which is common in embedded devices; you can find many examples of such drivers in the kernel source at drivers/gpu/drm/tiny).
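
    To illustrate how bare-bones that interface is: pushing pixels means literally writing raw bytes into a file. The sketch below assumes fb0 exists and uses 32 bits per pixel in the common BGRX layout; check /sys/class/graphics/fb0/ for your actual resolution and format:

    ```python
    # Fill the whole framebuffer with a solid colour - no GPU features involved.
    with open("/sys/class/graphics/fb0/virtual_size") as f:
        width, height = (int(v) for v in f.read().split(","))

    pixel = bytes([255, 0, 0, 0])          # blue, assuming 32 bpp BGRX
    with open("/dev/fb0", "wb") as fb:     # usually needs root or the 'video' group
        fb.write(pixel * width * height)
    ```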

  • Well, maybe you yourself are too new to recognize some of the appeals ;)

    One large advantage of Silverblue is that the whole composition of the OS does not take place on the target machine. That means all the issues that could arise do not arise on the target machine and can be dealt with beforehand. In the simple case this means just enjoying vanilla Silverblue without having to think about possibly borking the machine. In an advanced use case it could mean building the OS images in a GitLab CI/CD pipeline (with the well-working tooling that already exists for Docker etc.), then having automatic tests in the pipeline ensure that everything important works as expected. Only if the tests pass does the image get pushed to the repository's image registry, from where the target machines fetch it automatically and rebase to it.