
  • Compared to e.g. pushing a button in VS Code and having your browser pop up with a pre-filled GitHub PR page? It's clunky, but that doesn't mean it's not useful.

    For starters it's entirely decentralised: a single email address is all you need to contribute to anything, regardless of where and how it's hosted. There was actually an article on Lobsters recently that I thought was quite neat, about how the combination of a patch-based workflow and email allows for entirely offline development, something that's simply not possible with the likes of GitHub or Codeberg.

    https://ploum.net/2026-01-31-offline-git-send-email.html

    The fact that you can "send" an email without actually sending it means you can queue your patch submissions up offline, then send them all whenever you're back online and download the replies at the same time.
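
    A minimal sketch of that queue-then-flush idea in Python (the outbox path and SMTP details are made up for illustration; the article itself drives this through git send-email and a local mail queue):

    ```python
    import mailbox, os, smtplib
    from email.message import EmailMessage

    OUTBOX = os.path.expanduser("~/.patch-outbox")  # hypothetical local queue

    def queue_patch(subject, body, to_addr, from_addr):
        """'Send' a patch while offline by dropping it into a local maildir."""
        msg = EmailMessage()
        msg["Subject"] = subject
        msg["From"] = from_addr
        msg["To"] = to_addr
        msg.set_content(body)
        mailbox.Maildir(OUTBOX, create=True).add(msg)

    def flush_outbox(smtp_host):
        """Once back online, deliver everything in the queue and clear it."""
        box = mailbox.Maildir(OUTBOX, create=True)
        with smtplib.SMTP(smtp_host) as smtp:
            for key in list(box.keys()):
                smtp.send_message(box[key])
                box.remove(key)
    ```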

  • Sourcehut uses it; it's actually the only way to interact with repos hosted there.

    It definitely feels outdated, yet it's also the workflow git itself was designed around. Git makes it really easy to rewrite commit history, while also warning you not to force-push rewritten history to a public branch (like e.g. a PR). None of that is an issue with the email workflow, where each email is always an entirely isolated, self-contained patch.

  • It definitely looks like it's going to be a standard USB HID type device, if their SDL support is anything to go by.

  • Windows is pretty much the same as Linux: it exposes the raw events from the device, and it's up to the app to handle them. Pretty sure the overlay handles that by sitting between the OS and the game and e.g. translating everything to Xbox-style controls if the game needs it (and getting out of the way if it doesn't).

    Outside of that, well, Valve added support for the controller to SDL, so anything using it will be fully supported. But then the game needs to actually be using a new enough version of SDL, otherwise it'll just see a generic controller device, and that can be hit or miss.
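
    Roughly that distinction, sketched with the pysdl2 bindings (assumes SDL2 and pysdl2 are installed; any device SDL's controller database recognises gets the standardised mapping, everything else shows up as a raw joystick):

    ```python
    import sdl2

    sdl2.SDL_Init(sdl2.SDL_INIT_GAMECONTROLLER)

    for i in range(sdl2.SDL_NumJoysticks()):
        if sdl2.SDL_IsGameController(i):
            # Known device: buttons/axes arrive pre-mapped to the
            # standard Xbox-style layout.
            pad = sdl2.SDL_GameControllerOpen(i)
            print("controller:", sdl2.SDL_GameControllerName(pad).decode())
        else:
            # Unknown device: raw axes and buttons only, the mapping
            # is the app's problem.
            joy = sdl2.SDL_JoystickOpen(i)
            print("generic joystick:", sdl2.SDL_JoystickName(joy).decode())
    ```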

  • SpaceX wrote in its July permit application — under the header Specific Testing Requirements — Table 2 for Outfall: 001 — that its mercury concentration at one outfall location was 113 micrograms per liter. Water quality criteria in the state call for levels no higher than 2.1 micrograms per liter for acute aquatic toxicity, and much lower levels for human health.

    That's roughly 54 times the acute limit. Cool, you can drink the mercury water, but I'll pass, thanks.

  • I've got some numbers; it took longer than I'd have liked because of ISP issues. Each measurement period is about a day, give or take.

    With the default TTL, my unbound server saw 54,087 total requests, 17,022 got a cache hit, 37,065 a cache miss. So a 31.5% cache hit rate.

    With clamping it saw 56,258 requests, 30,761 were hits, 25,497 misses. A 54.7% cache hit rate.

    And the important, most "unscientific" thing: I didn't encounter any issues with stale DNS results, in that everything still seemed to work and I didn't get random error pages while browsing or the like.

    I'm kinda surprised the total query counts were so close; I would have assumed a longer TTL would also cause clients to cache results for longer, making fewer requests (though e.g. Firefox actually caps TTLs at 600 seconds or so). My working theory is that for things like YouTube video delivery, instead of using static hostnames and rotating out IPs, they're doing the opposite and keeping the addresses fixed but changing the domain names, effectively cache-busting DNS.
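
    For anyone wanting to reproduce the numbers, they come straight out of unbound's counters; a small sketch, assuming unbound-control is configured and on PATH:

    ```python
    import subprocess

    # "stats" resets the counters as it reads them; "stats_noreset" just peeks.
    raw = subprocess.run(["unbound-control", "stats_noreset"],
                         capture_output=True, text=True, check=True).stdout
    stats = dict(line.split("=", 1) for line in raw.splitlines() if "=" in line)

    hits = int(stats["total.num.cachehits"])
    misses = int(stats["total.num.cachemiss"])
    total = int(stats["total.num.queries"])
    print(f"{hits} hits / {misses} misses ({hits / total:.1%} hit rate)")
    ```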

  • It's been a few years since I used a Mac, but even then resource forks weren't something you'd see outside of really old apps or some strange legacy use case; everything just used extended attributes or "sidecar" files (e.g. .DS_Store files in the case of Finder).

    Unlike Windows or Linux, macOS takes care to preserve xattrs when transferring files: e.g. its built-in archiver automatically converts them to AppleDouble sidecar files, stores them in a __MACOSX folder alongside the base files in the archive, and reapplies them on extraction.

    Of course nothing else does that, so if you've extracted a zip file or whatever and found that folder afterwards, that's what you're looking at.
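
    The layout is predictable enough to inspect by hand; a sketch that pairs each sidecar with its base file (pass the extraction directory as the first argument):

    ```python
    import os, sys

    root = sys.argv[1]                       # directory the zip was extracted to
    shadow = os.path.join(root, "__MACOSX")  # mirrors the archive's layout

    # Each AppleDouble sidecar ("._name") describes the file at the same
    # relative path, minus the __MACOSX prefix and the "._" on the name.
    for dirpath, _, files in os.walk(shadow):
        for name in files:
            if not name.startswith("._"):
                continue
            rel = os.path.relpath(os.path.join(dirpath, name), shadow)
            base = os.path.join(root, os.path.dirname(rel), name[2:])
            status = "exists" if os.path.exists(base) else "missing"
            print(f"{os.path.join(dirpath, name)} -> {base} ({status})")
    ```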

  • I definitely agree, it just makes it a more precarious position to be in.

  • Set that minimum TTL to something between 40 minutes (2400 seconds) and 1 hour; this is a perfectly reasonable range.

    Sounds good, let's give that a try (cache-min-ttl, in unbound's case) and see what breaks.

  • Because of static linking, a single GPL dependency turns the entire resulting binary into a GPL-licensed one, so yeah, just use something like the MPL in that case (or the EUPL, which I hear is similar).

    LGPL has the same issue, since it only provides an exception for dynamic linking. But honestly that's all an issue for lawyers and judges to sort out (I bet you could win in court with an argument that dynamically linking to GPL is actually fine).

  • What's the risk here though, that a company like Amazon makes a closed-source version of it?

    If it was a file format library, or something like a web server, I'd get it. But stuff like cp is effectively just a userspace wrapper around kernel APIs.
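
    A toy illustration of that point: a functional cp is little more than a loop over syscalls, with the kernel doing all the real work (error handling and metadata copying omitted):

    ```python
    import os, sys

    # open(2), read(2), write(2): userspace just shuttles bytes
    # between two file descriptors.
    src = os.open(sys.argv[1], os.O_RDONLY)
    dst = os.open(sys.argv[2], os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    while chunk := os.read(src, 1 << 16):
        os.write(dst, chunk)
    os.close(src)
    os.close(dst)
    ```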

  • Or you'll create something that is genuinely better, with good longevity, and then discover you have next to no sales growth, since once somebody buys it they never need to replace it.

  • The latest Nvidia drivers have broken compositing in Xfce, so I've been raw-dogging basic X11. It's like I'm using WinXP again.

  • "Enhance Your Calm" is official as well, it's a HTTP/2 error code.

    Pretty sure it's primarily a Demolition Man reference.

  • I like the idea that it's hard to boil water, but easy to find a person whose body temperature is exactly the same as the reference point.

  • Boxes doesn't seem to expose it, unfortunately (par for the course for a Gnome app). virt-manager seems like a better option in that case: you can share an entire drive from the host to the VM (roughly the libvirt operation sketched below), or, if the hardware allows it, pass through the SATA controller itself and let the VM manage the whole thing.

    Edit: https://www.reddit.com/r/kvm/comments/klpyg2/how_can_i_use_my_windows_hard_disk/

    The only VM stuff I'm actually running is Proxmox, and while it all uses the same underlying KVM infrastructure, the UI is entirely different. In my case I've got my router running as a VM, and I'm handing the network adapter itself off to the VM; it's entirely unusable by the host OS. So while I know the functionality is there, I've got no experience with the specific software side.
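
    Under virt-manager that drive-sharing boils down to a libvirt block-device attach; a sketch with the libvirt Python bindings (the guest name and disk path are made up):

    ```python
    import libvirt

    # Raw passthrough of an entire host disk as a virtio block device.
    DISK_XML = """
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/sdb'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("windows-guest")  # hypothetical guest
    dom.attachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    ```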

  • That's just VirtualBox; I had the same issues on Windows, because it ships its own kernel VM module that isn't compatible with the hypervisors built into modern OSes.

    Tried Boxes?

  • I went out of my way to see this during one of their open days, it's as interesting in person as you'd expect.

  • The idea is that it's left up to the windowing toolkit itself (e.g. GTK or Qt), so the compositor can focus on just compositing. That makes sense IMO, since it's how other platforms handle it (except they have a single OS-provided windowing implementation). The problem is that it leads to massive fragmentation of functionality: every app has different titlebars and features based on the toolkit it uses, and every app has to handle decorations itself, which sucks and shouldn't be the case.

    Like in the Factorio case: it uses SDL for windowing, and SDL actually supports drawing titlebars itself (via libdecor, see the sketch below). Factorio just wasn't including the dependency that enabled it at that point, so all it took to fix it was including it, and everything started working. But that's still extra work that had to be done just to get minimum functionality, which wasn't needed on e.g. KDE.

    I mentioned in my other response that it's the inflexibility that's the actual problem. Lots of apps do want CSD, or at least control over how their windows are presented, but Gnome going "you're on your own" is the worst outcome.
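
    The SDL side of that is a single hint; a sketch with the pysdl2 bindings (assumes an SDL2 build with libdecor available):

    ```python
    import sdl2

    # Ask SDL's Wayland backend to draw its own titlebar/borders via
    # libdecor instead of leaving the window bare on Gnome.
    sdl2.SDL_SetHint(b"SDL_VIDEO_WAYLAND_ALLOW_LIBDECOR", b"1")

    sdl2.SDL_Init(sdl2.SDL_INIT_VIDEO)
    win = sdl2.SDL_CreateWindow(b"decorated", sdl2.SDL_WINDOWPOS_CENTERED,
                                sdl2.SDL_WINDOWPOS_CENTERED, 640, 480,
                                sdl2.SDL_WINDOW_SHOWN | sdl2.SDL_WINDOW_RESIZABLE)
    sdl2.SDL_Delay(3000)  # long enough to eyeball the decorations
    sdl2.SDL_DestroyWindow(win)
    sdl2.SDL_Quit()
    ```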