
  • I'd like to politely disagree

    Finding alternatives to large software packages is great - don't get me wrong - but any time you have competitor X and competitor Y, be they both commercial, both F/OSS, or some combination thereof, the competitors must be cognizant of each other when designing features.

    Burying your head in the sand and ignoring Microsoft, Apple, and Google is a very solidly Microsoft-Apple-Google-style play. It's the play of someone who believes the other side offers no competition. That's how you get the unwieldy features these tech giants ship: they know they can make a 70% effort and people won't be annoyed enough to leave.

    Every tool they make exists because someone needed it, and many are genuinely important - for example, the Microsoft Office document formats are treated as near-universal formats for the presentations, spreadsheets, and plain documents that businesses pass between each other.

    But as we as a society design alternatives to those various monopolies (as we should), we need users to want to use the new thing. We have to take what people like - what keeps them on their old platform - and preserve the intent of it as best we can on the new platform. Doing so requires honestly discussing the features those big tech companies offer.

    And as users, when we select the platforms we use, we need to weigh the cost of going with an alternative vs going with a giant. No solution is a perfect solution for everyone, and the chooser needs to weigh the maintenance cost (in hours or money) they will incur, how their users will like/dislike it, and maybe even look at a piece of software and decide "nah the vibes are off".

    I'd love a world where those three tech giants had proper competition in all fields, and I think their business practices are scummy and need improvement. But the real alternatives to each need some polish before they're ready to be used by [arbitrary tech-illiterate grandmother].

  • Why would that be illegal? Shouldn't there be some way to plug an older flash drive or console cable into a laptop that doesn't have a type-A port? (Ahem, Mac.)

  • A-to-B made more sense in a world where devices couldn't negotiate to serve either role. When I got my Android phone, the supported data transfer method was to connect my iPhone's charge port to the Android's charge port; the Android then initiated the connection as the host device.

    The true crime is not that the cable is bidirectional; the true crime is that there is little to no proper distinction or error checking between the USB, Thunderbolt, and DisplayPort modes that are all carried on the same connector. I have no issues with the port supporting tunneled connections - that is in fact how docking stations work - just with the minimal labeling we get on modern devices.

    I'd be fine with a type-A to type-A cable if both devices had a reasonable chance at operating as both the initiator and target - but that type of behavior starts with USB-OTG and continues in type-C.

  • Others have some good information here - all I'd like to add at the root is that Windows and macOS have a built-in DNS cache, and it's pretty straightforward to add one to systemd distros (if it's not already installed and in use) using systemd-resolved, or dnsmasq if you really dislike systemd. Some distros enable this at install time.

    Systems that utilize a DNS cache will keep copies of DNS query results for a period of time, making the application-level name lookup speed essentially 0ms for a cached result. Cold results obviously incur the latency of the DNS server itself.
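
    If you want to see the cache in action, here's a rough check on a systemd-resolved machine (assuming /etc/resolv.conf points at the 127.0.0.53 stub; counter names vary a bit by version):

    ```
    # Enable the resolver/cache if the distro didn't already
    sudo systemctl enable --now systemd-resolved

    # Cold lookup: pays the full round trip to the upstream DNS server
    time getent hosts example.com

    # Repeat immediately: a cached result should come back near-instantly
    time getent hosts example.com

    # Cache hit/miss counters, to confirm it really was the cache
    resolvectl statistics
    ```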

  • HLS is a bidirectional protocol, though - the system's total network latency affects how quickly the player can switch to a new bitrate stream as conditions improve or degrade. And despite the name, it's not limited to live content; you can use it to deliver fixed-length content too.

    https://en.wikipedia.org/wiki/HTTP_Live_Streaming
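
    For reference, the bitrate switching works off a master playlist that lists the same content encoded at several bitrates; the player measures throughput as it downloads segments and hops between variants. A minimal sketch (bandwidths and paths made up):

    ```
    #EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
    360p/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
    720p/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
    1080p/index.m3u8
    ```

    For fixed-length (VOD) content, the variant playlists simply end with an #EXT-X-ENDLIST tag instead of being appended to forever.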

  • As much as I dislike how Intel works sometimes, this market does not need fewer competitors.

  • "Far-UVC has a lot of potential once it's scaled up. Right now, we're still learning about best practices."

    "Institutions should be adopting this tech at scale."

    If we're still learning about best practices, why are we talking about deploying this at scale? Self-contradictory article...

    It should be the other way around: figure out if it works academically, then test at small scale, then scale up with proven and reproducible results. That's how science works - best practices can be formulated and adjusted at each stage as more knowledge is gained. That's also how we avoid making a massive health mistake and giving an entire convention center indoor sunburns, especially the people who might be more sensitive to UV.

  • Not on a flash-based motherboard (so basically almost everything recent). On modern systems, usually the only thing the battery powers is the clock, which is why they have a separate reset-to-defaults header/button/switch.

    (The CMOS memory of old is replaced with flash memory, à la an SD card or flash drive.)

  • (USA) Having eaten at Domino's, Papa John's, and a large selection of local places, I can say only one local place was worse than Domino's. The rest were all light-years better.

  • Outside of the press release that went to the media, they confirmed that a range of CPUs was affected by a fabrication issue. So while we know about the i7/i9 parts, manufacturing processes are often shared between different CPU models, and with Intel being opaque about what they found, it's hard to understand what actually happened and what's truly unaffected.

    Ref: GamersNexus - https://youtu.be/OVdmK1UGzGs

  • Gotcha. Yeah, low-level Unix has some weird stuff going on sometimes.

  • Oh thank goodness, that was one of my main complaints with the system. Did they ever get around to requiring sudo like MacPorts (and any other reasonable system-level package manager on BSD/Linux)?

  • After CrowdStrike, are we sure it's not all blue screens in the Windows column?

  • If it's anything like when I used a Mac regularly 7 years ago, Homebrew doesn't install to /bin; it installs to /usr/local/bin, which only helps scripts whose shebang goes through env (or cases where you invoke the shell by name from your PATH). You're just putting a newer bash earlier in the PATH, not actually updating the one that comes with the system.
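
    Roughly, the difference looks like this (script names and output are illustrative):

    ```
    $ which -a bash           # PATH order decides what bare "bash" means
    /usr/local/bin/bash       # Homebrew's newer bash, because it's first in PATH
    /bin/bash                 # the bash that ships with macOS (3.2)

    $ head -1 hardcoded.sh    # hypothetical script with a fixed interpreter
    #!/bin/bash               # always runs the system bash, ignores PATH

    $ head -1 env-based.sh    # hypothetical script using the env shebang
    #!/usr/bin/env bash       # env searches PATH, so Homebrew's bash wins
    ```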

  • TL;DR: a lot of people probably keep using the thing they already know, as long as it keeps working well enough not to be a bother.

    Many, many years ago when I learned, I think the only ones I found were Apache and IIS. I had a Mac at the time, which came pre-installed with Apache2, so I learned Apache2 and got okay at it. While by release dates Nginx and HAProxy most definitely existed, I don't think I came across either in my research. I don't have any notes from the time - I didn't take any, because I was in high school.

    When I started doing Linux things, I kept using Apache for a while because I knew it. Then I found Nginx and learned it in a snap, because its config is more natural-language and hierarchical than Apache's XML-ish monstrosity (see the sketch below). For the next decade I kept reaching for Nginx whenever I needed a webserver fast, because I knew it would work with minimal tinkering.
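
    To illustrate (hostname and paths made up), the same minimal vhost in each:

    ```
    # Apache: XML-ish tags
    <VirtualHost *:80>
        ServerName example.com
        DocumentRoot /var/www/example
    </VirtualHost>

    # Nginx: plain hierarchical blocks
    server {
        listen 80;
        server_name example.com;
        root /var/www/example;
    }
    ```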

    Now, as of a few years ago, I knew that HAProxy, Caddy, and Traefik all existed. I even tried out Caddy on my homelab reverse proxy server (which has about a dozen applications routed through it). The first few sites were easy - just let the automatic Let's Encrypt support do its job - but once I got to the sites that needed manual TLS (I have both an internal CA and Cloudflare's origin HTTPS certs) and other special config, Caddy started becoming as cumbersome as my Nginx conf.d directory. At the time I also didn't have a way to get software updates easily on my then-CentOS 7 server, so Caddy was okay-enough, but it was back to Nginx with me because it was comparatively easier to manage.
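
    The gap I mean looks roughly like this (Caddyfile sketch; hostnames and cert paths made up) - the automatic case is a one-liner, but every manual-TLS site needs its certs wired in by hand:

    ```
    # Easy case: Caddy obtains and renews a Let's Encrypt cert on its own
    app.example.com {
        reverse_proxy 10.0.0.10:8080
    }

    # Manual case: internal CA or Cloudflare origin cert, loaded by hand
    internal.example.com {
        tls /etc/caddy/certs/internal.crt /etc/caddy/certs/internal.key
        reverse_proxy 10.0.0.11:8080
    }
    ```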

    HAProxy is something I've added to my repertoire more recently. It took me quite a while and lots of trial and error to figure out the config syntax, which is quite different from anything I'd used before (except maybe kinda like Squid, which I had learned not a year prior...), but once it clicked, it clicked. Now I have an internal high-availability (+keepalived) load balancer that can handle plenty of backend servers, do wildcard TLS termination, and validate backend TLS certs. I even got LDAP and LDAPS load balancing to AD working on it, for services like Gitea that don't behave well when there's more than one LDAPS backend server.
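
    A stripped-down sketch of that kind of config (addresses and paths made up; global/defaults sections omitted):

    ```
    frontend https_in
        bind :443 ssl crt /etc/haproxy/certs/   # wildcard cert(s) live in this dir
        default_backend web

    backend web
        balance roundrobin
        # Re-encrypt to the backends and validate their certs against our CA
        server app1 10.0.0.21:443 ssl verify required ca-file /etc/ssl/internal-ca.pem check
        server app2 10.0.0.22:443 ssl verify required ca-file /etc/ssl/internal-ca.pem check

    # LDAPS is just TCP as far as HAProxy is concerned
    frontend ldaps_in
        mode tcp
        bind :636
        default_backend ldaps

    backend ldaps
        mode tcp
        server dc1 10.0.0.31:636 check
        server dc2 10.0.0.32:636 check backup   # failover keeps picky clients on one live server
    ```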

    So at some point I'll get around to converting that do-everything reverse proxy to HAProxy. I'll probably need to deploy another VM or two, though, because the existing one also hosts a static web server, and I've been meaning to break up that server's roles anyway (long ago, it was my everything-server, before I used VMs).

  • A static PNG tile database for world.osm is even larger. Without a solid vector tile solution, rendering from the database on demand is the most efficient use of disk space.

    Also, there's a post-render CDN cache in front of the rendering layer to offset load, plus, I think, some internal caching in renderd. It's a pretty complex machine, but databases of the whole world are in fact huge.
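
    Back-of-the-envelope on why pre-rendering everything is off the table (tile size is a rough guess, and real deployments dedupe empty ocean tiles, so treat it as an upper bound):

    ```
    # A slippy map has 4^z tiles at zoom z, so zooms 0-19 total (4^20 - 1) / 3
    echo $(( (4**20 - 1) / 3 ))   # 366503875925 tiles
    # Even at ~10 KB per PNG that's on the order of petabytes,
    # hence render-on-demand plus caching instead of a static tile set.
    ```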

  • OSM's core tile servers have dozens of cores and hundreds of GB of RAM each, and the rendering and lookup databases are a few TB. That's not trivial to self-host, especially since a single self-hosted tile server can't always keep up with a user flick-scrolling.

    Edit: car GPS maps and the old TomTom and Garmin devices have significantly less metadata embedded than a modern map.