
Posts: 1 · Comments: 249 · Joined: 2 yr. ago

  • It's a 0.x release. It makes sense to build the intended features first before optimizing heavily. There's no point in having an optimized data structure that then falls flat once you need to add new features that bring new requirements to it.

    Once they label it 1.x (i.e. feature complete and production ready) I would expect it to be optimized. If it isn't, criticism is warranted.

  • The linked ticket also references a merge request that went stale. So I would assume this is a good starting point (I haven't looked at the MR though, so I don't know how far off from the potentially accepted solution it is).

  • I don't think there is a technical reason. Simply, no one has been interested in implementing it yet. See Nate's answer over at Reddit and the associated ticket.

    So once someone is motivated enough, it will happen. But without contributions or extreme boredom on the part of the core maintainers (haha) it won't happen.

  • Well, exactly as you say: it's a single service instead of having to combine multiple. In my case dovecot was a lot faster for my mailboxes, but postfix was a piece of shit and I was happy to get rid of it and the many components (rspamd, dkimproxy, etc.) it required. It has far too many footguns, and I shot myself with them multiple times over the years. So the most important part (SMTP) is significantly simpler and IMO better with Stalwart. And the mailbox part will hopefully evolve as well (it already has JMAP, which is already an advantage over dovecot).

  • Use Stalwart as the mail server. Besides coming with sane defaults, it lets you put hooks into almost every stage of mail processing. Those hooks can be Sieve scripts, local binaries or HTTP calls.

  • It kind of is, unfortunately. Games are often developed under a lot of pressure and with the constant threat of the budget being cut. I don't think the devs are incompetent, and I doubt they consider what they produced (code-quality wise) to be the best they could do, but what can they do if they need a result to present to the publisher by the end of the week and then don't get money (aka time) to clean it up, but instead get the next deadline?

    On the other hand, I am also not sure I can blame publishers. Things can easily spiral out of control if managed badly in the other direction... see Cloud Imperium Games (i.e. Star Citizen).

  • Yeah, but it also shows the weird naming of WSL. WoW64 is Windows (32) on Windows 64, yet we got Windows Subsystem for Linux instead of Linux on Windows 64 (which would at least have fit the pattern).

  • I'm talking purely about software. Add appropriate nftables rules to the container network and that's it.

  • For me it's not even about better or worse, but about different. For them it's a nice iteration after many years, but for me it is one of the dozens of apps I use irregularly that suddenly behaves and works differently and forces me to relearn things I don't gain anything from. Since each of these different apps gets that treatment every once in a while, I end up having to adjust all the damn time to something else.

    I would really like it if we could go back to functional applications being sold as-is, without forced updates. I do not need constant changes all the time. WinAmp hasn't changed in 20 years and still does exactly what it is supposed to. I could probably spin up an old MS Word 2000 and it would work just like it did 20 years ago.

    Many modern apps, however, change constantly. No wonder they all lean towards subscriptions if they "have to" work on them all the time. But I, as a user, don't even want that. I want to buy the thing that does what it's supposed to, and then I want it to stay that way.

  • Well, a big advantage of containers is that you can isolate them pretty aggressively. So if you run a container that is supposed to serve content on a single HTTP port, expose only that port, mount no unnecessary volumes and run it on a network that blocks all outgoing traffic. Ideally the only thing left is incoming traffic on the one port the service is supposed to serve (see the sketch below).
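
    As a rough sketch of what that host-side lockdown could look like in nftables (the bridge name br-web and port 8080 are just placeholders; the details depend on how your container runtime wires up its networks and on its own firewall rules):

    table inet container_filter {
      chain forward {
        # drop anything that is not explicitly allowed
        type filter hook forward priority 0; policy drop;

        # replies to already accepted connections may pass in both directions
        ct state {established, related} accept

        # new connections only towards the container's single published port
        # ("br-web" and 8080 are placeholders for the actual bridge and port)
        oifname "br-web" tcp dport 8080 accept comment "published service port"

        # everything else, including outgoing connections initiated by the
        # containers, falls through to the drop policy
      }
    }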

  • "btrfs because it was simple"

    Personally I found ZFS far simpler. The userspace tools make more sense to me. I also like that volumes can have a default (relative) mount point attached. So in a recovery scenario, I simply import the zpool with a different base path and have all my volumes ready to go. If I want to recover a btrfs system with multiple subvolumes, I typically need to know exactly which subvolumes exist and where I have to mount them (each individually).

    Also, I got really used to zfsbootmenu.

  • Microsoft really has a knack for that. I also like WoW64, which contains the binaries for running 32-bit applications on 64-bit Windows. For historical reasons, the 64-bit binaries live in System32, obviously.

  • Half off-topic, sorry: if you have some spare time on the weekend, you might want to take a look at nftables. AFAIK iptables these days is just a compatibility layer that uses nftables under the hood, so you are basically using a deprecated frontend.

    nftables is so much nicer to work with. In the end I have my custom rules (which are much saner to define than in iptables) in /etc/nftables.conf, then I have a very simple systemd unit:

    [Unit]
    Description=Restore nftables firewall rules
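    # ordered before network-pre.target so the rules are in place before the network comes up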
    Before=network-pre.target
    
    [Service]
    Type=oneshot
    ExecStart=/usr/sbin/nft -f /etc/nftables.conf
    ExecStop=/usr/sbin/nft flush table inet filter
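    # oneshot + RemainAfterExit keeps the unit active, so stopping it runs the ExecStop flush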
    RemainAfterExit=yes
    
    [Install]
    WantedBy=multi-user.target

    And finally, if I push updates via Ansible, I simply replace the file and run nft -f /etc/nftables.conf (via Ansible, triggered by an on-change handler).

    Edit: oh, and as an example of what the actual rules file looks like:

    #!/usr/bin/nft -f
    
    add table inet filter
    flush table inet filter
    
    table inet filter {
      chain input {
        type filter hook input priority 0;
    
        # allow established/related connections
        ct state {established, related} accept
    
        # early drop of invalid connections
        ct state invalid drop
    
        # allow from loopback
        iifname lo accept
    
        # allow icmp
        ip protocol icmp accept
        ip6 nexthdr icmpv6 accept
    
        # core services
        tcp dport {80, 443} accept comment "allow http(s)"
        udp dport 443 accept comment "allow http3"
    
        # everything else
        reject with icmpx type port-unreachable
      }
    
    }

    And with that I have my IPv4+IPv6 firewall that allows pings and HTTP(S).

  • The shopping list alone is beautifully done. Glad that I could help 🙂

  • There are 2 hard problems in computer science: cache invalidation, naming things, and off-by-1 errors.

    -- Leon Bambrick

  • Regarding your requirement, you might want to take a look at KitchenOwl.

    If you prefer freestyle notes/lists, Joplin can share and sync note collections as well.

  • KDE is one of the main reasons for me to use Linux. I immensely like the performance, silence and battery life of MacBooks. But if I have to work with anything but KDE, it's not worth it for me. The only thing OSX does better than basically any other desktop out there is the ability to drag whole virtual screens between monitors.

  • CryptPad is absolutely fantastic. Easy to host and secure design.

  • I can understand Hellwig's fear, though.

    From what I gather as a bystander, it's apparently common that a refactoring in your module that breaks its API also involves fixing all the call sites, to keep the effort on the person responsible for the change. Now the Rust maintainers say "it's fine; if it breaks, we'll deal with it", which theoretically takes away the cross-language issue for the C maintainer. Practically, I can very well see that this will still cause friction in the future.

    Let's say such a change happens, and at that time there's a bit of time pressure and the capacity of the Rust maintainers is thin for whatever reason. Will they still happily swallow that change, or will they start to discuss whether that change is really necessary? And suddenly the C maintainer has a political discussion on top of the technical issue they wanted to solve.

    As someone who just wants to get shit done, I would definitely have that fear.

    (That doesn't mean the bullet isn't worth swallowing: the change overall can still be worth the friction. I am just saying that I think it's not totally unwarranted for a maintainer to feel affected by this, even though current pledges from the other parties promise otherwise; that stance can change, or at least be challenged, over and over.)