Posts 1 · Comments 710 · Joined 2 yr. ago

  • They won't ban corporate VPNs.

  • Wow, you can tell from the first paragraph that this isn't worth reading. I read it... just out of curiosity...

    For some reason the whole discussion around this Rust/C/Linux/GNU/thing is mostly focused around superficial and irrelevant things like the sexualities and genders of the Rust people

    Err....

    Rust people seem to be focused mostly on identity politics and dividing people into groups that are then supposed to fight each other. As I wrote earlier, I didn't invent the term "Rust people" myself - those people themselves identify as "Rust people", which is not a good thing. I code mostly in C and assembly, but I certainly don't identify as a "C person". I can also write other programming languages, and I would even learn Rust if it wasn't such a horrible Trojan horse that is clearly designed to destroy computing freedom.

    .... yeah. I can confirm he has zero sane points. Let's not give this lunatic any credence.

  • Nw. You're also wrong about endianness. This function would be written exactly the same irrespective of endianness:

    ```c
    uint32_t u16_high_low_to_u32(uint16_t high, uint16_t low) {
      /* Cast before shifting: a uint16_t promotes to int, and shifting a set
         top bit into the sign bit is undefined behaviour. */
      return ((uint32_t)high << 16) | low;
    }
    ```

    That is endian agnostic.
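
    To make that concrete, here's a rough standalone check (just an illustration, reusing the function above): the resulting value is identical on any host, and endianness only shows up if you look at the bytes in memory.

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Same function as above, repeated so this compiles on its own. */
    uint32_t u16_high_low_to_u32(uint16_t high, uint16_t low) {
      return ((uint32_t)high << 16) | low;
    }

    int main(void) {
      uint32_t v = u16_high_low_to_u32(0xDEAD, 0xBEEF);
      assert(v == 0xDEADBEEF);      /* holds on big- and little-endian hosts alike */

      unsigned char bytes[4];
      memcpy(bytes, &v, sizeof v);  /* only here does byte order matter */
      printf("first byte in memory: 0x%02X\n", (unsigned)bytes[0]);  /* 0xEF on LE, 0xDE on BE */
      return 0;
    }
    ```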

  • Yeah it actually is fairly common to have the high word first because humans unfortunately picked the wrong endianness, and integers are written in big endian.

    E.g. what value would you expect from u16x2_to_u32(0x1122, 0x3344)? If you said 0x11223344...

    Still, the rant is stupid because all that needs to happen is to fix the name.

    Honestly it's really surprising that the kernel doesn't already have a library of reliable bit manipulation functions for common stuff like this.
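
    Just as a sketch of the kind of thing I mean - these names are invented for illustration, not existing kernel API:

    ```c
    #include <stdint.h>

    /* Hypothetical helpers: the arguments read in the same order as the hex
       literal, so u16x2_to_u32(0x1122, 0x3344) == 0x11223344. */
    static inline uint32_t u16x2_to_u32(uint16_t high, uint16_t low) {
      return ((uint32_t)high << 16) | low;
    }

    static inline uint32_t u8x4_to_u32(uint8_t b3, uint8_t b2, uint8_t b1, uint8_t b0) {
      return ((uint32_t)b3 << 24) | ((uint32_t)b2 << 16) |
             ((uint32_t)b1 << 8)  | b0;
    }
    ```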

  • What makes you think it's making a pointer? Nobody said anything about that.

  • this is about non-generic code in generic header.

    (a << 16) | b is about the most generic code you can get. How is that remotely RISC-V specific?

  • getting called out will make the person really review their next submission.

    Yeah or they'll say "fuck this" and quit.

    The expectation that somebody always has to be nice to you while you fuckup, is not ideal.

    It's hardly a fuck up. They named a function slightly poorly. As if Linus has never done that.

  • Sort of. He's definitely right that make_u32_from_two_u16 is a terrible function name that obscures the meaning, but I don't think he's right that the best solution is to inline it. C bit shifting is notoriously error-prone - I've seen this bug multiple times:

    ```c
    uint32_t a = ...;
    uint32_t b = ...;
    uint64_t c = (a << 32) | b;  /* bug: a is only 32 bits, so shifting it by 32 is undefined */
    ```

    The real problem is the name isn't very good. E.g. it could be u32_high_low_to_u64 or something (sketch below). That might be clearer, and it's certainly well within kernel-code levels of clarity.

    (Really the naming issue comes from C not having keyword arguments but you can't do anything about that.)
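
    For completeness, a minimal sketch of what I mean - u32_high_low_to_u64 is just the name suggested above, not an existing kernel helper:

    ```c
    #include <stdint.h>

    /* Hypothetical helper; the cast is exactly what the buggy inline version
       forgets - without it, `high << 32` shifts a 32-bit value by its full
       width, which is undefined behaviour. */
    static inline uint64_t u32_high_low_to_u64(uint32_t high, uint32_t low) {
      return ((uint64_t)high << 32) | low;
    }
    ```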

  • Phoronix has notoriously dumb commenters. I don't know why exactly but it's really notable.

    Hackaday too. Again, not sure why. They're both significantly worse than Reddit, HN, Ars or here. Maybe even worse than YouTube comments...

  • Since closed-source is frowned upon in the Linux world

    Indeed, this is a root cause of the problem. But it is a problem. The Linux community needs to get off its high horse and make distribution of binary programs (which may or may not be open source) work properly.

    Snap and Flatpak are definitely a step in the right direction at least.

  • Right but in practice nobody really uses the Windows store, and winget, chocolatey etc. are only used by geeks. For normal users it's always

    1. Download .exe or .msi
    2. Double click it.
    3. Follow the instructions.

    On Linux you have:

    1. apt, dnf, etc. - pretty reliable but only really work from the command line (I have yet to use a "friendly" store frontend that actually works well), and you almost always get an outdated version of the software.
    2. Snap or Flatpak - the idea is there, but again I have yet to actually use one of these successfully. They always have issues with GUI styling (e.g. icons not working), or permissions, or integration or something.
    3. Compiling from source - no Windows software requires this but it's not uncommon on Linux.

    Also it's relatively common for Linux software not to bundle its dependencies. I work for a company that makes commercial Linux software and they bundle Python (yes it's bad), but that depends on libffi and they don't bundle that. So it only works on distros that happen to have the specific ABI version of libffi that it requires. And you have to install it yourself. This is obviously dumb but it's the sort of thing you have to deal with on Linux that is simply never an issue on Windows or Mac.

    For Linux to work well for normal users, it needs:

    1. Reliable hardware support. Especially on laptops - as far as I know it's still basically impossible to get battery life on Linux as good as on Windows/Mac.
    2. A sane software distribution method that actually works reliably.
    3. All settings accessible via the GUI. The terminal is still the default for most things. For example, google how to disable SELinux (something most users should probably do). You have to edit /etc/selinux/config, which is really quite complicated for "normal" users.

    I think those are the main things. I think it would also help if KDE were the "default" desktop environment instead of Gnome. It's much better, with one caveat - they seem incapable of good visual design! Don't get me wrong, it's a lot better than when KDE 5 first came out, but there are still very obvious spacing issues, and Gnome never has those.

  • Well yeah because you only hear the bad things. Everything I hear about the US makes me feel slightly better about living in the UK. How are those school shootings going?

  • Really tempting but you can get such good computers second hand these days. I got a Ryzen 9 3950X (a few years old but 16 core and still awesome), with 128 GB of RAM and a 1TB SSD for £325. No way I'm paying 6 times that for a new machine that's 50% faster at best.

  • lib.rs has a special surprise when you search "twitter"

  • Yeah also calling anything where you raise your hand a "Nazi-like salute" is dumb af. Musk does enough real shit without having to clutch at straws like this.

  • The only rule you need is: preserve history that is worth preserving.

    99% of the time, that means you should squash commits in a PR. Most PRs should be small enough that they don't need more fine-grained history than one commit.

    I will grant a couple of exceptions:

    1. Sometimes you have refactorings where you e.g. move a load of files and then do something else, or do a big search and replace and then fix the errors. In these cases it's nice to have the file moves or search/replace in separate commits to a) make review easier, b) make the significant changes easier to see, and c) let git track file moves reliably.
    2. Sometimes you have a very long lived feature branch that multiple people have worked on for months. That can be worth keeping history for.

    Unfortunately, if you enable merge queues on GitHub it forces you to pick one method for all PRs, which is kind of dumb. We just use squash merges for everything and accept that sometimes it's not the best.

  • Not really because I've never seen a setup that requires every commit in a branch to compile and pass tests. Only the merge commit needs to.

    Also if your PR is so big that it would be painful to bisect within it, then it should be broken into smaller PRs.

  • I don't think anyone disputes that, it's just that nobody has come up with anything better.

    Take-home exercises were potentially a better option (though they definitely have other big downsides), but they aren't a sensible choice in the age of AI.

    Just taking people's word for it is clearly worse.

    Asking to see people's open source code is unfair to people who don't have any.

    The only other option I've heard - which I quite like the sound of but haven't had a chance to try - is to get candidates to do "live debugging" on a real world bug. But I expect that would draw exactly the same criticisms as live coding interviews do.

    What would you do?