• 0 Posts
  • 24 Comments
Joined 1 year ago
Cake day: July 5th, 2023

  • Whoops! When I looked at the second place where the shift value is calculated, I wondered if it would be inverted from the first one, but for some reason I decided that it wouldn’t be. Looking at it again, it’s clear now that (1 - i) = (-i + 1) = ((~i + 1) + 1), making bit 0 the inverse. Then I wondered why there wasn’t more corruption and realized that the author’s compiler must perform postfix increments and decrements immediately after the variable is used, so the initial shift is also inverted. That’s why the character pairs are flipped, but they still decode correctly otherwise. I hope this version works better:

    long main () {
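        /* i, b, and n are globals defined in the original program. */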
        char output;
        unsigned char shift;
        long temp;
        
        if (i < 152) {
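            /* shift is 0 or 7, chosen from the inverted bit 0 of i (see above) */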
            shift = (~i & 1) * 7;
            temp = b[i >> 1] >> shift;
            i++;
            output = (char)(64 & temp);
            output += (char)((n >> (temp & 63)) & main());
            printf("%c", output);
        }
    
        return 63;
    }
    

    EDIT: I just got a chance to compile it and it does work.


  • I first learned about Java in the late 90s and it sounded fantastic. “Write once, run anywhere!” Great!

    After I got past “Hello world!” and other simple text output tutorials, things took a turn for the worse. It seemed like if you wanted to do just about anything beyond producing text output with compile-time data (e.g. graphics, sound, file access), you needed to figure out what platform and which edition/version of Java your program was being run on, so you could import the right libraries and call the right functions with the right parameters. I guess that technically this was still “write once, run anywhere”.

    After that, I learned just enough Java to squeak past a university project that required it, then promptly forgot all of it.

    I feel like Sun was trying to hit multiple moving targets at the same time, and failing to land a solid hit on any of them. They were laser-focused on portable binaries, but without standardized storage or multimedia APIs at a time when even low-powered devices were starting to come with those capabilities. I presume that things are better now, but I’ve never been tempted to have another look. Even just trying to get my machines set up to run other people’s Java programs has been enough to keep me away.


  • Did you read all the way to the end of the article? I did.

    At the very bottom of the piece, I found that the author had already expressed what I wanted to say quite well:

    In my humble opinion, here’s the key takeaway: just write your own fucking constructors! You see all that nonsense? Almost completely avoidable if you had just written your own fucking constructors. Don’t let the compiler figure it out for you. You’re the one in control here.

    The joke here isn’t C++. The joke is people who expect C++ to be as warm, fuzzy, and forgiving as JavaScript.
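
    To make that concrete, here’s a quick sketch of the difference (the struct names and members are my own, not from the article) between leaning on the compiler-generated members and writing them out yourself:

    #include <string>
    #include <utility>

    // Relying on the compiler: whether the members get initialized, and what
    // the copy/move operations do, is decided by implicit rules.
    struct Implicit {
        int id;            // left uninitialized by the implicit default constructor
        std::string name;  // default-constructed to an empty string
    };

    // Writing it out: every special member is either defined or explicitly
    // defaulted, so the behaviour is visible right in the declaration.
    struct Explicit {
        int id = 0;
        std::string name;

        Explicit() = default;
        Explicit(int id_, std::string name_) : id(id_), name(std::move(name_)) {}

        Explicit(const Explicit&) = default;
        Explicit& operator=(const Explicit&) = default;
        Explicit(Explicit&&) noexcept = default;
        Explicit& operator=(Explicit&&) noexcept = default;
        ~Explicit() = default;
    };

    int main() {
        Implicit a;                 // a.id holds an indeterminate value
        Explicit b{42, "answer"};   // does exactly what the constructor says
        return (b.id == 42 && a.name.empty()) ? 0 : 1;
    }

    It’s a few more lines, but nothing about Explicit depends on rules you have to go look up.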


  • That was my first take as well, coming back to C++ in recent years after a long hiatus. But once I really got into it I realized that those pointer types still exist (conceptually) in C, but they’re undeclared and mostly unmanaged by the compiler. The little bit of automagic management that does happen is hidden from the programmer.

    I feel like most of the complex overhead in modern C++ is actually just explaining, in extra detail, what you think is happening. Where a C compiler would make your code work in any way possible, which may or may not be what you intended, a C++ compiler will kick out errors and let you know where you got it wrong. I think it may be a bit like JavaScript vs TypeScript: the issues were always there, we just introduced mechanisms to point them out.

    You’re also mostly free to use those C-style pointers in C++. It’s just generally considered bad practice.
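
    Here’s roughly what I mean, with some made-up names (Buffer, make_buffer, and so on are just for illustration): in C, the fact that a pointer owns what it points to lives only in your head or in a comment, while unique_ptr makes you write that intent into the type:

    #include <cstdio>
    #include <memory>

    struct Buffer {
        explicit Buffer(int n) : size(n) {}
        int size;
    };

    // C-style: the caller just has to know that the returned pointer is owning
    // and must eventually be passed to delete. The compiler keeps no record of that.
    Buffer* make_buffer_c_style(int size) {
        return new Buffer(size);
    }

    // C++-style: ownership is part of the return type, so forgetting to free the
    // buffer (or freeing it twice) is much harder to do by accident.
    std::unique_ptr<Buffer> make_buffer(int size) {
        return std::make_unique<Buffer>(size);
    }

    int main() {
        Buffer* raw = make_buffer_c_style(16);
        std::printf("raw: %d\n", raw->size);
        delete raw;                    // nothing reminds you to do this

        auto owned = make_buffer(32);  // released automatically at end of scope
        std::printf("owned: %d\n", owned->size);
        return 0;
    }

    The raw-pointer version still compiles fine, which is the “mostly free to use them” part; it’s just that nothing stops you from getting it wrong.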



  • As someone who has often been asked for help or advice by other programmers, I know with 100% certainty that I went to university and worked professionally with people who did this, for real.

    “Hey, can you take a look at my code and help me find this bug?”
    (Finding a chunk of code that has a sudden style-shift) “What is this section doing?”
    “Oh that’s doing XYZ.”
    “How does it work?”
    “It calculates XYZ and (does whatever with the result).”
    (Continuing to read and seeing that it actually doesn’t appear to do that) “Yes, but how is it calculating XYZ?”
    “I’m not 100% sure. I found it in the textbook/this ‘teach yourself’ book/on the PQR website.”


  • Most people use the term “Hungarian Notation” to mean only adding an indicator of type to a variable or function name. While this is one of the ways in which it has been used (and actually made sense in certain old environments, although those days are long, long behind us now), it’s not the only way that it can be used.

    We can use the same concept (prepending or appending an indicator from a standard selection) to denote other, more useful categories that the environment won’t keep straight for us, or won’t warn us about in easy-to-understand ways. In my own projects I usually append a single letter to the ends of my variable names to indicate scope, which helps me stay more modular, and also allows me to choose sensible variable names without fear of clashing with something else I’ve forgotten about.
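
    As a sketch (the suffix letters and the Downloader class here are just made up for this example, not any kind of standard), it ends up looking something like this:

    #include <string>
    #include <utility>

    // Hypothetical suffix letters for this example: g = global, m = member,
    // p = parameter, l = local. The category travels with the name, so clashes
    // and shadowing are obvious at a glance.
    static int retryLimitg = 3;

    class Downloader {
    public:
        explicit Downloader(std::string urlp) : urlm(std::move(urlp)) {}

        int fetch(int attemptsp) {
            int attemptsl = 0;                                   // local counter
            while (attemptsl < attemptsp && attemptsl < retryLimitg) {
                ++attemptsl;                                     // one attempt per pass
            }
            return attemptsl;
        }

    private:
        std::string urlm;   // member: never collides with a local named url
    };

    int main() {
        Downloader dl("example-url");
        return dl.fetch(5) > 0 ? 0 : 1;
    }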