• 0 Posts
  • 72 Comments
Joined 1 year ago
Cake day: September 25th, 2023

  • pixelscript@lemmy.mltoOpen Source@lemmy.mlKrita FTW
    7 months ago

    I mean, you’re free to continue using your crescent wrench as a hammer if you find it drives nails for you decently well and you are comfortable using it that way. But it was neither designed with that purpose in mind, nor does anyone expect you to use it that way, so no one will be writing how-to guides on it.


  • A Post-It and a pencil, usually.

    Not because “app bad” or “return to monke” or anything like that. Mostly because if I stow the note in a dedicated app, that somehow just makes me less inclined to write it down and read it later.

    A scrap of paper sticking out like a sore thumb on my desk or burning a hole in my pocket? I’m going to be cognizant of that all day long. But an obscure text file chilling in a disused part of my phone, or a txt file lost in the shuffle of random shit on my PC? Outta sight, outta mind.

    I also find all digital input schemes to be frustratingly less flexible than physical paper. Provided I have a functional writing utensil on hand (not always a given, granted), it is trivial to put anything I want on a note. Write anything I want. Draw diagrams. Underline or strike text. Write some things larger or heavier than others. All of these things are possible in note-taking apps, but they come with the idiosyncrasies of needing to know the selection techniques and menu options to activate them. In this way they’re all death by a thousand tiny annoying cuts for me.

    I even had a smartphone with a built-in stylus for a good long while. It definitely extended what you could do with ease, but it was a far cry from a pencil.

    The only thing a note-taking app can do that paper can’t, to my mind, is yell at you with a loud noise at a pre-programmed time. If I need one of those, I just set an alarm in my clock app.



  • I always hear this statistic on how proper zipper merging increases traffic flow rate over no strategy at all, and I simply do not understand how it helps.

    They keep pointing to how much of the upstream second lane is “wasted”. But like, from a strict perspective of flow rate, is it really?

    The bottleneck restricting flow is the reduced-speed single lane. Put a vehicle counter on it. Assuming no one wastes time getting through whatever funnel point there is, this flow is consistent. The same number of cars passing at the same speed are getting through regardless of whether the zipper point was a few cars back or ten kilometers back. Unless I can hear an explanation of how zipper merging changes this, I remain unconvinced.

    Zipper merging still has unquestionable advantages, of course.

    Putting the merge point as close to the blockage as possible minimizes the time spent in the shared lane. Flow is the same, but the overall time spent in the jam is averaged over all drivers.

    That “wasted lane” does not, as far as I can tell, improve flow. But it does improve storage. If cars are piling up at the choke point, utilizing the full extra lane keeps the pileup from backing up as far down the road, reducing potential domino effects through the road system.

    Zipper merging is fairer to all vehicles by promoting a FIFO processing order. No one in the closed lane gets screwed, everyone gets through in roughly the order they showed up.

    It has lots of advantages, and is clearly the winner, but I fail to see how increased flow is one of them.

    Of course, I’m making a lot of assumptions about perfect behavior of drivers, while this statistic is supposedly real-world empirical data. That suggests there are significant inefficiencies in real-world human driving, and that the zipper merge addresses them somehow. But I can’t fathom what those are or why zipper merging is relevant to them.
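    To put the flow-rate argument above in concrete terms, here is a toy queue model (entirely my own simplification, with made-up numbers and perfectly behaved drivers): a bottleneck that passes a fixed number of cars per tick yields the same measured flow no matter where the merge happens, and only the length of road the queue occupies changes.

```python
# Toy model: demand exceeds a single-lane bottleneck's capacity.
# The merge point doesn't change throughput, only how the waiting
# cars are stored (one long line vs. two shorter lines).

CAPACITY = 1   # cars the bottleneck passes per tick
ARRIVALS = 3   # cars arriving upstream per tick (demand > capacity)
TICKS = 100

def simulate(lanes_for_queue):
    queue = 0    # cars waiting, however far back they stand
    passed = 0
    for _ in range(TICKS):
        queue += ARRIVALS
        served = min(queue, CAPACITY)  # bottleneck is the only limit
        queue -= served
        passed += served
    # Two lanes store the same queue in half the road length.
    backup_length = queue / lanes_for_queue
    return passed, backup_length

early_flow, early_backup = simulate(lanes_for_queue=1)  # everyone merges early
late_flow, late_backup = simulate(lanes_for_queue=2)    # zipper at the choke point

print(early_flow == late_flow)    # identical throughput either way
print(late_backup < early_backup) # but the late-merge backup is half as long
```

    In this model the "flow" numbers come out identical and only the backup length differs, which matches the storage argument rather than the flow argument.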



    For work, it’s usually IDE on the right (my larger screen) and a live build of the thing I’m working with on the left (a laptop screen). Though it varies a lot throughout the day. Primary screen gets the app that needs most scrutiny, small screen gets auxiliary things like passive communication apps or reference materials.

    For home use, where I have two monitors of equal size, it’s usually Discord on one screen and a web browser on the other. Comms on the left and active task on the right.

    I don’t see a use case in my workflow for a third screen, especially not one that is a weird size or is in portrait orientation. But if one was simply bestowed upon me, I’m sure I’d find something to do with it sooner or later. There was a time when I thought two monitors were overrated; I’m sure I can adapt my opinion again for 3+.


    As an American who was raised Lutheran, and who was taught a bunch of Romance-Euro-centric world history in school, I always considered Roman Catholic to be the “default” flavor of Christianity. Protestantism in all of its forms is a hard fork. It’s in the name, even: the Roman Catholic church is what Protestants are “protesting”.

    To unironically “-and Zoidberg” Catholicism out of Christianity while leaving Protestant flavors included feels completely backwards. I’ve never heard anyone do it.

    But if I did, I could only assume it was due to some No True Scotsman bullshit. “Only we practice the correct way. Everyone else isn’t just interpreting it differently, but interpreting it wrong.” Sounds like an Evangelical line of thought to me.



    As a very strong believer in Danny DeVito’s quote, “When I’m dead, just throw me in the trash!”, if any medical party is even remotely interested in dumpster diving for my parts when I’m done with them, they can have 'em. Better than throwing them in a box and taking up land in a cemetery. The less of my remains uselessly taking up space on this planet after death, the better. If I get my way upon my demise, anything they don’t take is going into the incinerator anyway.


  • pixelscript@lemmy.mltoArch Linux@lemmy.mlHow often do you update your system?
    7 months ago

    I just click the little nag icon in my taskbar whenever I notice it.

    Since I’m on Debian Testing that is often daily. But it varies. If I don’t look at that part of my screen that day, w/e.

    I thought I turned on auto update so it would just do it on its own. But it didn’t work for whatever reason. Sigh… Linux moment. There is an answer, surely, but the cost of debugging it outweighs my patience. Typing in my password an extra once(ish) a day is fine, I guess.
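    For what it’s worth, the standard Debian mechanism for this is unattended-upgrades (assuming that’s the piece that misbehaved; I haven’t gone digging), and the usual setup is just:

```shell
# Install Debian's stock auto-update tool.
sudo apt install unattended-upgrades
# Writes /etc/apt/apt.conf.d/20auto-upgrades to enable periodic runs.
sudo dpkg-reconfigure -plow unattended-upgrades
# Dry run with debug output to confirm it would actually do something.
sudo unattended-upgrade --dry-run --debug
```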

    Edit: Just realized this is the Arch community. D’oh.



  • I replied to that thread.

    OP was claiming to be working on a static HTML-serving search engine. They suggested that because it’s just HTML and CSS, and interested parties can use Inspect Element to read the network requests, it constituted “open source”.

    Commenters then got on his case about not open sourcing the server backend. OP defended that choice saying they didn’t want a competitor taking their code and building a company off of it that would “drive [them] out of business”. Uh-huh. So, proprietary software, then. Bye.





    Nvidia and AMD broadly cover the same use cases. Nvidia cards are not intrinsically better to my knowledge; Nvidia simply offers ultra-high-performance cards that AMD doesn’t.

    If you just need nonspecific games to run decently, a card from either brand will do it. If you need to run the most intensive games there are on unbelievable settings, that’s where Nvidia edges ahead.

    ML dabbling may complicate things. Many (most?) tools are written for CUDA, which is a proprietary Nvidia technology. I think AMD offers a counterpart (ROCm, if I recall correctly), but I do not have details. You will need to do more research on this.


  • pixelscript@lemmy.mltoProgrammer Humor@lemmy.mlDamn Linux Users
    8 months ago

    I am going to continue to tell people “just get an AMD card”, but only if they have indicated to me that they are shopping for new parts and haven’t committed to any yet.

    Giving that advice to someone who already has an Nvidia card is just as useless as those StackOverflow answers that suggest you dump your whole project architecture and stuff some big dumb library into your build to solve a simple problem.




  • It’s a huge win, but not the kind of win people reading the statistic with no context (like me) probably thought.

    I’m sure a lot of us looked at “15 percent of desktop PCs in India run Linux” and, regardless of whether it was hasty and irresponsible for us to do so, extrapolated that to, “15 percent of Indian PC users are personally selecting Linux and normalizing its paradigms”.

    But in reality, it sounds more like “15 percent of Indian PC users use Linux to launch Google Chrome”. Which is impressive, but not the specific kind of impressive we wanted.

    It feels a bit like how I imagine, say, a musician feels when they pour their heart and soul into a piece of music, it gets modest to no traction for a while, and then years later a 20-second loop becomes the backing track for a massive TikTok meme, and almost zero of that attention trickles back to their other work.