
peeonyou [he/him]

@peeonyou@hexbear.net

Posts
4
Comments
1229
Joined
2 yr. ago

california comrade rooting for the orcas killin yachts

  • sponsorblock added

  • ok i admit it, i did not

  • I see. Well, they're two entirely different architectures, so that's not really going to happen. Because of its design, ARM will likely always use less power unless there's some heavy investment in wizardry on the x64 side.

    It's not new, however. Back in the 90s my dad used to say RISC (MIPS/ARM) was going to trounce CISC (x86) due to the inbuilt efficiency of the architecture, but there are many other factors that determine market winners.

  • do what? have a paywall for youtube?

  • caught up in what way?

  • welp.. lucky guy i just sold my 4090 to for $2k. prices of those are gonna shoot up too i imagine

  • :stop:

  • time

  • HexBear AI

    /s

  • I think the only way this would really be a workable solution for most cases would be to use it in a RAID-type setup with striping and parity, or something similar. That's just horribly slow, and the read-back isn't a whole ton better either.

  • 360TB written at a maximum speed of 4 MB/s would take almost 3 years to complete
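
    Quick back-of-the-envelope check of that figure (a sketch using shell arithmetic, assuming 1 TB = 10^12 bytes and 1 MB = 10^6 bytes):

    # time to write 360 TB at a sustained 4 MB/s, expressed in whole days
    echo $(( 360 * 10**12 / (4 * 10**6) / 86400 ))   # -> 1041 days, just under 3 years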

  • damn.. never heard anyone pronounce it like that before.. lucky them that they've not been subjected to nike ads their entire lives I guess

  • no no, they say "NIGH-ck" instead of "NIGH-key", which is the correct pronunciation.

  • and things they can lock you up for the rest of your life for and then give themselves pretty plaques and ask for more gubmint money to keep up the good work

  • this has been the case with nearly every single 'terror plot' since 9/11/2001, and likely many prior.. IIRC the original WTC bombing had some FBI involvement as well.

  • is this entire video AI-made? lots of weird errors on the graphs, the images are definitely AI, and i don't think i've ever heard anyone pronounce nike as "nigh-k" before

  • many people agree, tremendous ASMR

  • i wasn't able to get llama.cpp to run it even after pulling latest master and rebuilding; it failed with an unknown-architecture error. chatgpt told me to pull a specific PR branch and rebuild:

    git fetch origin pull/18058/head:nemotron3
    git checkout nemotron3

    cmake -S . -B build -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON -DLLAMA_CURL=ON
    cmake --build build --config Release -j --clean-first --target llama-server

    and that did the trick

    Also, this thing is flying. I'm using Q4_K_M on my 5090 and i'm getting 220 t/s on average.
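
    For reference, a minimal sketch of how one might launch the freshly built server; the model filename and port here are placeholders rather than anything from the post (-ngl offloads layers to the GPU, -c sets the context size):

    # hypothetical model path; point -m at your own Q4_K_M gguf
    ./build/bin/llama-server -m ./models/nemotron3-Q4_K_M.gguf -ngl 99 -c 8192 --port 8080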

  • i bought a top of the line amd card a few years back and it was a goddamned nightmare trying to get it to work in linux.. i never did get it fully functioning before i returned it and bought an nvidia card, which worked right out of the box.

    anyway, congrats on using linux! i've also been using mint for many many years and it works well enough that i don't want to mess with other distros.

  • Shouldn't the word be totalitarian instead of authoritarian?

  • chat @hexbear.net

    what is this tagline referencing?

  • music @hexbear.net

    TILT - Libel

  • news @hexbear.net

    In one of the richest cities, of one of the richest states, in one of the richest countries in the world, they will now charge $427 for calls to 911

    www.mercurynews.com/2025/03/26/san-jose-institute-first-responder-fee-medical-calls/
  • United States | News & Politics @lemmy.ml

    It's Harrisover