
  • I'm starting to think that feds are spamming the "TAX" idea as a distraction. If you genuinely believe that "taxing the rich" will get us Communism then you need to read more. "The Principles of Communism", "Wage Labour and Capital" and "The Communist Manifesto" are good starting points.

  • One time I asked DeepSeek for guidance on a more complex problem involving a linked list, and I wanted to know what a simple implementation of that would look like in practice. The most high-level I go is C, and they claim it knows C, so I asked it to write in C. It literally started writing code like this:

     
        
    void important_function() {
        // important_function code goes here
    }
    
    void black_magic() {
        // Code that performs black magic goes here.
    }
    
      

    I tried at least two more times after that, and while it did actually write code those times, the code it wrote made no sense whatsoever. For example, one time it started writing literal C# in the middle of a C function for some reason. Another time it wrongly assumed I was asking for C++ (despite me explicitly stating otherwise), and the C++ it produced was horrifying and didn't even work. Yet another time it acted like the average redditor, hyperfocused on one very specific part of my prompt, and only responded to that while ignoring my actual request.

    I tried to "massage" it a lot in hopes of getting some useful information out of it, but in the end I found that some random people's Git repos and Stack Exchange questions were way more helpful for my problem. All of my experiences with LLMs have been like this so far, and I've been messing with them for over a year now. People claim they're very useful for writing repetitive or boilerplate code, but I'm never in a position where I'd want or need that. Maybe my use cases are just too niche lol.
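
    For what it's worth, the kind of answer I was hoping for was something along these lines (just a rough sketch of a minimal singly linked list, not the actual problem I was working on):

    #include <stdio.h>
    #include <stdlib.h>
    
    struct node {
        int value;
        struct node *next;
    };
    
    /* Prepend a new node to the list and return the new head. */
    static struct node *push(struct node *head, int value)
    {
        struct node *n = malloc(sizeof(*n));
        if (n == NULL)
            return head; /* allocation failed, keep the old list */
        n->value = value;
        n->next = head;
        return n;
    }
    
    /* Print every value and free the nodes as we go. */
    static void print_and_free(struct node *head)
    {
        while (head != NULL) {
            struct node *next = head->next;
            printf("%d\n", head->value);
            free(head);
            head = next;
        }
    }
    
    int main(void)
    {
        struct node *list = NULL;
        for (int i = 0; i < 5; i++)
            list = push(list, i);
        print_and_free(list); /* prints 4, 3, 2, 1, 0 (newest first) */
        return 0;
    }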

  • A few days ago I was configuring some software that it's difficult to find good documentation for, so I decided to ask DeepSeek. I described what I was trying to do and asked if it could give me an example setup so I could get a better understanding. All it did was confidently make shit up and tell me things that I already knew. And that's only the most recent example. I have yet to find LLMs to be a useful tool.

  • That's only been my experience with software that depends on many different libraries. And it's extra painful when you find out that it needs hyper-specific versions of libraries that are older than the ones you already have installed. Rust is only painless because it just downloads all the right dependencies.

  • Some old software does use 8-bit extended ASCII for special/locale-specific characters. Also there is the UTF-8 hack where the high bit of a byte is used to determine whether it's part of a multi-byte sequence.
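
    To illustrate what I mean (my own sketch, nothing authoritative): in UTF-8 a byte with the top bit clear is plain ASCII, 0b10xxxxxx marks a continuation byte, and 0b11xxxxxx starts a multi-byte sequence.

    #include <stdio.h>
    
    /* Classify a byte according to UTF-8's high-bit scheme. */
    static const char *utf8_byte_kind(unsigned char b)
    {
        if ((b & 0x80) == 0x00)
            return "plain ASCII";
        if ((b & 0xC0) == 0x80)
            return "continuation byte";
        return "lead byte of a multi-byte sequence";
    }
    
    int main(void)
    {
        /* "a", then U+00E4 (2 bytes), then U+20AC (3 bytes) */
        const unsigned char sample[] = { 'a', 0xC3, 0xA4, 0xE2, 0x82, 0xAC, 0 };
        for (int i = 0; sample[i] != 0; i++)
            printf("0x%02X: %s\n", sample[i], utf8_byte_kind(sample[i]));
        return 0;
    }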

  • I saw this coming years ago. But still, reading this makes my blood boil. Wouldn't be surprised if they're also trying to find ways to put large screens on the moon's surface to display ads. Fucking bloodsuckers.

  • I always have to lol whenever I see a lib being very serious about some comically overblown made up shit. It's like a kid trying to tell me they saw a big scary monster in the woods.

  • Sent me a message from 2 different accounts. It's just spam mail.

  • This reads like it was written by some LLM.

    Enable journaling only if needed: tune2fs -O has_journal /dev/sdX

    Don't ever disable journaling if you value your data.
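
    If you want to check whether journaling is actually enabled (it is by default on ext4), something like this should do it (replace sdX with the real device):

    tune2fs -l /dev/sdX | grep has_journal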

    Disk Scheduler Optimization
    Change the I/O scheduler for SSDs: echo noop > /sys/block/sda/queue/scheduler
    For HDDs: echo cfq > /sys/block/sda/queue/scheduler

    Neither of these schedulers exist anymore unless you're running a really ancient Kernel. The "modern" equivalents are none and bfq. Also this doesn't even touch on the many tunables that bfq brings.

    Also changing them like they suggest isn't permanent. You're supposed to set them via udev rules or some init script.
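
    For example, a udev rule that actually persists would look roughly like this (file name and matches are just illustrative, adjust for your devices):

    # /etc/udev/rules.d/60-ioschedulers.rules
    # non-rotational disks (SSD/NVMe): use "none"
    ACTION=="add|change", KERNEL=="sd[a-z]*|nvme[0-9]n[0-9]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"
    # rotational disks (HDD): use "bfq"
    ACTION=="add|change", KERNEL=="sd[a-z]*", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"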

    SSD Optimization
    Enable TRIM: fstrim -v /
    Optimize mount settings: mount -o discard,defaults /dev/sdX /mnt

    None of this changes any settings like they imply.
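
    What would actually persist is something more like enabling the periodic TRIM timer, or putting discard into fstab (the UUID below is a placeholder):

    # periodic TRIM via systemd (what most distros recommend)
    systemctl enable --now fstrim.timer
    
    # or continuous TRIM through /etc/fstab
    UUID=xxxx-xxxx  /  ext4  defaults,discard  0  1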

    Optimized PostgreSQL shared_buffers and work_mem.
    Switched to SSDs, improving query times by 60%.

    No shit. Who would've thought that throwing more/better hardware at stuff would make things faster.

    EDIT: More bullshit that I noticed:

    Use ulimit to prevent resource exhaustion: ulimit -n 100000

    Again this doesn't permanently change the maximum number of open files. It only raises the limit for the shell session that runs the command (and its child processes). What you're actually supposed to do is edit /etc/security/limits.conf and then relog the affected user(s) (or reboot) to apply the new limits.
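
    Something like this in /etc/security/limits.conf is what actually survives a relog (the username is a placeholder):

    # /etc/security/limits.conf
    someuser  soft  nofile  100000
    someuser  hard  nofile  100000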

    Use compressed swap with zswap or zram:
    modprobe zram
    echo 1 > /sys/block/zram0/reset

    This doesn't even make any sense.
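
    For comparison, a bare-bones manual zram swap setup looks roughly like this (size and algorithm are just examples; in practice you'd let zram-generator or an init script handle it):

    modprobe zram
    echo zstd > /sys/block/zram0/comp_algorithm
    echo 4G > /sys/block/zram0/disksize
    mkswap /dev/zram0
    swapon --priority 100 /dev/zram0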

  • I gave up on Element a long time ago. I've been bouncing between Nheko and NeoChat since those are the most mature Matrix clients that aren't bloated webapps.

    What bugs me more is that encryption is still kind of a mess across devices and clients. I also hate how there still isn't any alternative Matrix server that isn't Synapse or Dendrite and isn't abandoned, doesn't suck, is reasonably fast, and supports at least most of the protocol. I can't even really blame people, because I've tried writing my own Matrix server and client before and eventually gave up; the protocol is what I call "a JSON clusterfuck". Why can't the protocol be as simple as IRC? Why does it always have to be JSON over HTTP?
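
    To illustrate the difference (a rough sketch from memory, not the exact wire format): sending "hello" to a channel in IRC is a single line, while in Matrix it's an authenticated HTTP PUT with a JSON body against the client-server API:

    # IRC
    PRIVMSG #room :hello
    
    # Matrix (client-server API, roughly)
    PUT /_matrix/client/v3/rooms/!abc:example.org/send/m.room.message/txn1
    Authorization: Bearer <access_token>
    
    {"msgtype": "m.text", "body": "hello"}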

  • Interesting feature, I had no idea. I just verified this with gcc, and indeed the return register is set to 0 before main returns unless a return value is given explicitly.

     
        
    int main(void)
    {
        int foo = 10;
    }
    
      

    produces:

     
        
    push   %rbp
    mov    %rsp,%rbp
    movl   $0xa,-0x4(%rbp) # Move 10 to stack variable
    mov    $0x0,%eax       # Return 0
    pop    %rbp
    ret
    
      

     
        
    int main(void)
    {
        int foo = 10;
        return foo;
    }
    
      

    produces:

     
        
    push   %rbp
    mov    %rsp,%rbp
    movl   $0xa,-0x4(%rbp) # Move 10 to stack variable
    mov    -0x4(%rbp),%eax # Return foo
    pop    %rbp
    ret
    
      
  • Unless your machine has error correcting memory. Then it will take literally forever.

  • Your CPU has big registers, so why not use them!

     
        
    #include <x86intrin.h>
    #include <stdio.h>
    
    static int increment_one(int input)
    {
        /* Lane 0 holds the input, lane 1 holds the constant 1; the rest are 0. */
        int __attribute__((aligned(32))) result[8];
        __m256i v = _mm256_set_epi32(0, 0, 0, 0, 0, 0, 1, input);
        /* Horizontal add of adjacent lanes, reinterpreted as floats; for these
           tiny (denormal) bit patterns the lane-0 sum carries the bit pattern
           of input + 1. */
        v = (__m256i)_mm256_hadd_ps((__m256)v, (__m256)v);
        /* Spill the whole vector to memory and return lane 0. */
        _mm256_store_si256((__m256i *)result, v);
        return *result;
    }
    
    int main(void)
    {
        int input = 19;
        printf("Input: %d, Incremented output: %d\n", input, increment_one(input));
        return 0;
    }
    
      
  • Imagine defending this guy. I will never understand people who like influencers.

  • That's literally what I'm saying; It's fine as long as there wasn't any unwritten data in the cache when the machine crashes/suddenly loses power. RAID controllers have a battery backed write cache for this reason, because traditional RAID5/6 has the same issue.