• 0 Posts
  • 45 Comments
Joined 2 years ago
Cake day: July 2nd, 2023

  • I find it incredibly cringe anyway that all these parties just copy the same slogan. It’s some weird form of international nationalism, where they all mimic whatever the others are doing. Apparently it works very well.

    I guess in Western Europe it’s largely focused on anti-immigration and anti-EU sentiment, but with this movement becoming more and more international, I do notice an uptick in rhetoric targeting sexual minorities and women’s rights, with a lot of anti-science sentiment and complaints about “elitism”/“wokeism” sprinkled in. It’s very scary. I’m happy that we don’t have a winner-takes-all political system in my country, as things are bad enough as they are right now.


  • Some features/plugins can be quite taxing on the system and in extreme cases it can slow the editor down to the point of being unusable. I’m a happy Neovim user with a LazyVim setup, but I experience this extreme slowdown for some JSON files and I haven’t looked into it yet to see what causes it.

    You can let your editor do the same compute-intensive or memory-hogging things that a GUI editor does. The fact that it runs in your terminal doesn’t make it lightweight by definition.


  • oktoberpaard@feddit.nl to ADHD@lemmy.world: Heya guys.
    2 months ago

    A general remark rather than an answer to your question: with vitamins, more does not equal better, and too much vitamin B6 can lead to nerve problems. I would only supplement if you have a good reason to believe that you’re deficient. If you insist, at least try to stick to normal doses.







  • oktoberpaard@feddit.nl to esp@lemm.ee: Futuro de la comunidad
    3 months ago

    It’s something very curious that I’ve wondered about many times. There are tons of Spanish speakers, but where are they? Especially the ones from Spain. I’m not Spanish myself, so I don’t know where to find Spaniards on the internet. I know there are some forums that are exclusively in Spanish, like foro coches and menéame, but I’m not that fond of those. Is everyone there? Even on Reddit there are fewer than I expected.


  • Millennial here. I’ve been consuming Reddit, and now Lemmy, almost exclusively on my phone, and for me it’s card view all the way. Often the graphic content is more important than the title, and opening posts only to find out they’re not funny or interesting feels like a waste of time. Only when I find a post interesting enough that I want to comment or see the comments do I open it. Instances or communities that I don’t like go on the blocklist.

    If I really need to use Reddit, I open old Reddit in the browser with an extension that turns it into a mobile-friendly site with card view. The new design has always felt sluggish and bloated to me, but not because of the card view.


  • Exactly. Same as with sleep data. When it says that you’ve been awake 3 times last night, it doesn’t really mean much. That kind of data shouldn’t be presented as being accurate. However, it could still be made accessible behind a button or menu option. For example, it might show you that the signal is intermittent because your watch band isn’t tight enough, or reveal other anomalies. And of course you’re right: they won’t tell you that the data is of low quality, and as a user you don’t necessarily know that, so in that sense it can be very misleading.










  • Sure, but I’m just playing around with small quantized models on my laptop with integrated graphics, and the RAM was insanely cheap. It just interests me what LLMs that can run on such hardware are capable of. For example, Llama 3.2 3B only needs about 3.5 GB of RAM and runs at about 10 tokens per second, and while it’s in no way comparable to the LLMs that I use for my day-to-day tasks, it doesn’t seem to be that bad. Llama 3.1 8B runs at about half that speed, which is a bit slow, but still bearable. Anything bigger than that is too slow to be useful, but still interesting to try for comparison.

    I’ve got an old desktop with a pretty decent GPU in it with 24 GB of VRAM, but it’s collecting dust. It’s noisy and power hungry (older generation dual socket Intel Xeon) and still incapable of running large LLMs without additional GPUs. Even if it were capable, I wouldn’t want it to be turned on all the time due to the noise and heat in my home office, so I’ve not even tried running anything on it yet.
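    As a rough sanity check on the numbers above, the RAM footprint of a quantized model can be estimated from the parameter count and the bits per weight. This is only a back-of-the-envelope sketch: the 0.5 GB overhead constant for the KV cache and runtime buffers is an assumption, and real quantization formats add per-block metadata on top.

    ```python
    def estimate_ram_gb(n_params: float, bits_per_weight: float,
                        overhead_gb: float = 0.5) -> float:
        """Rough RAM estimate for a quantized LLM: weight storage plus a
        fixed, assumed overhead for the KV cache and runtime buffers."""
        weight_bytes = n_params * bits_per_weight / 8
        return weight_bytes / 1e9 + overhead_gb

    # A 3B-parameter model at 8 bits per weight: 3.0 GB of weights plus the
    # assumed 0.5 GB overhead, in the same ballpark as the ~3.5 GB observed.
    print(round(estimate_ram_gb(3e9, 8), 1))  # → 3.5
    ```

    The same arithmetic explains why an 8B model at a lower 4-bit quantization still lands around 4.5 GB, nudging past what comfortably fits next to the OS on an 8 GB machine.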