
Posts: 0 · Comments: 197 · Joined: 10 mo. ago

  • Deleted

    Permanently Deleted

  • Deleted

    Permanently Deleted

  • I am running Wayland on an IVB GT1. Your hardware can't possibly be shittier than this AND still capable of handling modern tasks. Also, Wayland just needs the infrastructure for doing accelerated draws; if your GPU doesn't support that, it won't work with X anyway, unless you're running truly exotic 2D accelerators from the 90s.

  • Deleted

    Permanently Deleted

  • That's complete horseshit. There are like 3 major implementations of Wayland, and 2 exist because the other one wasn't ready at the time. There are other hobby implementations, but they all work together. Just like how different network stacks can all talk TCP to each other and be fine. Nobody calls TCP fragmented because there are different network stacks...

    There are also smaller projects.

    Also, the model of a protocol allows Wayland to be deployed on truly exotic operating systems. As long as the top level is compliant, shit just works.
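The TCP analogy above can be sketched in a few lines. This is a toy illustration, not Wayland code: two independently written endpoints interoperate purely because they both speak the same wire protocol (here, plain TCP over loopback), just as different Wayland compositors and clients interoperate by speaking the same protocol.

```python
import socket
import threading

def server(sock: socket.socket) -> None:
    # One "implementation": accepts a connection and replies
    # according to the shared protocol (echo back uppercased).
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # kernel picks a free port
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=server, args=(listener,))
t.start()

# The "other implementation": knows nothing about the server's
# internals, only the protocol it speaks.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello wayland")
reply = client.recv(1024)
client.close()
t.join()
listener.close()

print(reply.decode())  # prints HELLO WAYLAND
```

Neither side depends on the other's source code; compliance with the protocol is the only contract, which is the point being made about Wayland implementations.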

  • It won't matter in the end. Their shitty Columbus Epoch is coming to an end.

  • Yeah TRVE, making a point of intentionally being dumb usually means you're an insufferable cvnt

  • This is literally, actually a bond villain plot

  • Kekw we'll see

  • X11 has a shitload of unwanted and unused features that your favorite X11 compositor has to actively fight against just to render your GUI.

    I implore you to pick up the X.Org source code and your favorite X11 shitshow's source code and realize why Wayland follows the same paradigms that Apple adopted in 2001 and Microsoft in 2006.

  • Yes, the insane old ways are being phased out for a reason. Sorry that we don't keep the world in a heavily romanticized version of 2003 forever.

  • If you're running Linux, this doesn't affect you in any way.

  • Yes, them not saying anything about why they support that hitlerite scum is quite concerning. I think the CEO is just mad he got caught trying to start up a Nazi bar.

  • "You can't call everything racism!"

    looks inside

    Blatant Racism

    mfw europeans are hitlerite

  • I don't think we should work with scum like DHH and vaxry just because some asshole lib might accuse us of purity tests

    If "not working with people who are maniacs who want you dead" is a purity test I'm dusting off my Inquisition book

  • Because as much as they're ridiculed today by the libcucks of OSS, the FSF was once a formidable force in software. At some point in history, literally the only way to avoid paying absolutely insane manufacturer license fees for things like compilers was using GNU tools.

    If they put their ass into it, they can pull it off tbh

  • Ohhhhhhh lmfao you're right

  • OnlyOffice sees little to no dev time and is insanely behind LO in terms of development and features; please consider using LO for your own sake.

    Guys, this comment is wrong. I was thinking of OpenOffice.

  • Wood is dogshit bro, what the fuck are you smoking

  • Deleted

    Permanently Deleted

  • One of the absolute best uses for LLMs is generating quick summaries of massive amounts of data. It is pretty much the only use case where, if the model doesn't overflow its context and become incoherent immediately [1], it is extremely useful.

    But nooooo, this is luddite.ml; saying anything good about AI gets you burnt at the stake.

    Some of y'all would've lit the fire under Jan Hus if you lived in the 15th century.

    [1] This is more of a concern for local models with smaller parameter counts running quantized. For premier models it's not really much of a concern.
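The summarize-massive-data use case above is usually handled with a map-reduce pattern: split the text into chunks that fit the model's context window, summarize each chunk, then summarize the concatenated summaries. A minimal sketch, where `summarize` is a trivial stand-in (it just takes the first sentence) for a real LLM call:

```python
def summarize(text: str) -> str:
    # Placeholder: a real implementation would call an LLM here.
    return text.split(". ")[0].rstrip(".").strip() + "."

def chunk(text: str, max_chars: int) -> list[str]:
    # Greedily pack whole words into chunks of at most max_chars.
    words = text.split()
    chunks, cur = [], []
    for w in words:
        if cur and sum(len(x) + 1 for x in cur) + len(w) > max_chars:
            chunks.append(" ".join(cur))
            cur = []
        cur.append(w)
    if cur:
        chunks.append(" ".join(cur))
    return chunks

def map_reduce_summary(text: str, max_chars: int = 200) -> str:
    # Map: summarize each chunk. Reduce: summarize the summaries.
    partials = [summarize(c) for c in chunk(text, max_chars)]
    return summarize(" ".join(partials))

doc = "Wayland is a display protocol. It replaces X11 in most desktops. " * 10
print(map_reduce_summary(doc))
```

The chunking step is what keeps each individual call within the context window, which is exactly the overflow concern flagged in [1] for small local models.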

  • Deleted

    Permanently Deleted

  • That is different; it's because you're interacting with token-based models. There has been new research on giving byte-level data to LLMs to solve this issue.

    The numerical calculation aspect of LLMs is a separate problem.

    It would be best to couple an LLM to a tool-calling system for rudimentary numerical calculations. Right now the only way to do that is to cook up a Python script with HF transformers and a finetuned model; I am not aware of any commercial model doing this. (And this is not what Microshit is doing.)
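The tool-calling idea above can be sketched without any model at all. The JSON "model output" here is mocked (a real system would get it from a finetuned model); the point is that arithmetic is delegated to a plain calculator instead of being done in-weights, with the expression evaluated by walking the AST rather than calling `eval()`:

```python
import ast
import json
import operator

# Supported binary operators for the toy calculator tool.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
}

def safe_eval(expr: str) -> float:
    """Evaluate +, -, *, / arithmetic by walking the AST (no eval())."""
    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported syntax")
    return walk(ast.parse(expr, mode="eval"))

# Mocked model response requesting the calculator tool.
model_output = '{"tool": "calculator", "expression": "12 * 7 + 3"}'

call = json.loads(model_output)
if call["tool"] == "calculator":
    result = safe_eval(call["expression"])
    print(result)  # prints 87
```

The dispatcher only trusts the model to pick a tool and format arguments; the math itself is exact, which sidesteps the token-level weakness entirely.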