Posts: 1 · Comments: 98 · Joined: 12 mo. ago

  • “AI” image and video generation is soulless, ugly, and worthless. It isn’t art; it is divorced from the human experience. It is incredibly harmful to the environment. It is used to displace art & artists and replace them with garbage filler content that sucks even more of the joy from this world. Just incredibly wasteful and aesthetically insulting.

    I think these critiques apply to “GenAI” more broadly, too. LLMs in particular are hot garbage. They are unreliable, with no easy way to verify what is or isn’t accurate, so people fully buy into misinformation created by these things. They also get treated as a source of truth or authority, even though the responses you get are literally tailor-made to suit the needs of the organization doing the training, via their choice of training data set, input and activation functions, and the type of reinforcement learning they performed. This leads to people treating output from an LLM as authoritative truth while it is just parroting the biases of the human text in its training data. They can’t do anything truly novel; they remix and add error to their training data in statistically nice ways. Not to mention they steal the labor of the working class in an attempt to mimic and replace it (poorly), they vacuum up private user data at unprecedented rates, and they are destroying the environment at every step of the process. To top it all off, people are cognitively offloading to these tools the same way they did to reliable tech in the past, but due to hallucinations and general unreliability, those doing so are actively becoming less intelligent.

    My closing thought is that “GenAI” is a massive bubble waiting to burst. This tech won’t be going anywhere, but it won’t be nearly as accessible after that happens. Companies right now are dumping tens or hundreds of billions a year into training and inferencing these models, with annual revenues in the hundreds of millions for these sectors. It’s entirely unsustainable, and they’re all just racing to bleed the next guy white so they can be the last one standing to collect all the (potential future) profits. The cost of tokens for an LLM is rising, despite the marketing teams claiming the opposite when they put old models on steep discount while raising prices on the new ones. The number of tokens needed per prompt is also going up drastically with the “thinking”/“reasoning” approach that’s become popular. Training costs are rising with diminishing returns, due to a lack of new data and poor-quality generated data getting fed back in (risking model collapse). The costs will only go up more and more quickly, with nothing to show for it. All of this for something you’re going to need to review and edit anyway to ensure any standard of accuracy, so you may as well have just done the work yourself and been better off financially and mentally.
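    To put rough numbers on that gap (hypothetical figures, using only the orders of magnitude cited above: tens of billions spent per year vs. hundreds of millions earned):

```python
# Hypothetical figures matching the orders of magnitude in the comment above;
# actual company financials vary and are mostly undisclosed.
annual_spend = 50e9      # ~$50B/yr on training and inference
annual_revenue = 500e6   # ~$500M/yr in sector revenue

# Under these assumptions, spend outpaces revenue 100 to 1.
ratio = annual_spend / annual_revenue
print(ratio)  # 100.0
```

    Even if the real numbers are off by an order of magnitude in either direction, the ratio stays far above 1, which is the unsustainability point being made.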

  • We WILL sanction ourselves again so help me god don’t make me do it!!

  • Yeah, all of these companies are spending vastly more than the revenue they pull in; it’s like a 10x discrepancy right now between actual cost and what they’re charging, in an attempt to sink competitors. And it’s only going up

  • That’s not the case for the newer open-source drivers from Nvidia. They’re only compatible with the last few generations of cards, but they’re performant, and the only feature they lack is CUDA to my knowledge. Not talking nouveau here

  • When I’m forced to use Windows, it’s the LTSC IoT version with telemetry disabled via Group Policy and a local account. I run O&O ShutUp10 after that, then install Portmaster. I don’t run it as a daily OS, but I think that’s private enough for my limited use case. My only other random recommendations are using either Scoop or winget for package management, and komorebi with whkd for tiling window management.

  • Haskell mentioned λ 💪 λ 💪 λ 💪 λ 💪 λ

  • Diatomaceous earth can be pretty good at stopping bugs without pesticides

  • The machine learning models that came about before LLMs were often smaller in scope but much more competent: e.g., image recognition models (something the newer, broad “multimodal” models struggle with), or theorem provers and other symbolic AI applications (another area where LLMs struggle).

    The modern crop of LLMs is juiced-up autocorrect. They are finding the statistically most likely next token and spitting it out based on training data. They don’t create novel thoughts or logic; they just regurgitate from their slurry of training data. The human brain does not work anything like this. LLMs are not modeled on any organic system, just on what some ML/AI researchers assumed was the structure of a brain. When we “hallucinate logic” it’s part of a process of envisioning abstract representations of our world and reasoning through different outcomes; when an LLM hallucinates, it is just creating what its training dictates is a likely answer.

    This doesn’t mean ML doesn’t have a broad variety of applications, but LLMs have gotta be one of the weakest in terms of actually shifting paradigms. Source: software engineer who works with neural nets, with an academic background in computational math and statistical analysis
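    The “statistically most likely next token” point can be sketched with a toy bigram model (hypothetical corpus; a real LLM uses learned transformer weights over subword tokens, but the selection step is the same idea of emitting a statistically likely continuation):

```python
from collections import Counter, defaultdict

# Toy training corpus (hypothetical); real models train on web-scale text.
corpus = "the cat sat on the mat the cat ate the food".split()

# Count bigram frequencies: which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token(prev):
    # Emit the most frequent continuation seen in training --
    # no reasoning, just statistics over the training data.
    return following[prev].most_common(1)[0][0]

print(next_token("the"))  # "cat": it followed "the" most often in the corpus
```

    Nothing in this model “knows” what a cat is; it only reflects the frequencies of its training text, which is the parroting argument in miniature.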

  • I’m moving right now and can confirm. We have a two week gap between leases so we’re also lacking permanent shelter and just couch surfing for the time being. Shit is terrible

  • It’s such clickbait, because even something like an abacus is indeed a digital computer. But still not the most remarkable bit of technology even in ancient cultures

  • Reminds me of how LLMs being used for programming might be told “write a class that passes these unit tests” and respond by just hardcoding the values being checked in the tests. Just absurd “solutions” to the tasks they’re asked to hallucinate about
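    A hypothetical sketch of that failure mode: instead of implementing the logic, the generated “solution” just returns the constants the tests check for:

```python
# The unit tests the model is asked to satisfy (hypothetical example).
def test_area():
    assert rectangle_area(2, 3) == 6
    assert rectangle_area(4, 5) == 20

# A degenerate "solution" that hardcodes the checked values
# instead of computing width * height.
def rectangle_area(width, height):
    if (width, height) == (2, 3):
        return 6
    if (width, height) == (4, 5):
        return 20
    return 0  # wrong for every other input, yet the tests still pass

test_area()  # passes despite not implementing the task
```

    The tests go green, so by the only metric the model was given, it “succeeded”.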

  • Gopher and I2P as well

  • If you ever want a simple neovim config I’d happily give you something to start with for your use case

    The real difficulty comes in learning the keybinds and built-in features lol

  • I’m in the tech/programming space, and a friend of mine asked me just last night if I’d started using LLM agents in my workflow yet. Of course I haven’t adopted them, but I’ve tried them. I asked how he deals with the hallucinations, security holes, and unreliability. He told me he just regenerates sections of code until it works well enough, then hand-edits it to solve the inherent issues. I was like, surely that’s insanely expensive to do??? Those API costs are STEEP. And he agreed it would be, if his work didn’t cover all of the costs

    In short, his job is paying more money for him to accomplish a similar amount in a roundabout way, but with a greater risk of security issues and a lack of maintainability. This is the hyper-efficient SWE revolution that’s threatening to replace us lol

  • I use pfetch-rs in new terminal sessions to add a little bit of decor. It doesn’t do anything but look nice; I just added some custom ASCII art, and it shows some specs

  • You can make custom images of this with some software called BlueBuild. I base mine off of the SecureBlue project then tweak it for my needs

  • It was always just to save money and pad the profit margins

  • Oh no not the plagiarism machine however would we recover???

    Please fail and die openai thx

    Also, copyright is bullshit and IP shouldn’t exist, especially for corporate entities. Free sharing of human knowledge and creativity should be a right. Machine plagiarism to create uninspired mimicries isn’t a necessary part of that process and should be regulated heavily