So if a company does want to use LLMs, it's best done on local servers, such as Mac Studios or Nvidia DGX Sparks: relatively low-cost systems with lots of memory and accelerators optimized for ML workloads.
Eh, local LLMs don't really scale: you can't do much better than one person per computer unless usage is really sparse, and buying everyone a top-of-the-line GPU only works if they aren't currently on work laptops and VMs.
Sparks-type machines will do better eventually, but for now they're supposedly geared more towards training than inference; it says here that running a 70b model on one returns around one word (three tokens) per second, which is a snail's pace.
What's a government backstop, and does it happen often? It sounds like they're asking for a preemptive bail-out.
I checked the rest of Zitron's feed before posting and it's weirder in context:
Interview:
She also hinted at a role for the US government "to backstop the guarantee that allows the financing to happen", but did not elaborate on how this would work.
Later at the jobsite:
I want to clarify my comments earlier today. OpenAI is not seeking a government backstop for our infrastructure commitments. I used the word "backstop" and it muddled the point.
She then proceeds to explain she just meant that the government 'should play its part'.
it often distracts from the real problems that exist and are harming people now.
I'm firmly on the side of "it's possible to pay attention to more than one problem at a time", but the AI doomers are in fact actively downplaying things like climate change and even nuclear war, so their trying to suck all the oxygen out of the room is a legitimate problem.
Yudkowsky and his ilk are cranks.
That Yud is the Neil Breen of AI is the best thing ever written about rationalism in a youtube comment.
This seems counterintuitive, but... comments are the best; "the name of the function, but longer" comments are the worst. A plain-text summary of a huge chunk of code that I really should have taken the time to break up, instead of writing a novella about it, sits somewhere in the middle.
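A toy sketch of the contrast, with made-up function names purely for illustration (neither is from any codebase discussed here):

```python
# The anti-pattern: a comment that is just the name of the function, but longer.
def get_user_name(user):
    # gets the name of the user  <- adds nothing the signature didn't already say
    return user["name"]


# A comment that earns its place: it records intent and context
# that the code alone can't express.
def normalize_user_name(user):
    # Names come from a legacy import with inconsistent casing and padding,
    # so canonicalize before comparing against the directory.
    return user["name"].strip().lower()


print(get_user_name({"name": "Bob"}))
print(normalize_user_name({"name": "  Alice "}))
```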
I feel a lot of bad comment practices are downstream of javascript relying on jsdoc to act like a real language.
Managers gonna manage, but having a term for bad code that works that is more palatable than 'amateur hour' isn't inherently bad imo.
The worst I've heard of is some company forbidding LINQ in C#, which in Python terms is forcing you to always use for-loops in place of filter/map/reduce, comprehensions, and other stuff like pandas.groupby.
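To make the Python analogy concrete, here's the same trivial filter written both ways, over made-up example data; banning the second style in favor of the first is roughly what a LINQ ban amounts to:

```python
# Hypothetical data for illustration only.
orders = [
    {"customer": "a", "total": 120},
    {"customer": "b", "total": 40},
    {"customer": "a", "total": 75},
]

# What the ban forces: explicit loops everywhere.
big_totals_loop = []
for order in orders:
    if order["total"] > 50:
        big_totals_loop.append(order["total"])

# What it forbids (in spirit): a comprehension doing the same thing in one line.
big_totals_comp = [o["total"] for o in orders if o["total"] > 50]

print(big_totals_loop)  # [120, 75]
print(big_totals_comp)  # [120, 75]
```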
My impression from reading the stuff posted here is that omarchy is a nothing project that's being aggressively astroturfed so a series of increasingly fashy contributors can gain clout and influence in the foss ecosystem.
Zitron is catching strays in the comments for having too much of a bullying tone (against billionaires and tech writers, I guess) and for being too insistent on his opinion that the whole thing makes no financial sense. It's also lamented that the entire ML field avoids bsky because it has a huge AI-hostility problem.
Concern trolling notwithstanding, the eigenrobot stuff is worrisome, if not for him specifically then for how extremely online the ideological core of the administration seems to be: about as close to the lunatics running the asylum as you'll get in a modern political setting.
I feel that if you are a USian who thinks that accepting US government contracts has become morally unacceptable, then fretting over Swedish audio streaming companies is a waste of your time.
edit: free market solutionism as a response to having a dollarstore sturmabteilung running the streets in the USA just rubs me the wrong way. Sorry if the original post reads a bit coy, I just feel it would be incredibly cringe of me to make overt recommendations on how to handle things from the relative safety of living in a first world country on the other side of the world.
Come on, the AI wrote code that published his wallet key and then he straight up tweeted it in a screenshot, it's objectively funny/harrowing.
Also the thing with AI tooling isn't so much that it isn't used wisely as it is that you might get several constructive and helpful outputs followed by a very convincingly correct looking one that is in fact utterly catastrophic.
What else though, is he being secretly funded by the cabal to make convolutional neural networks great again?
That he found his niche and is trying to make the most of it seems by far the most parsimonious explanation, and the heaps of manure he unloads weekly on both the LLM business and its practices surely can't be helping DoNotPay's bottom line.