Michael Hendricks, a professor of neurobiology at McGill, said: “Rich people who are fascinated with these dumb transhumanist ideas” are muddying public understanding of the potential of neurotechnology. “Neuralink is doing legitimate technology development for neuroscience, and then Elon Musk comes along and starts talking about telepathy and stuff.”
Fun article.
Altman, though quieter on the subject, has blogged about the impending “merge” between humans and machines – which he suggested would happen either through genetic engineering or by plugging “an electrode into the brain”.
Occasionally I feel that Altman may be plugged into something that's even dumber and more under the radar than vanilla rationalism.
What if quantum, but magically more achievable at nearly current technology levels? Instead of qubits they have pbits (probabilistic bits, apparently), and this is supposed to help you fit more compute in the same data center.
Also they like to use the word thermodynamic a lot to describe the (proposed) hardware.
As far as I can tell there's absolutely no ideology in the original transformers paper, what a baffling way to describe it.
James Watson was also a cunt, but calling "Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid" one of the founding texts of eugenicist ideology or whatever would be just dumb.
So if a company does want to use LLMs, it is best done using local servers, such as Mac Studios or Nvidia DGX Sparks: relatively low-cost systems with lots of memory and accelerators optimized for ML workloads.
Eh, local LLMs don't really scale: you can't do much better than one user per machine unless usage is really sparse, and buying everyone a top-of-the-line GPU only works if they aren't currently on work laptops and VMs.
Spark-type machines will do better eventually, but for now they're supposedly geared more towards training than inference; it says here that running a 70B model on one returns around one word per second (three tokens), which is a snail's pace.
What's a government backstop, and does it happen often? It sounds like they're asking for a preemptive bail-out.
I checked the rest of Zitron's feed before posting and it's weirder in context:
Interview:
She also hinted at a role for the US government "to backstop the guarantee that allows the financing to happen", but did not elaborate on how this would work.
Later at the jobsite:
I want to clarify my comments earlier today. OpenAI is not seeking a government backstop for our infrastructure commitments. I used the word "backstop" and it muddled the point.
She then proceeds to explain she just meant that the government 'should play its part'.
it often obfuscates from the real problems that exist and are harming people now.
I am firmly on the side of it's possible to pay attention to more than one problem at a time, but the AI doomers are in fact actively downplaying stuff like climate change and even nuclear war, so them trying to suck all the oxygen out of the room is a legitimate problem.
Yudkowsky and his ilk are cranks.
That Yud is the Neil Breen of AI is the best thing ever written about rationalism in a youtube comment.
this seems counterintuitive but... comments are the best; comments that are just the name of the function but longer are the worst. A plain-text summary of a huge chunk of code that I really should have taken the time to break up, instead of writing a novella about it, sits somewhere in the middle.
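To spell out the distinction, here's an invented sketch (function names and comment text are made up, not from any real codebase):

```python
# Worst kind: the function name, but longer -- tells you nothing
# the signature didn't already.
def retry_request(req):
    # retries the request
    ...


# Best kind: explains the *why* that the code itself can't express.
def retry_request_with_context(req):
    # Upstream load balancer drops a small fraction of requests during
    # deploys; one retry with backoff is cheaper than surfacing those
    # as user-facing errors.
    ...
```

The second comment survives a refactor of the body; the first is dead weight the moment the name changes.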
I feel a lot of bad comment practices are downstream of javascript relying on jsdoc to act like a real language.
Managers gonna manage, but having a term for bad code that works that is more palatable than 'amateur hour' isn't inherently bad imo.
Worst I've heard is some company forbidding LINQ in C#, which in Python terms is like forcing you to always use for-loops in place of filter/map/reduce, comprehensions, and things like pandas.groupby.
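In the Python terms the comment uses, the ban looks something like this (toy data, invented for illustration):

```python
orders = [
    {"customer": "a", "total": 120},
    {"customer": "b", "total": 40},
    {"customer": "a", "total": 60},
]

# Declarative style -- the rough Python analogue of what a LINQ ban forbids:
big_totals = [o["total"] for o in orders if o["total"] > 50]

# Mandated loop style -- same result, more ceremony:
big_totals_loop = []
for o in orders:
    if o["total"] > 50:
        big_totals_loop.append(o["total"])

assert big_totals == big_totals_loop  # both are [120, 60]
```

Neither version is wrong; the complaint is about a blanket rule removing the shorter one.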
My impression from reading the stuff posted here is that omarchy is a nothing project that's being aggressively astroturfed so a series of increasingly fashy contributors can gain clout and influence in the foss ecosystem.
Zitron is catching strays in the comments for having too much of a bullying tone (against billionaires and tech writers, I guess) and for being too insistent on his opinion that the whole thing makes no financial sense. It's also lamented that the entire field of ML avoids bsky because it has a huge AI-hostility problem.
Concern trolling notwithstanding, the eigenrobot stuff is worrisome – if not for him specifically, then for how extremely online the ideological core of the administration seems to be, as close to the lunatics running the asylum as you'll get in a modern political setting.