Just double checked, and no, they are very much talking about LLMs. Specifically, they were testing gpt-4o, gemini-1.5, llama-3.1, sonnet-3.5, opus-3, and o1. https://arxiv.org/pdf/2412.04984 The concerns raised in that paper are legit, but they aren't indicative of consciousness or intent.
It's wildly difficult to control the output of a black box, and that's hardly LLMs showing signs of self-preservation. These cries come from people in the industry trying to pretend the models are something they are not, and cannot ever be. I do agree with the sentiment that we should be prepared to pull the plug on them, though, for other reasons.
Coward. It absolutely was, and is, about politics. The misinformation has ALWAYS been political. Countering misinformation is inherently political. As someone whose very existence is considered "political," I politely suggest we drop the apolitical angle and start getting fucking angry and aggressively political. I'm very glad there are organizations out there like his that are doing that work. Just be honest about it.
The title is also weirdly phrased to make it sound like science was wrong. Of course science was wrong. The whole process is built on realizing that our past assumptions were mistaken; every time scientists discover something new, it replaces an old, incorrect assumption. Titles like these are how you end up on the "mainstream media/science is bogus" track.