Just seen a clip of aronofsky’s genai revolutionary war thing and it is incredibly bad. Just… every detail is shit. The uncanny valley intrudes in ways I hadn’t previously imagined. Even setting aside the simulated flesh golems, one of whom seems to be wearing anthony hopkins’ skin as a clumsy disguise, the framing and pacing feel like the model was trained on endless adverts and corporate talking-head videos, and either it was impossible to edit, or none of the crew have any idea what even mediocre films look like.
I also hadn’t appreciated before that genai lip sync/dubbing was just embarrassing. I think I’ve only seen a couple of very short genai video clips before, and the most recent at least 6 months ago, but this just seems straight up broken. Have the people funding this stuff ever looked at what is being generated?
“AI blunder in Aurskog-Høland [Norway] – children received water bills”
The sources linked are all in norwegian, so you’ll have to translate them yourself if you’re interested, but Patricia’s summary seems reasonable. The government authority in question had to hire extra people to undo the mess that the ai system caused. There’s a commercial vendor involved somewhere, but if they were named I didn’t spot it.
I have mixed feelings about this one: The Enclosure feedback loop (or how LLMs sabotage existing programming practices by privatizing a public good).
The author is right that stack overflow has basically shrivelled up and died, and that llm vendors are trying to replace it with private sources of data they'll never freely share with the rest of us, but I don’t think that chatbot dev sessions are in any way “high quality data”. The number of occasions when a chatbot-user actually introduces genuinely useful and novel information will be low, and the ability of chatbot companies to even detect that circumstance will be lower still. It isn’t enclosing valuable commons, it is squirting sealant around all the doors so the automated fart-huffing system and its audience can’t get any fresh air.
Techbro leaves suspicious package unattended at davos, gets carted off by the police, swiss security folk mock his technical ignorance.
In the morning, Heyneman was asked to explain his device to a Swiss government technical expert named Chris (he didn’t catch the last name).
“I give him the same pitch that I gave all the business people in Davos,” Heyneman said. When Chris drilled him on his code, Heyneman admitted that he had used Cursor and Claude Code to vibe code the entire thing. Chris then took it upon himself to explain the code to Heyneman, line by line.
I was of the opinion that worrying about getting a radio license because it would put your name on a government list was a bit pointless… amateur radio is largely last-century technology, there are so many better ways to communicate with spies these days, actual spies with radios wouldn’t be advertising them, and governments and militaries surely have better things to do than care about your retro hobby.
Propagandists presented the Belarusian Federation of Radioamateurs and Radiosportsmen (BFRR) as nothing more than a front for a “massive spy network” designed to “pump state secrets from the air.” While these individuals were singled out for public shaming, we do not know the true scale of this operation. Propagandists claim that over fifty people have already been detained and more than five hundred units of radio equipment have been seized.
The charges they face are staggering. These men have been indicted for High Treason and Espionage. Under the Belarusian Criminal Code, these charges carry sentences of life imprisonment or even the death penalty.
I’ve not been able to verify this yet, but once again I find myself grossly underestimating just how petty and stupid a state can be.
Ahh. I’d seen a bunch of people pointedly avoiding things he’d worked on and was working with, but no one actually said why so I was assuming it was llm related. No such luck, I guess… the old missing stair strikes again.
Ahh, i knew there was a recent catastrophe involving people handing credentials and confidential information to third parties without a single thought or qualm, but couldn’t for the life of me remember what it was. Thanks!
So, there’s a kind of security investigation called “dorking”, where you use handy public search tools to find particularly careless software misconfigurations that get indexed by eg. google. One tool for that sort of searching is github code search.
Turns out that a) claude chat logs get automatically saved to a file under .claude/logs and b) quite a lot of people don’t actually check what they’re adding to source control, and you can actually search github for that sort of thing with a path: code search query (though you probably need to be signed in to github first, it isn’t completely open).
I didn’t find anything even remotely interesting (and watching people’s private project manager fantasy roleplay isn’t something I enjoy), but viss says they’ve found credentials, which is fun.
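On the defensive side, you can check your own repos before github’s indexer does it for you. A minimal sketch, assuming the `.claude/logs` path mentioned above (adjust if your client writes logs elsewhere):

```shell
# List any claude log files already tracked by git in this repo
# (prints nothing if you're clean):
git ls-files .claude/logs

# ...and keep future ones out of source control:
echo '.claude/logs/' >> .gitignore
```

Ignoring just the logs subdirectory (rather than all of `.claude/`) leaves room for any settings files you do want committed.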
Given that openai has now set a precedent for removing the public-benefit figleaf from a pbc, I’m assuming everyone will be doing it now and it’ll just become another part of the regular grift.
Armin Ronacher, who is an experienced software dev with a fair number of open (and less open) source projects under his belt, was up until fairly recently a keen user of llm coding tools. (he’s also the founder of “earendil”, a pro-ai software pbc, and any company with a name from tolkien’s legendarium deserves suspicion these days)
He’s not using psychosis in the sense of people who have actually developed serious mental health issues as a result of chatbot use, but of software developers who seem to have lost touch with what they were originally trying to do and just kind of roll around in the slop, mistaking it for productivity.
When Peter first got me hooked on Claude, I did not sleep. I spent two months excessively prompting the thing and wasting tokens. I ended up building and building and creating a ton of tools I did not end up using much. “You can just do things” was what was on my mind all the time but it took quite a bit longer to realize that just because you can, you might not want to. It became so easy to build something and in comparison it became much harder to actually use it or polish it. Quite a few of the tools I built I felt really great about, just to realize that I did not actually use them or they did not end up working as I thought they would.
You feel productive, you feel like everything is amazing, and if you hang out just with people that are into that stuff too, without any checks, you go deeper and deeper into the belief that this all makes perfect sense. You can build entire projects without any real reality check. But it’s decoupled from any external validation. For as long as nobody looks under the hood, you’re good. But when an outsider first pokes at it, it looks pretty crazy.
He’s still pro-ai, and seems to be vaguely hoping that improvements in tooling and dev culture will help stem the tide of worthless slop prs that are drowning every large open source project out there, but he has no actual idea if any of that can or will happen (which it won’t, of course, but faith takes a while to fade).
As always though, the first step is to realise you have a problem.
I’ve thought about jolla, but I’m not particularly interested right now. Their security is unlikely to be anything like as good as ios or graphene, software availability is poor, the hardware quality appears to be ok at best, and so on.
I’m considering various alternative devices, but if it’s effectively a “vanilla smartphone only slightly worse” it doesn’t really appeal to me. If they’d built a modern n900, on the other hand…
This is fun: a zero-click android exploit that allows arbitrary code execution and privilege escalation. Y’know, the worst kind. How did we get here?
Over the past few years, several AI-powered features have been added to mobile phones that allow users to better search and understand their messages. One effect of this change is increased 0-click attack surface, as efficient analysis often requires message media to be decoded before the message is opened by the user. One such feature is audio transcription. Incoming SMS and RCS audio attachments received by Google Messages are now automatically decoded with no user interaction. As a result, audio decoders are now in the 0-click attack surface of most Android phones.
Every now and then, I think about going back to android, and then I read stuff like this. FWIW, iOS had a closely related bug, but compiled the offending code with bounds checks, so it wasn’t usefully exploitable (and required some user interaction, too).
Anyway, if you do android, maybe check if automatic transcription is enabled.
Blacksky has delivered on bluesky’s promise of federation by setting up their own app view, creating a complete and independent third party implementation.
Mcc has an interesting thread on mastodon (https://mastodon.social/@mcc/115918042095581428) which asks a bunch of questions about what the actual consequences of this might be, and no-one really seems to know, but no-one has much faith in the engineering or moderation chops of the bluesky team.
It looks like bluesky is somewhat vulnerable to rich trolls, because the main barrier to entry is cost… blacksky has a budget of maybe 80,000 usd/year (https://opencollective.com/blacksky), which is well within the reach of a whole bunch of people prepared to spend money to be egregious assholes, especially if they already have access to suitable talent and equipment. It’ll be bleakly interesting to see who tries this first.
Turns out that if you stuff the right shaped bytes into png image tEXt chunks (which don’t get compressed), the base64 encoded form of that image has sections that look like human readable text.
What are the implications?
Nothing! This was just for fun after a discussion with a colleague whether it might be even possible to make base64 blobs look readable. There's certainly no poorly coded systems out there which might be hooked up to read emails or webpages and interpret any text they see as information.
No siree I'm sure everyone is keeping the attachments and the content well and truly isolated from each other and this couldn't possibly do anything other than be a fun proof of concept and excuse for me to play with wasm.
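A sketch of how the trick works (my reconstruction, not the author’s code, and the target string is an arbitrary example): base64 turns every 3 input bytes into 4 output characters, so if you base64-decode the text you want to appear, stuff those bytes into an uncompressed tEXt chunk, and arrange the chunk so the payload lands at a file offset that’s a multiple of 3, the base64 of the whole file contains your text verbatim.

```python
import base64, struct, zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    # PNG chunk: 4-byte big-endian length, type, data, CRC over type+data.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

# Target text must use only the base64 alphabet and have length a multiple
# of 4, so decoding and re-encoding round-trips it exactly.
target = "SendHelpNow1"
payload = base64.b64decode(target)  # 9 bytes that encode back to `target`

sig  = b"\x89PNG\r\n\x1a\n"                                         # 8 bytes
ihdr = chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)) # 25 bytes
# File offset is now 33. The tEXt chunk header (length + type) adds 8, and a
# 3-byte keyword plus its NUL separator adds 4, so the payload sits at
# offset 45 -- a multiple of 3, keeping base64's 3-byte framing aligned.
text = chunk(b"tEXt", b"pad\x00" + payload)
idat = chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # one grey 1x1 pixel
iend = chunk(b"IEND", b"")

png = sig + ihdr + text + idat + iend
assert target in base64.b64encode(png).decode()
```

Since tEXt chunks are stored uncompressed (unlike zTXt), the chosen bytes survive into the file untouched; all the work is in the offset arithmetic.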
https://bsky.app/profile/ethangach.bsky.social/post/3mdljt2wdcs2v