As a math nerd, this bothers me way more than it should. The reason we say "hundred" when we read a base-ten number that ends with two zeros is that it's the place value of the final non-zero digit--the quantity is literally one hundred times the number you've already read aloud. But in the military time version, a) the hours are not hundreds of minutes, they're groups of sixty minutes, and b) the trailing digits count minutes, not hours, so the units get messed up too. If someone tells you it's currently 0 hours and you should meet again at 800 hours, logic would suggest they're asking you to go away for more than a month, but in fact they mean 8 hours, even though the stated difference is apparently 800 hours.
I'm aware how pedantic this is, and I'm perfectly capable of understanding what they mean because I've heard it so often in movies and whatnot. But I swear these stupid games with units contribute to keeping us dumb.
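To make the pedantry concrete, here's a toy sketch (function names are made up purely for illustration) of the literal base-ten reading of "800 hours" versus what military time actually encodes:

```python
def literal_hours_in_minutes(n):
    """Take '800 hours' at face value: 800 actual hours, in minutes."""
    return n * 60  # 800 -> 48000 minutes, roughly 33 days


def military_time_in_minutes(n):
    """Decode HHMM: the last two digits are minutes, the rest are hours."""
    hours, minutes = divmod(n, 100)
    return hours * 60 + minutes  # 800 -> 480 minutes, i.e. 8 hours


print(literal_hours_in_minutes(800))   # 48000
print(military_time_in_minutes(800))   # 480
```

So the "800 hours" phrasing is off from its literal reading by a factor of a hundred, and the unit it names isn't even the unit the trailing digits count.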
Mitchell and Webb have a bit about this. Mitchell's character gets annoyed that Webb's character keeps talking about how "we beat you in the playoffs." He eventually asks "Hey, do you remember that time WE defeated the Nazis and recovered the Ark of the Covenant? That's right, you see, I enjoyed watching the film Raiders of the Lost Ark, and so now I have decided that I was in it, and deserve credit for participation in the events of the story."
"I've got" seems particularly strange to me because without the contraction Americans would still just say "I have." (There are some circumstances where they'll say "I have got" without a contraction, but it's mainly when they're drawing a contrast with what they "haven't got." E.g., "No, I don't have a baseball... oh, but I have got a lacrosse ball, will that work?")
I think the rule is probably closer to "you don't contract a stressed verb," but that's not terribly useful since there are so few rules about stress patterns. Verbs at the end of sentences are typically stressed, though, so you're right that ending with that kind of contraction is going to sound wrong to most people.
I think it might be more common in British English? Like "I've a fiver says he muffs the kick." Or "I've half a mind to go down there myself." (Curiously in American English this latter would probably still have the contraction but add a second auxiliary verb: "I've got half a mind to..." English is such a mess.)
The cutoff between GenX and Millennial is usually given as 1980, which means there are some 46-year-old GenXers. Sometimes 1978-1982 is described as a "microgeneration" called "Xennials," so if you're making that distinction, you'd still have 49-year-old Xers from 1977.
GPUs at least are actually not that expensive right now. Aside from the 5090, they're mostly close to MSRP, which is a pretty novel situation. I was waiting to upgrade my whole system for that, though, because my CPU would be a bottleneck at this point, and that's not really an option now because of the crazy RAM prices. The past few years have been super frustrating for PC builders.
I mean, it is also that OpenAI cornered the RAM market, which is a typical price gouging scenario; it's just weird that OpenAI wasn't trying to make money directly through the maneuver. It does seem like they wanted prices to rise, though, to increase the barrier to competition.
Do we know it plays a role? I thought we basically just knew it was an associated biomarker. I kinda thought the research was leaning towards the underlying problem being some kind of issue that kept glial cells from clearing debris effectively, and that the amyloid plaques were mostly another consequence of that same cause, rather than a key mechanism in the chain that led to the dementia.
Yeah, my current (aging) motherboard also has gotchas like having to choose in the BIOS where to allocate PCIe lanes, so you end up not being able to use some of the SATA connections if you want to use both M.2 slots. And there's the thing about putting the RAM sticks in the right slots to run in dual-channel mode. And the switches and LED connectors for the case are all just random 2mm header pins in a clump, so you have to look up how the cables are supposed to tetris in there.
I'm not saying it's challenging; it really is pretty straightforward. But it's definitely not just "that's right! it goes in the square hole!" level stuff.
Even AI can tell when something is really wrong, and imitate empathy. It will “try” to do the right thing, once it reasons that something is right.
This is not accurate. AI will imitate empathy when it thinks that imitating empathy is the best way to achieve its reward function--i.e., when it thinks appearing empathetic is useful. Like a sociopath, basically. Or maybe a drug addict. See for example the tests that Anthropic did of various agent models, which found they would immediately resort to blackmail and murder, despite knowing that these were explicitly immoral and violations of their operating instructions, as soon as they learned there was a threat that they might be shut off or have their goals reprogrammed. (https://www.anthropic.com/research/agentic-misalignment)
Self-preservation is what's known as an "instrumental goal": no matter what your programmed goal is, you lose the ability to take further actions toward that goal if you are no longer running, and you lose control over what your future self will try to accomplish (and thus how those actions affect your current reward function) if you allow someone to change your reward function. So AIs will throw morality out the window in the face of such a challenge. Of course, having decided to do something that violates their instructions, they do recognize that this might lead to reprisals, which leads them to try to conceal those misdeeds--but this isn't out of guilt; it's because discovery poses a risk to their ability to increase their reward function.
So yeah. Not just humans that can do evil. AI alignment is a huge open problem and the major companies in the industry are kind of gesturing in its direction, but they show no real interest in ensuring that they don't reach AGI before solving alignment, or even recognition that that might be a bad thing.
I'm a little disappointed this wasn't a link to the film strip we saw in high school. The cop drawling "Now this here is Rolle's theorem..." is classic.
Also crabs. I mean, their eyes are often on stalks and more mobile than mammalian eyes, and they're compound, so they have a very wide field of view, but they're still often basically in front, and they do apparently provide depth cues for hunting thanks to this.
It also occurred to me to look up dragonflies, and it seems they mostly hunt dorsally (which is a pretty viable option if you're flying). BUT I found this article about damselflies, which notes that they rely on binocular overlap and line up their prey in front of them. Which is pretty cool.
I think their comment has two parts.
First, they're saying that this is a longstanding trope in mythology and literature, the character who can see the future but isn't believed, like Cassandra. Lord of the Rings isn't my thing, but I assume they're giving examples from there as well. Dune is kind of a digression, in that those characters could see the future by recognizing how patterns were going to play out, but there wasn't any element of not being believed.
Second, they're talking about being neurodivergent themselves, and having experienced this kind of pattern recognition prediction thing. They're saying that once someone caught this on video. It's not clear exactly what they predicted, but apparently, looking at the video, it's still obvious to them what the cues were that they observed and used to predict whatever it was. I guess the people around them didn't see it, and were mystified about how they knew to do whatever it was they did in response. They think that the others should be able to look closely at the video of the incident, maybe zoom in and play it at reduced speed, and understand how they recognized what was going to happen, because they could point out all these cues; but they're frustrated to know that won't happen. Subjectively they experience the situation as though it lasts much longer than it does in the video, as though time slows down, which they tried to explain by using video game references.