Posts: 0 · Comments: 127 · Joined: 2 yr. ago

  • Permanently Deleted
  • Several years ago I created a Slack bot that ran something like a Jupyter notebook in a container: it would execute the Python code you sent it and respond with the results. It worked in channels you invited it to as well as in private messages, and if you edited the message containing your code, it would edit its response to match the latest input. It was a fun exercise to learn the Slack API, as well as to create something non-trivial and marginally useful in that Slack environment. I knew the horrible security implications of such a bot, even with the Python environment containerized, and never considered opening it up beyond my own personal use.

    Looks like the AI companies have decided that exact architecture is perfectly safe and secure as long as you obfuscate the input pathway by having to go through a chat-bot. Brilliant.
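
    (For the curious, here's a minimal sketch of how a bot like mine wires together. I'm assuming the slack_bolt library and a throwaway Docker container here; the library choice, resource limits, and handler details are illustrative, not the original bot's exact code.)

    ```python
    import subprocess

    from slack_bolt import App  # assumed stack; App() reads SLACK_BOT_TOKEN /
                                # SLACK_SIGNING_SECRET from the environment

    app = App()

    # Map each user message's timestamp to the bot's reply timestamp so an
    # edited message can update the matching reply.
    replies: dict[str, str] = {}


    def run_sandboxed(code: str) -> str:
        """Run code in a fresh, network-less container and capture its output."""
        try:
            result = subprocess.run(
                ["docker", "run", "--rm", "--network=none", "--memory=128m",
                 "python:3.12-slim", "python", "-c", code],
                capture_output=True, text=True, timeout=10,
            )
        except subprocess.TimeoutExpired:
            return "error: execution timed out"
        return result.stdout + result.stderr


    @app.event("message")
    def handle_message(event, client):
        subtype = event.get("subtype")
        if subtype == "message_changed":
            # The user edited their code: update our earlier reply in place.
            edited = event["message"]
            reply_ts = replies.get(edited["ts"])
            if reply_ts:
                client.chat_update(channel=event["channel"], ts=reply_ts,
                                   text=run_sandboxed(edited["text"]))
        elif subtype is None and event.get("text"):
            posted = client.chat_postMessage(channel=event["channel"],
                                             text=run_sandboxed(event["text"]))
            replies[event["ts"]] = posted["ts"]


    if __name__ == "__main__":
        app.start(port=3000)
    ```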

  • “Generally, what happens to these wastes today is they go to a landfill, get dumped in a waterway, or they’re just spread on land,” said Vaulted Deep CEO Julia Reichelstein. “In all of those cases, they’re decomposing into CO2 and methane. That’s contributing to climate change.”

    Waste decomposition is part of the natural carbon cycle. Burning fossil fuels isn't. We should not be suppressing part of the natural cycle so we can supplant it with our own processes. This is Hollywood accounting applied to carbon emissions, and it's not going to solve anything.

  • A balloon full of helium has more mass than a balloon without helium, but less weight

    That's not true. A balloon full of helium has more mass and more weight than a balloon without helium. Weight depends only on the mass of the balloon plus helium and the local gravitational field (here, Earth's).

    The balloon full of helium displaces far more air than the uninflated one, since it is inflated. The air displaced by the inflated balloon weighs more than the balloon and the helium inside it combined, so the balloon floats due to buoyancy from the atmosphere. Its weight is the same regardless of the medium it's in, but the net force it experiences is not.
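
    For concreteness, here's a back-of-the-envelope check with assumed round numbers for a typical party balloon (all values are illustrative):

    ```python
    # Weight of the balloon+helium vs. the weight of the air it displaces.
    g = 9.81               # m/s^2, gravitational acceleration
    rho_air = 1.20         # kg/m^3, sea-level air density
    rho_helium = 0.17      # kg/m^3
    volume = 0.014         # m^3, roughly a 30 cm diameter balloon
    balloon_mass = 0.0015  # kg, the rubber itself

    weight = (balloon_mass + rho_helium * volume) * g  # always points down
    buoyancy = rho_air * volume * g                    # weight of displaced air

    print(f"weight   = {weight * 1000:.1f} mN")    # ~38 mN
    print(f"buoyancy = {buoyancy * 1000:.1f} mN")  # ~165 mN -> net force is up
    ```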

  • He explicitly argues that “Qatanani is not part of ‘the people’ the First Amendment protects” and that non-citizens cannot “claim its protection.”

    His reasoning? A convoluted “originalist” argument claiming that because the First Amendment refers to “the people,” it only applies to those who are “part of a national community” with sufficient “allegiance” to the sovereign. Non-citizens, he argues, owe only “temporary allegiance” and therefore get only “temporary protection”—protection that can be withdrawn whenever the government decides they’ve become “dangerous.”

    This sounds like the judge fell out of a parallel universe. Is it typical to make up so many new, complex semantic constructs in a single opinion? A "national community" and some notion of membership in it. "Allegiance" to "the sovereign"? Sovereign what? Like the head of state, or a platonic ideal of the USA? And once "allegiance" is defined, there's now "temporary allegiance" that begets "temporary protection"?

    My understanding of legal matters is that judges typically pore over not just the wording and meaning of the law, but also the wording and meaning of other judges' opinions and verdicts, and concepts like these are developed over many cases spanning decades or more. I'm really not usually one for conspiracy theories, but either this judge has the wrong job and should be writing tabletop RPG modules, or this has all been planned out, he's been fed a path his verdicts are supposed to slowly tread, and he skipped ahead a few chapters.

  • I would honestly like to see a cut of just about any TV show or movie that uses stunt doubles where the doubles do both the lines and the action. I would like to see how different a director would shoot a scene if they weren't constrained to choosing angles and lighting to make it look like two different people were the same person.

  • At least it was better than Indiana Jones and the Kingdom of the Crystal Skull.

  • I don't follow Mexican politics closely, but this could be part of an effort to curb obesity. I've heard they introduced taxes on sugary drinks for this, so this might be another avenue.

    If people want cheap snacks and private companies only make unhealthy ones, you can introduce regulations to micromanage what they can produce, you can introduce a complex taxation scheme to disincentivize sugary snacks, or you can introduce your own product that meets a perceived unmet demand in an underserved market.

  • The thing is it's been like that forever. Good products made by small- to medium-sized businesses have always attracted buyouts where the new owner basically converts the good reputation of the original into money through cutting corners, laying off critical workers, and other strategies that slowly (or quickly) make the product worse. Eventually the formerly good product gets bad enough there's space in the market for an entrepreneur to introduce a new good product, and the cycle repeats.

    I think what's different now is that, after 70+ years of this going on unabated, economic inequality means the people with good ideas for products can't afford to become entrepreneurs anymore. The market openings are there, but the people who made everything so bad now have all the money. So the cycle is broken not by good products staying good, but by bad products having no replacements.

  • The technological progress LLMs represent has come to completion. They're a technological dead end. They have no practical application because of hallucinations, and hallucinations are baked into the very core of how they work. Any further progress will come from experts learning from the successes and failures of LLMs, abandoning them, and building entirely new AI systems.

    AI as a general field is not a dead end, and it will continue to improve. But we're nowhere near the AGI that tech CEOs keep promising LLMs are on the verge of.

  • The first statement is not even wholly true. While training does take more power, executing the model (called "inference") still takes much, much more power than non-AI search algorithms, or really any traditional computational algorithm besides bogosort.

    Big Tech weren't doing the best they possibly could in transitioning to green energy, but they were making substantial progress before LLMs exploded onto the scene, because the value proposition was there: traditional algorithms were efficient enough that the PR gain from the green energy transition offset the cost.

    Now Big Tech have for some reason decided that LLMs represent the biggest gamble ever: the first to find the breakthrough to AGI will win it all and completely take over all IT markets, so they need to consume as much as they can get away with to maximize the probability that the breakthrough comes from their engineers.

  • They keep tasking these LLMs with things that traditional programming solved a long time ago. There are already vending machines run by computers. They work just fine without AI.

    Honestly, the computer-controlled vending machines are already over-engineered, since many of them play ads when you walk up. The last customer-focused feature added was credit card support, and that just needs a card reader and a minimal IoT integration. They really shouldn't even have screens.
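
    To illustrate the point, the entire "business logic" of a vending machine is a small deterministic state machine; no AI required. (The prices and slot names below are made up for the example.)

    ```python
    # A toy vending machine: insert coins, pick a slot, get the item and change.
    PRICES = {"A1": 150, "A2": 125, "B1": 200}  # cents

    class VendingMachine:
        def __init__(self) -> None:
            self.credit = 0  # cents inserted so far

        def insert_coin(self, cents: int) -> None:
            self.credit += cents

        def select(self, slot: str) -> str:
            price = PRICES.get(slot)
            if price is None:
                return "invalid selection"
            if self.credit < price:
                return f"insert {price - self.credit} more cents"
            change, self.credit = self.credit - price, 0
            return f"dispensing {slot}, change: {change} cents"

    machine = VendingMachine()
    machine.insert_coin(100)
    print(machine.select("A1"))  # -> insert 50 more cents
    machine.insert_coin(100)
    print(machine.select("A1"))  # -> dispensing A1, change: 50 cents
    ```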

  • Permanently Deleted
  • Why are they... why are they having autocomplete recommend medical treatment? There are specialized AI algorithms that already exist for that purpose that do it far better (though still not well enough to even assist real doctors, much less replace them).

  • The change doesn’t reflect unprecedented temperatures, with Fairbanks having reached 90 degrees twice in 2024, Srinivasan said. It’s purely an administrative change by the weather service.

    I think this is a bit disingenuous. Sure, it's not technically "unprecedented" because it has happened before, specifically last year, but the change was made to better help people, and better helping people means making this change, since hotter temperatures are occurring more often because of climate change.

    Thoman also clarified that the term swap doesn’t have anything to do with climate change.

    They may not be directly citing climate change, but it's absolutely the root cause. I wonder if they're just trying to stay under Trump's radar so he doesn't make them roll it back for saying the C phrase. In bad political times, doing good sometimes means speaking the party line while doing good works behind its back.

  • My point is that this kind of pseudo-intelligence has never existed on Earth before, so evolution has had free rein to use language sophistication as a proxy for humanity and intelligence without encountering anything that would put selective pressure against this heuristic.

    Human language is old. Way older than the written word. Our brains have evolved specialized regions for language processing, so evolution has clearly had time to operate while language has existed.

    And LLMs are not the first sophisticated AI that's been around. We've had AI for decades, and really good AI for a while. But people don't anthropomorphize other kinds of AI nearly as much as LLMs. Sure, people ascribe some human-like intelligence to any sophisticated technology, and some people in history have claimed some technology or another is alive or sentient. But with LLMs, we're seeing a larger portion of the population believing it than we've ever seen before.

  • My running theory is that human evolution developed a heuristic in our brains that associates language sophistication with general intelligence, and especially with humanity. The very fact that LLMs are so good at composing sophisticated sentences triggers this heuristic and makes people anthropomorphize them far more than other kinds of AI, so they ascribe more capability to them than evidence justifies.

    I actually think this may explain some earlier reports of weird behavior from AI researchers as well. I seem to recall reports of a Google researcher believing he had created sentient AI (a quick search produced this article). The researcher was fooled by his own AI not because he drank the Kool-Aid, but because he fell prey to this neural heuristic that's in all of us.

  • The human brain is not an ordered, carefully engineered thinking machine; it's a massive hodge-podge of heuristic systems to solve a lot of different classes of problems, which makes sense when you remember it evolved over millions of years as our very distant ancestors were exposed to radically different environments and challenges.

    Likewise, however AGI is built, in order to communicate with humans and solve most of the same problems, it's probably going to take an amalgamation of different algorithms, just like brains.

    All of this to say, I agree memorization will probably be an integral part of that system, but it's also going to be a small part of the final system. So I also agree with the article that we're way off from AGI.

  • He looks more like he's thinking "Really? It got this far? Enough people thought this was a good idea that we're all here doing this photo shoot for the promotional image?"

    (I think the only part that makes him look like he's on the verge of crying is the reflection of the studio lights in his eyes, which looks like extra moisture.)

  • "Almost half a dozen times" seems like a weird way to say 5.

  • I'm afraid there's a typo in your title. It's "a two-hoo."