
Posts 7 · Comments 1410 · Joined 3 yr. ago

  • I don't hate this article, but I'd rather have read a blog post grounded in the author's personal experience engaging with a personalized AI assistant. She clearly has her own opinions about how they should work, but instead of being about that, the piece tries to sound like there's a lot of objective certainty behind it, and that falls flat because it never draws a strong connection to evidence.

    Like this part:

    Research in cognitive and developmental psychology shows that stepping outside one’s comfort zone is essential for growth, resilience, and adaptation. Yet, infinite-memory LLM systems, much like personalization algorithms, are engineered explicitly for comfort. They wrap users in a cocoon of sameness by continuously repeating familiar conversational patterns, reinforcing existing user preferences and biases, and avoiding content or ideas that might challenge or discomfort the user.

    While this engineered comfort may boost short-term satisfaction, its long-term effects are troubling. It replaces the discomfort necessary for cognitive growth with repetitive familiarity, effectively transforming your cognitive gym into a lazy river. Rather than stretching cognitive and emotional capacities, infinite-memory systems risk stagnating them, creating a psychological landscape devoid of intellectual curiosity and resilience.

    So, how do we break free from this? If the risks of infinite memory are clear, the path forward must be just as intentional.

    There's some hard evidence that stepping out of your comfort zone is good, but not really any that preventing people from stepping out of their comfort zone is in practice the effect that "infinite memory" features of personal AI assistants have on them. That part is just rhetorical speculation.

    Which is a shame, because how that affects people is pretty interesting to me. The idea of using an LLM with these features always freaked me out a bit, and I quit using ChatGPT before they were implemented, but I want to know how it's going for the people who didn't, and who use it for stuff like the given example of picking a restaurant to eat at.

  • There's at least some difference between "this has happened before" and "this is currently likely to happen", since a known method would have been fixed by now. I've gotten viruses before from just visiting websites, but that was decades ago, and there's no way the same method would work now.

  • Nice to see someone actually trying it themselves to do their own analysis despite having reservations

  • Stuff like this makes me wonder, at what point is it bad enough that the truisms about leaving medical advice to licensed healthcare professionals become wrong, and everyone would be better off turning to anything else instead of engaging with the system? Are we not there yet? How much further would there be to go?

  • Rambling about something for long enough that people should be able to tell is how I do it.

  • It's possible, but I've followed some public comment processes for regulatory stuff before, and large volumes of comments make them take way longer, because there is manual work involved. If a politician wants to still have actual people manually consider the contents of their inbox (which they absolutely should), using AI instead of a form letter will make that much harder for them to do. AI talking to AI to determine what the public thinks and wants is probably going to lose a lot in translation, and if it's using service-based AI, it will give the companies running it another rather direct way to influence political outcomes.

    Given all that, I'm not sure what advantage there is to balance against it either. As opposed to sending a copy of the form letter, where you can assume they will at least count how many people have done that, what's even the benefit of having an LLM rewrite it first?

  • This is where I learned that there is in fact a word that kind of rhymes with orange

  • Well, the person you responded to above was talking about sending more than one, which is the worst part. But even if you are only using AI to rephrase the canned response for your single comment, that creates a situation where it is more difficult for them to actually read and consider the different points people might be bringing up, because now there are lots of messages that are basically the canned response in content and intent, but more effort to group together. Also, the people going through them will probably be able to tell AI is being used, which could call into question whether someone was sending more than one even if you were not.

  • One good thing about them is that if you have a cat, they are less likely to get destroyed than the other type of blinds.

  • I don't hate AI and think it's fine to use for a bunch of things, but using it to falsify the level of public engagement on a political issue is a clear misuse. It's easy to see how that could make democracy work less well, or backfire and be used as an argument that all the public sentiment about the issue is astroturfed.

  • A bundler, a transpiler, a runtime (designed to be a drop-in replacement for Node.js), a test runner, and a package manager - all in one.

    Bun's single-file executables turned out to be perfect for distributing CLI tools. You can compile any JavaScript project into a self-contained binary that runs anywhere, even if the user doesn't have Bun or Node installed. Works with native addons. Fast startup. Easy to distribute.
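    A minimal sketch of the workflow described above, assuming a hypothetical entry file named hello.ts (the `bun build --compile` flag is real; the file and output names are made up for illustration):

    ```shell
    # Compile a JavaScript/TypeScript project into one self-contained binary.
    # --compile embeds the Bun runtime, so end users need neither Bun nor Node.
    bun build ./hello.ts --compile --outfile hello

    # The result is a single executable you can distribute and run as-is.
    ./hello
    ```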

  • Sounds like an additional reason to be doing it in a way where participants can't be debanked by payments middlemen

  • It will definitely affect incentives, which is worth considering imo. With property taxes, there is some controversy, since someone can be forced to sell if their home value balloons past their income level, although it also has arguably positive effects like disincentivizing holding on to hoards of vacant properties and doing nothing with them.

    I don't really know exactly how it would go, but my first thought is people would sell stocks more readily and often, both because they need to in order to cover taxes, and because selling triggering capital gains wouldn't be a thing to worry about anymore.

  • Part of the headache here is that this situation inherently props up a few monopolistic platforms, rather than allowing people to use whatever payment system is available in their own countries. Some of this can be worked around using cryptocurrencies – famously, the Mitra project leverages Monero for this very purpose, although I'm told it now can accept other forms of payment as well.

    Hell yeah, I didn't know about Mitra. Judging by what the payments part is for, it sounds like it's a Patreon-esque kind of deal.

  • Not quite as bad as property tax on a home, since that taxes the total value rather than just the difference between what you used to have and have now.

  • Smart

  • Well, at least the advertising companies will lose money this way

  • That kind of painting seems more likely to come alive

  • eating grass will destroy your teeth

  • The article is saying that one of the main things they are trying to axe is Automatic Emergency Braking requirements, and it links to a page with this video. The people in the biggest vehicles will be mostly fine, I think; it's everyone else that's in trouble here.