
Posts: 3 · Comments: 369 · Joined: 3 yr. ago

  • A worldview is your current representational model of the world around you. For example, you know you're a human on Earth in a physical universe with a set of rules; you have a mental representation of your body and its capabilities, your location, and the physicality of the things in your location. It can also include abstract things, like your personality, your relationships, and your understanding of what's possible in the world.

    Basically, you live in reality, but you need a way to store a representation of that reality in your mind in order to be able to interact with and understand that reality.

    The simulation part is your ability to imagine manipulating that reality to achieve a goal. If you break that down, you're trying to convert reality from your perceived current real state A to an imagined desired state B. Reasoning is coming up with a plan to convert the worldview from state A to state B step by step. Say you want to brush your teeth: you want to convert your worldview of you having dirty teeth to you having clean teeth, and to do that you reason that you need to follow a few steps, like moving your body to the bathroom, retrieving tools (toothbrush and toothpaste), and applying mechanical action to your teeth to clean them. You created a step-by-step plan to change the state of your worldview to a new desired state you came up with. The goal doesn't need to be physical either; it could be abstract, like calculating a tip for a bill. It can also be a grand goal, like going to college or creating a mathematical proof.

    LLMs don't have a representational model of the world; they don't have a working memory or a world simulation to use as a scratchpad for testing out reasoning. They just take a sequence of words and predict the next word that is probabilistically and relationally likely to follow, based on their training data.

    They could be a really important cortex that assists in developing a worldview model, but in their current state as single-task AI models, they cannot do reasoning on their own.

    Knowledge retrieval is an important component that assists in reasoning though, so it can still play a very important role in reasoning.
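
    The "state A to state B" view of reasoning above can be sketched as a tiny search problem. This is purely illustrative: the states, action names, and preconditions below are toy placeholders I made up for the toothbrushing example, not a real cognitive model, and the planner is just a breadth-first search over sets of facts.

    ```python
    from collections import deque

    # Hypothetical worldview actions for the toothbrushing example:
    # action name -> (facts required before it can run, facts it adds)
    ACTIONS = {
        "go_to_bathroom":   (set(),                             {"in_bathroom"}),
        "grab_toothbrush":  ({"in_bathroom"},                   {"has_toothbrush"}),
        "apply_toothpaste": ({"has_toothbrush"},                {"has_paste"}),
        "brush":            ({"has_toothbrush", "has_paste"},   {"teeth_clean"}),
    }

    def plan(start, goal):
        """Breadth-first search from state A to any state containing the goal facts."""
        queue = deque([(frozenset(start), [])])
        seen = {frozenset(start)}
        while queue:
            state, steps = queue.popleft()
            if goal <= state:          # all goal facts hold: state B reached
                return steps
            for name, (needs, adds) in ACTIONS.items():
                if needs <= state:     # action's preconditions are satisfied
                    nxt = frozenset(state | adds)
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, steps + [name]))
        return None                    # no sequence of actions reaches state B

    print(plan({"teeth_dirty"}, {"teeth_clean"}))
    # -> ['go_to_bathroom', 'grab_toothbrush', 'apply_toothpaste', 'brush']
    ```

    The point of the sketch is the shape of the computation: reasoning here is exploring imagined intermediate worldviews until one satisfies the goal, which is exactly the scratchpad-style simulation the comment argues a bare next-word predictor doesn't have.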

  • That's all it didn't horribly fail at

  • I'm pretty sure they fired their Google assistant team already, so that's probably part of it.

  • They seem to be pushing it on the productivity angle, but until someone makes a super lightweight, open-air headset, I'm not wearing it for 8 hours a day.

  • There was an old xkcd reposted recently about how the earth and sun both revolve around each other and the truth is in the middle. The center of rotation is still inside the sun, though. The middle depends on the magnitude of the lie, and Russia lies much more.

  • A worldview simulation it can use as a scratchpad for reasoning. I view reasoning as a set of simulated actions to convert a worldview from state A to state B.

    It depends on how you define intelligence, though. Normally people define it as human-like, and I think there are three primary subtypes of intelligence needed for cognizance: reasoning, awareness, and knowledge. I think the current generation is figuring out the knowledge type, but it needs to be combined with the other two to be complete.

  • I think it will get good enough to do simple tickets on its own with oversight, but I would not trust it without it submitting its work via a PR for review and iteration.

    I agree, it would take at least a decade for fully autonomous programming, and frankly, by the time it can fully replace programmers it will be able to fully replace every office job, at which point we're going to have to rethink everything.

  • How do people know that you're on the verge of burnout? They're not mind readers. They probably see you overachieving and just think, wow, that person's really got their shit together, without realizing that you're pushing yourself too hard.

  • No, his name was Percival Computer. Personal computer is just a common mishearing of the word.

  • It should also penalize the decision makers who hide behind the facade of a faceless corporation to allow them to do terrible things without consequences

  • While I'm not outraged at Google, I do think they should absolutely add the help notice to the men's one since there's no downside to it and some people could still be helped

  • Depends on how you define thinking. I agree, LLMs could be a component of thinking, specifically knowledge and recall.

  • Lol yup, some people think they're real smart for realizing how limited LLMs are, but they don't recognize that the researchers that actually work on this are years ahead on experimentation and theory already and have already realized all this stuff and more. They're not just making the specific models better, they're also figuring out how to combine them to make something more generally intelligent instead of super specialized.

  • It's not linear either. Brains are crazy complex and have subcortexes that are specialized for specific tasks. I really don't think that LLMs alone can possibly demonstrate advanced intelligence, but I do think one could be a very important cortex for it. There are also different types of intelligence: LLMs are very knowledgeable and have great recall, but they lack reasoning and a worldview.

  • Good. It's dangerous to view AI as magic. I've had to debate way too many people who think LLMs are actually intelligent. It's dangerous to overestimate their capabilities lest we use them for tasks they can't perform safely. They're very powerful, but the fact that they're totally non-deterministic and unpredictable means we need to very carefully design any system that relies on LLMs, with heavy guardrails.

  • I thought legit ones existed, but I guess the concept exists but hasn't been paired with technology and scaled. Tech bros are more concerned about making a cheap buck than providing a good service so they'd rather come up with a shitty addictive service that you have to pay for forever rather than coming up with an efficient service that actually achieves the goal.

  • Are you still optimistic after looking at the price of a Mercedes EV?

  • That would be the dumbest decision ever. Email is a prime space to be revolutionized by AI.

  • I think you're describing professional match makers?

  • The context is important since it informs us about why he's doing this, which is probably to further inflate his stock value