Posts: 0 · Comments: 772 · Joined: 6 mo. ago

  • I'm well aware of how LLMs work. And I'm pretty sure the 'apple' part of the prompt would trigger significant activity in the areas related to apples. It's obviously not a thought about apples the way a human has one; the complexity and structure of a human brain are very different. But the LLM does have a model of how the world works, from its token-relationship perspective, and that's what it's doing: following a model. It's nothing like human thought, but really it's a matter of degree. Sure, 'apples to justice' is a fair description. And it doesn't 'ponder', because we don't feed back continuously in a typical LLM setup, although I suspect that's coming. But what we're doing with LLMs is a basis of thought. I see no fundamental difference, except scale, between current LLMs and human brains.

  • You can survive. From knowing that eating alleviates hunger, to knowing what to say to get an idea across, to designing new technology that improves quality of life: it all requires that we model reality in some form.

  • It only failed because the likes of you made it fail. Congratulations on Trump, btw.

    1. And we'll take delivery of those. And gain experience using them. Maybe the U.S. will return to sanity in the future and we can move forward with more.

  • Again, to understand our observable reality and make predictions.

  • You have to understand instructions on some level to be able to follow them. 👍🏻

  • Well, the LLM does briefly 'think' about apples, in that it activates its 'thought' areas relating to apples (the tokens representing apples in its system); a toy sketch of that idea follows below. Right now, an LLM's internal experience is based on its previous training plus the current prompt while it's running. Our brains are always on and circulating thoughts, so of course that's a very different concept of experience. But you can bet there are people working on building an AI system (with LLM components) that works that way too. The line will get increasingly blurred. Our brain processing is just an organic statistical model with complex state management and chemistry-based timing control.
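    A minimal toy sketch of that 'activation' idea, with the obvious caveat that nothing below comes from a real model: the vocabulary, the hand-picked 4-d embedding vectors, and the `activation` helper are all invented for illustration.

```python
# Toy sketch (not a real LLM): a prompt mentioning "apple" lights up the
# representation nearest the "apple" concept. Embeddings are made-up 4-d
# vectors, chosen by hand so fruit-ish words cluster; real models learn them.
import numpy as np

emb = {
    "apple":   np.array([0.9, 0.1, 0.0, 0.0]),
    "pear":    np.array([0.8, 0.2, 0.1, 0.0]),
    "justice": np.array([0.0, 0.1, 0.9, 0.3]),
    "eat":     np.array([0.6, 0.4, 0.0, 0.1]),
    "the":     np.array([0.1, 0.1, 0.1, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def activation(prompt, concept):
    # Crude stand-in for "activity in the areas related to a concept":
    # similarity between the averaged prompt embedding and the concept vector.
    vec = np.mean([emb[t] for t in prompt], axis=0)
    return cosine(vec, emb[concept])

print(activation(["eat", "the", "apple"], "apple"))  # high (~0.96)
print(activation(["the", "justice"], "apple"))       # low  (~0.11)
```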

  • If you're going to define it that way, then obviously that's how it is. But do you really understand what understanding is?

  • Locked thread: Rent is theft

  • I think you underestimate the effort and work of being a landlord. I'm not a landlord myself, but I own my own house and I know how much effort it takes to do upkeep, or even just to manage others doing the work.

    Yes, people usually want to make the most money. And if we shift to government 'free' mortgages for everyone to build (or hire people to build) their own homes, then there will be many people who take advantage of that situation too. Either way, regulation is needed to keep the system on track.

    The capitalist system encourages people to invest in other people's housing, and this is not an inherently bad thing: it can find efficiencies that governments never would. Housing is special because everyone needs it, so regulation is needed to ensure those market efficiencies work for the benefit of the population in general. Government should provide baseline housing for all, if not on moral grounds then simply because it's cheaper than dealing with the costs of an unhoused population. But once we get to the next level up, the housing comfort that people reasonably desire, a market economy can work well if properly regulated. As they say, from a pragmatic point of view, capitalism is the worst system except for all the others.

  • If you don't see the new things that computers can do with AI, then you are being purposely ignorant. There's tons of slop, along with useful capabilities; but even that slop generation is clearly a new ability computers didn't have before.

    And yes, if you can process written Chinese fully and respond to it, you do understand it.

  • That's a difficult question. The semantics of 'understand', and the metaphysics of how it might apply here, are rather unclear to me. LLMs have a certain consistent internal modeling that agrees with their output, and in that respect they're like human thought, which I think we'd agree involves 'understanding'. But feeding 1+1 into a calculator will also consistently produce the same result. Is that understanding? In some respects it is: the math is fully represented by the inner workings of the calculator. It doesn't feel to us like real understanding because it's trivial and so transparently causal, but I think that's just because the problem is so simple.

    So what we end up with is this: assuming an AI is reasonably correct, whether it is 'really' understanding is mostly a function of the complexity it handles. And the complexity of human thought is much higher than that of current AI systems, partly because we always hold all sorts of other thoughts and memories that can be independent of a particular task but get combined at some level. So, in a way, the LLM construct understands its limited mapping of a problem. But even though it uses the same input/output language as humans do, current LLMs don't understand things at anywhere near the level that humans do.
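    To make the calculator point concrete, here's a trivial sketch (the `TinyCalculator` class is invented for illustration): its internals fully represent the arithmetic it performs, and the open question is only whether such a shallow but consistent mapping deserves the word 'understanding'.

```python
# A complete, consistent, and utterly shallow "model" of addition.
class TinyCalculator:
    def add(self, a: int, b: int) -> int:
        # The entire inner representation of the math is this one rule.
        return a + b

calc = TinyCalculator()
assert calc.add(1, 1) == 2  # always consistent with its "model"
```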

  • I think maybe you misunderstand what a model is in this context. It's any way of mapping observations onto a theory of how things work. I would say a good model is one that produces useful, testable predictions: that tests the accuracy of the model, and it also provides for innovation (tiny sketch below). You can have a model based on a random sky fairy magically doing stuff, and write a book about it, but that model is untestable, and useless.
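    A minimal sketch of that 'testable predictions' criterion, using made-up observations and an assumed straight-line theory (nothing here is real data):

```python
# Fit a "theory" (a straight line) to past observations, then test it
# against an observation it has never seen. All numbers are invented.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # observations: inputs
y = np.array([2.1, 3.9, 6.2, 7.8])   # observations: outcomes (roughly 2x)

slope, intercept = np.polyfit(x, y, deg=1)  # the fitted "model"

x_new, y_actual = 5.0, 10.1                 # a new, held-out observation
y_pred = slope * x_new + intercept          # the testable prediction

print(f"predicted {y_pred:.1f}, observed {y_actual}")
# A prediction close to the observation supports the model; a big miss
# falsifies it. An untestable model offers no such check at all.
```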

  • Locked thread: Rent is theft

  • Do you know the difference between profit and income for a personal landlord? Effectively not much. It's not just an investment for them; it's a good chunk of their job and their income. Often they are paying the mortgage with income from another job too. They can rent their property out for less than the upkeep costs because they are gaining capital that they can eventually sell, as the rough numbers below illustrate. Larger landlords can do even better thanks to economies of scale on upkeep.

    Unfortunately, landlords will often try to make the most and so maximize rent based on the market. The market should balance this out (i.e., if being a landlord is so lucrative, more people should become landlords, increasing competition so costs would go down). But many people don't want to figure out all the details, borrow large sums of money, take on the risk, take on the stress of managing tenants, etc., which just shows that the value added by the landlord is real. Of course, without enough regulation things can go wonky, like our current system with large corporate landlords. I'm not saying that's good, just that the basic landlord concept isn't inherently flawed.
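    Here are the rough numbers mentioned above, as a back-of-envelope sketch; every figure is a hypothetical round number, not market data:

```python
# Why rent below carrying costs can still pay off: negative cash flow
# can be outweighed by the equity (capital) the owner accumulates.
rent           = 1500 * 12  # annual rent collected
upkeep         = 6000       # maintenance, taxes, insurance per year
mortgage_paid  = 14000      # annual mortgage payments
principal_part = 9000       # portion of those payments that builds equity

cash_flow   = rent - upkeep - mortgage_paid  # yearly cash in hand
equity_gain = principal_part                 # capital the owner keeps

print(f"cash flow: {cash_flow:+}")                    # -2000: losing cash...
print(f"economic gain: {cash_flow + equity_gain:+}")  # +7000: ...but ahead overall
```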

  • Locked thread: Rent is theft

  • Yes, they provide money (or some other resource). And somebody or something has to provide that if the person who's going to live there can't. The capitalist system encourages people to do that. Otherwise the government has to, and governments are generally too big to do a good job: the people making the decisions don't care about the details, and it becomes very inefficient. When people use their own money to try to make money, they tend to work out the most efficient way to do so. Of course, we need regulation so that the efficiency the capitalist system brings benefits the goal of affordable, quality housing.

  • Locked thread: Rent is theft

  • If a co-op takes the loan, aren't they just becoming a landlord? And who does the work to organize it? Are they paid? Isn't that just like a landlord taking profit? If you look at the government as just a collective of the people, then there's no magical entity 'eating the risk': it just means the people get screwed over and/or someone doesn't get paid for their work. Yes, you can use a handyman to fix your roof, but you have to pay them. And if you can't afford to, you what, take out a bigger loan from the government, which endlessly prints money?

  • I never said it's directly like an LLM; that's a very specific form. The brain has many different structures, and the neural interconnections we can map have been shown to be a form of convolution, in much the same way that many AI systems use it (not by coincidence; see the sketch below). Scientists generally avoid metaphysical subjects like consciousness because they're inherently unprovable. We can look at the results of processing/thought and quantify the complexity and accuracy. We do this for children at various ages and can see how they learn to think with increasing complexity. We can do this for AI systems too.

    The leaps we've seen over the last few years, as the computational power of computers has crossed some threshold, show emergent abilities that only a decade ago were thought to be impossible. Since we can never know anyone else's experience, we can only go on input/output. And so if it looks like intelligence, then it is intelligence; the concept of 'thought' in this context is only semantics. There is, so far, nothing to suggest that magic is needed for our brains to think; it's just a physical process. So as we add more complexity and different structures to AI systems, there's no reason to think we can't make them do the same as our brains, or more.
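    For reference, here's a minimal sketch of the convolution operation being referred to: one shared set of weights applied as a sliding weighted sum. The signal and kernel values are arbitrary toy numbers.

```python
# 1-D convolution: every output is the same fixed weighted sum of a
# sliding window of inputs, i.e. one pattern of local weighted
# connections repeated everywhere along the signal.
import numpy as np

signal = np.array([0.0, 1.0, 2.0, 1.0, 0.0, -1.0])
kernel = np.array([0.25, 0.5, 0.25])  # one shared set of "synaptic" weights

output = np.convolve(signal, kernel, mode="valid")
print(output)  # -> [1.  1.5 1.  0.]
```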

  • A model is an understanding of how something works. It allows one to predict how things might react in different general cases, which can be very useful for innovation. You don't need to try to understand things if you don't want to, but it sounds a bit ignorant.

  • The human brain is exactly like an organic, highly parallel computer system using convolution, just like AI models do; it's just way more complex. We know how synapses work. We know the form of grey matter. It's too complex for us to model it all artificially at this point, but there's nothing indicating it requires a magical function to make it work.

  • There's no reason to think that the thought and analysis you perceive aren't based on exactly such complex historical weighted averages in your brain. In fact, since we do know the basic fundamentals of how brains work, it would seem that's exactly what's happening (toy sketch below). What's funny is people thinking their brain is anything magically different from an organic computer.
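    A toy sketch of prediction from 'historical weighted averages', assuming the simplest possible scheme (an exponentially weighted average; the function and numbers are invented for illustration):

```python
# Predict the next observation as an exponentially weighted average of
# history: recent values get weight alpha, older ones decay by (1 - alpha).
# A brain-scale version would use vastly more state and richer weighting,
# but the basic ingredient is the same kind of weighted history.
def ewma_predict(history, alpha=0.5):
    estimate = history[0]
    for x in history[1:]:
        estimate = alpha * x + (1 - alpha) * estimate
    return estimate

print(ewma_predict([10, 12, 11, 13]))  # 12.0, a weighted guess at the next value
```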