
  • I think pointing out the circular definition is important, because even in this comment, you've said "To be aware of the difference between self [and not-self] means to be able to [be aware of] stimuli originating from the self, [be aware of] stimuli not from the self, ...". Sure, but that doesn't provide a useful framework IMO.

    For qualia, I'm not concerned about the complexity of the human brain, or different neural structures. It might be hard with our current knowledge and technology, but that's just a skill issue. I think it's likely that at some point, humankind will be able to compare two brains with different neural structures, or even wildly different substrates like human brain vs animal, alien, AI, whatever. We'll have a coherent way of comparing representations across those and deciding if they're equivalent, and that's good enough for me.

    I think we agree on LLMs and chess engines: they don't learn as you use them. I've worked with both under the hood, and my point is exactly that they're a good demonstration that awareness (i.e. to me, having a world model) and learning are related but different.

    Anyways, I'm interested in hearing more about your project if it's publicly available somewhere

  • I think the definition of consciousness as "internal state that observably correlates to external state" would clarify here. Gravel wouldn't be conscious, because it has no internal state that we can point to and say correlates with external state. Galaxies and the universe as a whole don't either, as far as we can tell. Galaxies don't have internal state that represents e.g. other galaxies, unless you count the humans inside them, but IMO it would be more proper to limit the definition to the minimum amount of state possible. You don't count the galaxy as having internal state that represents external state if you can confine that state to one tiny, self-contained part of the galaxy, i.e. a human brain.

  • I made another comment pointing this out for a similar definition, but OK so awareness is being able to "recognize", and recognize in turn means "To realize or discover the nature of something" (using Wiktionary, but pick your favorite dictionary), and "realize" means "To become aware of or understand", completing the loop. I point that out, because IMO the circularity means the whole thing is useless from an empirical perspective and should be discarded. I also think qualia is just philosophical navel-gazing for what it's worth, much like common definitions of "awareness". I think it's perfectly possible in theory to read someone's brain to see how something is represented and then twiddle someone else's brain in the same way to cause the same experience, or compare the two to see if they're equivalent.

    As far as a computer process recognizing itself, it certainly can compare itself to other processes. It can e.g. iterate through the list of processes and kill everything that isn't itself. It can look at processes and say "this other process consumes more memory than I do" (a rough sketch of that follows at the end of this comment). It's super primitive and hardcoded, but why doesn't that count? I also think learning is separate but related. If we take the definition of "consciousness" as a world model or representation, learning is simply how you expand that world model based on input. Something can have a world model without any ability to learn, such as a chess engine. It models chess very well and better than humans, but is incapable of learning anything else, i.e. expanding its world model beyond chess.

    > If you created a computer program capable of learning patterns in the behavior of its own process(es) and learning how those behaviors are similar/dissimilar or connected to those of other processes, then yes, I’d say your program is capable of consciousness. But just adding the ability to detect its process id is simply like adding another built-in sense; it doesn’t create conscious self-awareness.

    I think we largely agree then, other than my quibble about learning not being necessary. A lot of people want to reject the idea of machines being conscious, but I've reached the "Sure, why not?" stage. To be a useful definition though, we need to go beyond that and start asking questions like "Conscious of what?"
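
    A minimal sketch of the "compare myself to other processes" idea from this comment, assuming Python and the third-party psutil package (both are my choices for illustration; nothing here is claimed by the original comment):

    ```python
    import os
    import psutil

    my_pid = os.getpid()
    my_rss = psutil.Process(my_pid).memory_info().rss  # resident memory of "self"

    for proc in psutil.process_iter(["pid", "name"]):
        if proc.info["pid"] == my_pid:
            continue  # skip "self": the hardcoded self/not-self distinction
        try:
            other_rss = proc.memory_info().rss
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue  # processes can vanish or be off-limits mid-iteration
        relation = "more" if other_rss > my_rss else "less or equal"
        print(f"{proc.info['name']} (pid {proc.info['pid']}) uses {relation} memory than I do")
    ```

    The point of the sketch is only that "compare self to not-self" is a few lines of hardcoded logic, which is exactly why the comment asks whether that should count.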

  • What do "sense" and "perceived" mean? I think they both loop back to "aware", and the reason I point that out is that circular definitions are useless. How can you say that plants lack a sense of self and consciousness, if you can't even define those terms properly? What about crown shyness? Trees seem to be able to tell the difference between themselves and others.

    As an example of the circularity, "sense" means (using Wiktionary, but pick your favorite if you don't like it) "Any of the manners by which living beings perceive the physical world". What does "perceive" mean? "To become aware of, through the physical senses". So in your definition, "aware" loops back to "aware" (Wiktionary also has a definition of "sense" that just defines it as "awareness", for a more direct route, too).

    When I said that plants don't have thoughts, I meant it more in the "woah, dude" sense: I was pushing back on a framing without any explanatory power. But really, how do you define "thought"? I actually think Wiktionary is slightly more helpful here, in that it defines "thought" as "A representation created in the mind without the use of one's faculties of vision, sound, smell, touch, or taste". That's kind of getting at what I've commented elsewhere, with trying to come up with a more objective definition based around "world model". Basing all of these definitions on "representation" or "world model" seems as close to an objective definition as we can get.

    Of course, that brings up the question of "What is a model?" / "What does represent mean?". Is that just pushing the circularity elsewhere? I think not, if you accept a looser definition. If anything has an internal state that appears to correlate to external state, then it has a world model, and is at some level "conscious". You have to accept things that many people don't want to, such as that AI is conscious of much of the universe (albeit experienced through the narrow peephole of human-written text). I just kind of embraced that though and said "sure, why not?". Maybe it's not satisfying philosophically, but it's pragmatically useful.
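
    As a toy illustration of that loose criterion, here is a sketch where everything (the drifting "external" temperature, the thermostat-like object, the correlation check) is invented purely for this example rather than taken from the thread; the only point is that "internal state that correlates with external state" is something you can actually measure.

    ```python
    import random
    import statistics  # statistics.correlation requires Python 3.10+

    def external_temperature(t: int) -> float:
        """Stand-in for the external world: a slowly drifting, noisy temperature."""
        return 20.0 + 0.1 * t + random.gauss(0, 0.5)

    class Thermostat:
        """Holds a single piece of internal state that tracks an external reading."""
        def __init__(self) -> None:
            self.estimate = 0.0  # the internal state

        def observe(self, reading: float) -> None:
            # Exponential moving average: the internal state follows the external state.
            self.estimate = 0.8 * self.estimate + 0.2 * reading

    world, model = [], []
    thermostat = Thermostat()
    for t in range(200):
        reading = external_temperature(t)
        thermostat.observe(reading)
        world.append(reading)
        model.append(thermostat.estimate)

    # Under the loose definition above, a high correlation between the two series
    # is what would count as this object having a (tiny) "world model".
    print(statistics.correlation(model, world))
    ```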

  • The Far Side @sh.itjust.works

    2025-12-07

  • The Far Side @sh.itjust.works

    2025-12-07

  • Yeah, reflexes could be considered a conscious effort of a part of your body. Or your immune system might be considered "conscious" of a virus that it's fighting off. What's a testable definition of "conscious" that excludes those?

    I think that "conscious" is also a relative term, i.e. "Conscious of what?" A cell in your body could be said to be conscious of a few things, like its immediate environment. It's clearly not conscious of J-pop though. But to be fair to it, none of us are "really" conscious of, say, Sagittarius B2 or an organism living at the bottom of the ocean.

    The best way I've found to think about it is that consciousness is a world model. The bigger the world model, the more consciousness something could be said to have. Some world models might be smaller but contain things that bigger ones don't: worms don't understand what an airplane is, but humans also don't really understand the experience of wriggling through soil.

  • I'm not advocating for consciousness as a fundamental quality of the universe. I think that lacks explanatory power and isn't really in the realm of science. I'm kind of coming at it the opposite way and pushing for a more concrete and empirical definition of consciousness.

  • What does "aware" mean, or "knowledge"? I think those are going to be circular definitions, maybe filtered through a few other words like "comprehend" or "perceive".

    Does a plant act with deliberate intention when it starts growing from a seed?

    To be clear, my beef is more with the definition of "conscious" being useless and/or circular in most cases. I'm not saying "woah, what if plants have thoughts dude" as in the meme, but whatever definition you come up with, you have to evaluate why it does or doesn't include plants, simple animals, or AI.

  • When you say "aware of the delineation between self and not self", what do you mean by "aware"? I've found that it's often a circular definition, maybe with a few extra words thrown in to obscure the chain, like "know", "comprehend", "perceive", etc.

    Also, is a computer program that knows which process it is self-aware? If not, why? It's so simple, and yet without a concrete definition it's hard to really reject that.

    On the other extreme, are we truly self-aware? As you point out, our bodies just kind of do stuff without our knowledge. Would an alien species laugh at the idea of us being self-aware, having just faint glimmers of self-awareness compared to them, much like the computer program seems to us?

  • I don't think I'm talking about panpsychism. To me, that's just giving up and hand-waving. I'm much more interested in trying to come up with a more concrete, empirical definition. I think questions like "Well, why aren't plants conscious?" or "Why isn't an LLM conscious?" are good ways to explore the limits of any particular definition and find things it fails to explain properly.

    I don't think a rock or electron could be considered conscious, for example. Neither has an internal model of the world in any way.

  • The Far Side @sh.itjust.works

    2025-12-06

  • The Far Side @sh.itjust.works

    2025-12-06

  • It all depends on what you mean by "conscious". IMO this doesn't fall under "Maybe everything is conscious", because that wrongly assumes "conscious" is a binary property instead of a spectrum that humans and plants are both on, just at vastly different levels. Maybe I just have a much looser definition of "conscious" than most people, but why don't tropisms count as a very primitive form of consciousness?

  • The Far Side @sh.itjust.works

    2025-12-04

  • The Far Side @sh.itjust.works

    2025-12-04

  • The Far Side @sh.itjust.works

    2025-12-04

  • The Far Side @sh.itjust.works

    2025-12-04

  • The Far Side @sh.itjust.works

    2025-12-04

  • Some background on this comic:

    Transcript:

    My brother once woke up screaming in the middle of the night from a nightmare. In his dream, a wolf, with "pure, white eyes" and walking on its hind legs, was trying to get him. He was able to quickly dismiss the ordeal, but he told the story so vividly that his younger sibling (me) could never shake the image. Ironically, my brother's nightmare ended up scaring me for years. The creature on the right in this cartoon closely resembles the "wolf" as I've always pictured it.

    In bed at night, I was so scared of this and other monsters that I nearly suffocated trying to stay completely under the blankets. Any exposed skin meant certain death.

    The monster snorkel would have been a wonderful thing in my little world. (It still would be.)

  • The Far Side @sh.itjust.works

    2025-12-05

  • The Far Side @sh.itjust.works

    2025-12-05

  • The Far Side @sh.itjust.works

    2025-12-05

  • The Far Side @sh.itjust.works

    2025-12-05

  • The Far Side @sh.itjust.works

    2025-12-05

  • The big news will be that John Titor is being sent back in time to save us from the Epochalypse:

    https://en.wikipedia.org/wiki/John_Titor

    In his online postings, Titor claimed to be an American soldier from the year 2036, based in Tampa, Florida. He said that he was assigned to a governmental time-travel project, and that as part of the project he was sent back to 1975 to retrieve an IBM 5100 computer, which was needed to debug various legacy computer programs that existed in 2036
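
    For anyone who missed the reference, the "Epochalypse" is the Year 2038 problem: systems that store Unix time in a signed 32-bit integer run out of room shortly after 2038-01-19. A quick arithmetic sketch (plain Python, nothing here is from the comment or the Wikipedia article):

    ```python
    from datetime import datetime, timedelta, timezone

    EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
    INT32_MAX = 2**31 - 1  # largest value a signed 32-bit time_t can hold

    # The last representable second: 2038-01-19 03:14:07 UTC
    print(EPOCH + timedelta(seconds=INT32_MAX))

    # One tick later the counter wraps to a large negative number,
    # which lands back at 1901-12-13 20:45:52 UTC
    wrapped = (INT32_MAX + 1) - 2**32
    print(EPOCH + timedelta(seconds=wrapped))
    ```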

  • I tried feeding frozen peas to ducks in a pond near me. The peas mostly sank below the water immediately, and the ducks didn't seem to care for them anyways. A few of them came over to investigate and weren't interested after checking them out. I might've been doing it wrong, or maybe the ducks were just too used to getting fed bread.

  • Zaphod Beeblebrox's earlier years

  • The Far Side @sh.itjust.works

    2025-12-03

  • The Far Side @sh.itjust.works

    2025-12-03

  • The Far Side @sh.itjust.works

    2025-12-03

  • The Far Side @sh.itjust.works

    2025-12-03

  • The Far Side @sh.itjust.works

    2025-12-03

  • The Far Side @sh.itjust.works

    2025-12-02

  • Yeah, even accounting for perspective, the ratio seems off

  • Interestingly, he redrew this for The Far Side; it previously appeared in his earlier strip, Nature's Way.

  • Some background on this comic:

    Transcript:

    The flak over the "Tethercat" cartoon is of a sort I always find interesting. I could understand the problem if these were kids batting an animal around a pole, but that natural animosity between dogs and cats has always provided fodder for humor in various forms. In animated children's cartoons, for example, dogs and cats are constantly getting smashed into oblivion by a variety of violent means. (I'd like to know if the creators of "Tom and Jerry" got these letters. Probably, so that doesn't help me.)

    What I think I've figured out is, in animation, a cat might be flattened by a steamroller or get blown up by dynamite, but a few seconds later we see him back in business―chasing something or being chased until he's "killed" again. There's never a suggestion that the cat's suffering is anything but transitory. In a single-panel cartoon, however, no resolution is possible. The dogs play "tethercat" forever. You put the cartoon down, come back to it a few hours later, and, yep―those dogs are still playing "tethercat." I suppose some people may have appreciated a disclaimer at the bottom of the cartoon saying, "Note: A few minutes later, the cat escaped, returned with a bazooka and blew the dogs away." (Of course, now I'm on the dogs' case.)