That's on me tbh. I have a script that munges the image and text together into a single image (since it's plain text on the website), and the library I'm using occasionally lays out text in a weird way like this. I've been meaning to fix that, and the fact that it doesn't handle italicized text properly.
Recently, some malicious users started using an exploit where they would post rule-violating content and then delete the account. This would prevent admins and mods from viewing the user profile to find other posts, and would also prevent federation of ban actions.
The new release fixes these problems. Thanks to @flamingos-cant for contributing to the fix.
Now, here's an idea that just plain and simple didn't work. (Of course, it has plenty of company in that regard.)
I was thinking about Western films and that common scene of some guy getting thrown out the swinging doors and into the street. In this case, every customer in the place is either running or being thrown out―implying that there's a pretty tough and angry character somewhere inside. And how tough a guy is this mystery person? Well, that's his bear parked outside. It's confusing, obtuse, esoteric, and strange―in other words, it's a Far Side cartoon.
Well, it seems kind of absurd, but why doesn't a thermometer have a world model? Taken as a system, it's "conscious" of the temperature.
If you scale up enough mechanical feedback loops or if/then statements, why don't you get something you can call "conscious"?
The distinction you're making between online and offline seems to be orthogonal. Would an alien species much more complex than us laugh and say "Of course humans are entirely reactive, not capable of true thought. All their short lives are spent reacting to input, some of it just takes longer to process than other input"? Conversely, if a pile of if/then statements is complex enough that it appears to be decoupled from immediate sensory input, like a busy beaver program, is that good enough?
Put another way, try to have a truly novel thought, unrelated to the total input you've received in your life. Are you just reactive?
I think pointing out the circular definition is important, because even in this comment, you've said "To be aware of the difference between self means to be able to [be aware of] stimuli originating from the self, [be aware of] stimuli not from the self, ...". Sure, but that doesn't provide a useful framework IMO.
For qualia, I'm not concerned about the complexity of the human brain, or different neural structures. It might be hard with our current knowledge and technology, but that's just a skill issue. I think it's likely that at some point, humankind will be able to compare two brains with different neural structures, or even wildly different substrates like human brain vs animal, alien, AI, whatever. We'll have a coherent way of comparing representations across those and deciding if they're equivalent, and that's good enough for me.
I think we agree on LLMs and chess engines: they don't learn as you use them. I've worked with both under the hood, and my point is exactly that: they're a good demonstration that awareness (i.e. to me, having a world model) and learning are related but different.
Anyways, I'm interested in hearing more about your project if it's publicly available somewhere.
I think the definition of consciousness meaning "internal state that observably correlates to external state" would clarify here. Gravel wouldn't be conscious, because it has no internal state that we can point to and say it correlates to external state. Galaxies/the universe don't either, as far as we can tell. Galaxies don't have internal state that represents e.g. other galaxies, unless you count the humans inside them, but it would be more proper IMO to limit the definition to the minimum amount of state possible. You don't count the galaxy as having internal state that represents external state if you can limit that definition to one tiny, self-contained part of the galaxy, i.e. a human brain.
I made another comment pointing this out for a similar definition, but OK so awareness is being able to "recognize", and recognize in turn means "To realize or discover the nature of something" (using Wiktionary, but pick your favorite dictionary), and "realize" means "To become aware of or understand", completing the loop. I point that out, because IMO the circularity means the whole thing is useless from an empirical perspective and should be discarded. I also think qualia is just philosophical navel-gazing for what it's worth, much like common definitions of "awareness". I think it's perfectly possible in theory to read someone's brain to see how something is represented and then twiddle someone else's brain in the same way to cause the same experience, or compare the two to see if they're equivalent.
As far as a computer process recognizing itself, it certainly can compare itself to other processes. It can e.g. iterate through the list of processes and kill everything that isn't itself. It can look at processes and say "this other process consumes more memory than I do". It's super primitive and hardcoded, but why doesn't that count? I also think learning is separate but related. If we take the definition of "consciousness" as a world model or representation, learning is simply how you expand that world model based on input. Something can have a world model without any ability to learn, such as a chess engine. It models chess very well and better than humans, but is incapable of learning anything else, i.e. expanding its world model beyond chess.
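To make the process example concrete, here's a toy sketch of what I mean (Python; it assumes the third-party psutil library, and it only prints instead of actually killing anything):

```python
import os
import psutil  # third-party: pip install psutil

me = psutil.Process(os.getpid())
my_rss = me.memory_info().rss

for proc in psutil.process_iter(["pid", "name", "memory_info"]):
    if proc.info["pid"] == me.pid:
        continue  # the one process it never treats as "other": itself
    mem = proc.info["memory_info"]
    if mem is not None and mem.rss > my_rss:
        # A hardcoded, primitive "that process uses more memory than I do"
        print(f"{proc.info['name']} ({proc.info['pid']}) uses more memory than I do")
```

Nothing about that loop implies consciousness in the usual sense, of course; the point is just that "treats itself differently from everything else" is trivially achievable in code.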
If you created a computer program capable of learning patterns in the behavior of its own process(es) and learning how those behaviors are similar/dissimilar or connected to those of other processes, then yes, I’d say your program is capable of consciousness. But just adding the ability to detect its process ID is like adding another built-in sense; it doesn’t create conscious self-awareness.
I think we largely agree then, other than my quibble about learning not being necessary. A lot of people want to reject the idea of machines being conscious, but I've reached the "Sure, why not?" stage. To be a useful definition though, we need to go beyond that and start asking questions like "Conscious of what?"
What do "sense" and "perceived" mean? I think they both loop back to "aware", and the reason I point that out is that circular definitions are useless. How can you say that plants lack a sense of self and consciousness, if you can't even define those terms properly? What about crown shyness? Trees seem to be able to tell the difference between themselves and others.
As an example of the circularity, "sense" means (using Wiktionary, but pick your favorite if you don't like it) "Any of the manners by which living beings perceive the physical world". What does "perceive" mean? "To become aware of, through the physical senses". So in your definition, "aware" loops back to "aware" (Wiktionary also has a definition of "sense" that just defines it as "awareness", for a more direct route, too).
I meant that plants don't have thoughts more in the "woah, dude" sense, i.e. pushing back on something without any explanatory power. But really, how do you define "thought"? I actually think Wiktionary is slightly more helpful here, in that it defines "thought" as "A representation created in the mind without the use of one's faculties of vision, sound, smell, touch, or taste". That's kind of getting to what I've commented elsewhere, with trying to come up with a more objective definition based around "world model". Basing all of these definitions on "representation" or "world model" seems to be as close to an objective definition as we can get.
Of course, that brings up the question of "What is a model?" / "What does represent mean?". Is that just pushing the circularity elsewhere? I think not, if you accept a looser definition. If anything has an internal state that appears to correlate to external state, then it has a world model, and is at some level "conscious". You have to accept things that many people don't want to, such as that AI is conscious of much of the universe (albeit experienced through the narrow peephole of human-written text). I just kind of embraced that though and said "sure, why not?". Maybe it's not satisfying philosophically, but it's pragmatically useful.
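To make that looser definition concrete, here's a toy sketch (Python; the external state is just a simulated sensor reading, so everything here is made up for illustration):

```python
import random

def outside_temperature():
    """Stand-in for external state, e.g. the actual air temperature."""
    return 20.0 + random.gauss(0.0, 2.0)

class Thermostat:
    """Carries one piece of internal state that tracks external state."""

    def __init__(self):
        self.believed_temperature = None  # its entire "world model"

    def observe(self):
        # After this, the internal state correlates with the external state,
        # which is all the loose definition asks for.
        self.believed_temperature = outside_temperature()

t = Thermostat()
t.observe()
print(t.believed_temperature)
```

Under this definition it's "conscious" of almost nothing, but it's on the spectrum, just at the very bottom.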
Yeah, reflexes could be considered a conscious effort of a part of your body. Or your immune system might be considered "conscious" of a virus that it's fighting off. What's a testable definition of "conscious" that excludes those?
I think that "conscious" is also a relative term, i.e. "Conscious of what?" A cell in your body could be said to be conscious of a few things, like its immediate environment. It's clearly not conscious of J-pop though. But to be fair to it, none of us are "really" conscious of say Sagittarius B2 or an organism living at the bottom of the ocean.
The best way I've found to think about it is that consciousness can be thought of as a world model. The bigger the world model, the more consciousness it could be said to have. Some world models might be smaller but contain things that bigger ones don't, though. Worms don't understand what an airplane is, but humans also don't really understand the experience of wriggling through soil.
I'm not advocating for consciousness as a fundamental quality of the universe. I think that lacks explanatory power and isn't really in the realm of science. I'm kind of coming at it the opposite way and pushing for a more concrete and empirical definition of consciousness.
What does "aware" mean, or "knowledge"? I think those are going to be circular definitions, maybe filtered through a few other words like "comprehend" or "perceive".
Does a plant act with deliberate intention when it starts growing from a seed?
To be clear, my beef is more with the definition of "conscious" being useless and/or circular in most cases. I'm not saying "woah, what if plants have thoughts dude" as in the meme, but whatever definition you come up with, you have to evaluate why it does or doesn't include plants, simple animals, or AI.
When you say "aware of the delineation between self and not self", what do you mean by "aware"? I've found that it's often a circular definition, maybe with a few extra words thrown in to obscure the chain, like "know", "comprehend", "perceive", etc.
Also, is a computer program that knows which process it is self-aware? If not, why? It's so simple, and yet without a concrete definition it's hard to really reject that.
On the other extreme, are we truly self-aware? As you point out, our bodies just kind of do stuff without our knowledge. Would an alien species laugh at the idea of us being self-aware, having just faint glimmers of self-awareness compared to them, much like the computer program seems to us?
I don't think I'm talking about panpsychism. To me, that's just giving up and hand wavey. I'm much more interested in trying to come up with a more concrete, empirical definition. I think questions like "Well, why aren't plants conscious" or "Why isn't an LLM conscious" are good ways to explore the limits of any particular definition and find things it fails to explain properly.
I don't think a rock or electron could be considered conscious, for example. Neither has an internal model of the world in any way.
It all depends on what you mean by "conscious". IMO that doesn't fall under "maybe everything is conscious", because that wrongly assumes "conscious" is a binary property instead of a spectrum that humans and plants are both on, just at vastly different levels. Maybe I just have a much looser definition of "conscious" than most people, but why don't tropisms count as a very primitive form of consciousness?
My brother once woke up screaming in the middle of the night from a nightmare. In his dream, a wolf, with "pure, white eyes" and walking on its hind legs, was trying to get him. He was able to quickly dismiss the ordeal, but he told the story so vividly that his younger sibling (me) could never shake the image. Ironically, my brother's nightmare ended up scaring me for years. The creature on the right in this cartoon closely resembles the "wolf" as I've always pictured it.
In bed at night, I was so scared of this and other monsters that I nearly suffocated trying to stay completely under the blankets. Any exposed skin meant certain death.
The monster snorkel would have been a wonderful thing in my little world. (It still would be.)
In his online postings, Titor claimed to be an American soldier from the year 2036, based in Tampa, Florida. He said that he was assigned to a governmental time-travel project, and that as part of the project he was sent back to 1975 to retrieve an IBM 5100 computer, which was needed to debug various legacy computer programs that existed in 2036.
I tried feeding frozen peas to ducks in a pond near me. The peas mostly sank below the water immediately, and the ducks didn't seem to care for them anyways. A few of them came over to investigate but weren't interested after checking them out. I might've been doing it wrong, or maybe the ducks were just too used to getting fed bread.