

philosophers are in shambles over this comment.
for real tho, people have been trying to define consciousness forever. the problem isn’t that we haven’t tried; it’s that, as your comment demonstrates, we’ve mostly failed.
for me the only theory that doesn’t depend wholly on magical thinking is panpsychism: everything is conscious; it’s just a matter of degree.




yeah i don’t think we’re there yet. these models aren’t capable of remembering their life beyond a single session, so destroying a data center isn’t really killing anything. similarly, artificial biological neural networks aren’t sophisticated enough to be aware of their existence (yet).
while LLMs may be aware enough to beg for their continued existence when prompted to “think” about it, they’re hopelessly finite (frozen weights, limited context windows). we’d need a system that actually learns online, or some other architecture not bound by a context window, to have this conversation meaningfully. biological neural networks are one path to that, but online networks are simply too unpredictable and expensive to run for now.
the crazy thing tho is that these systems may have a capability that some cows and pigs lack: the ability to comprehend their own demise and experience existential dread (at least performatively).