

I have a hard time considering something that has an immutable state as sentient, but since there’s no real definition of sentience, that’s a personal decision.
Technical challenges aside, there’s no explicit reason that LLMs can’t do self-reinforcement of their own models.
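To be clear, by "self-reinforcement" I just mean a loop like the one below: generate candidates, score them, keep the best, fine-tune on it. A minimal Python sketch, purely illustrative; generate, score, and fine_tune are made-up placeholder stubs, not any real library's API.

```python
import random

def generate(model, prompt):
    # Placeholder: pretend the model produces a few candidate answers.
    return [f"{prompt} -> answer {i} (model v{model['version']})" for i in range(4)]

def score(answer):
    # Placeholder reward signal (could be a critic model, unit tests, human feedback...).
    return random.random()

def fine_tune(model, examples):
    # Placeholder weight update: in reality this would be a training step on `examples`.
    return {"version": model["version"] + 1, "data_seen": model["data_seen"] + len(examples)}

model = {"version": 0, "data_seen": 0}
for step in range(3):
    candidates = generate(model, "some prompt")
    best = max(candidates, key=score)   # keep only the highest-scoring output
    model = fine_tune(model, [best])    # train on the model's own best output
    print(f"step {step}: {model}")
```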
I think animal brains are also "fairly" deterministic, but their behaviour also depends on the presence of various neurotransmitters, which adds a temporal/contextual element: situationally, our emotions can affect our thoughts, and LLMs don't really have that either.
I guess it'd be possible to forward-feed an "emotional state" as part of the LLM's context to emulate that sort of animal-brain behaviour.
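Something like the sketch below, where a little "mood" state decays each turn and gets prepended to every prompt as plain text. Again just a rough Python illustration under my own assumptions; call_llm, update_mood, and the mood heuristic are hypothetical, not an actual API.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for an actual model call (OpenAI, llama.cpp, whatever).
    return f"(model reply to: {prompt[:60]}...)"

# Crude "neurotransmitter" levels that decay a bit each turn.
mood = {"arousal": 0.5, "valence": 0.5}

def update_mood(user_msg: str) -> None:
    # Toy heuristic: exclamation marks raise arousal, the word "thanks" raises valence.
    mood["arousal"] = min(1.0, 0.9 * mood["arousal"] + 0.1 * user_msg.count("!"))
    mood["valence"] = min(1.0, 0.9 * mood["valence"] + (0.2 if "thanks" in user_msg.lower() else 0.0))

def chat(user_msg: str) -> str:
    update_mood(user_msg)
    # The "emotional state" is just text forward-fed into the context.
    state = f"[internal state: arousal={mood['arousal']:.2f}, valence={mood['valence']:.2f}]"
    return call_llm(f"{state}\nUser: {user_msg}\nAssistant:")

print(chat("This is broken!!"))
print(chat("Ah, that fixed it, thanks."))
```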
I know she’ll be fine, but I’d also prefer that future career politicians go study at places that still have a shred of integrity left.