I'll sit out the vote itself, because I don't have enough personal interaction with Reddit to form an unbiased view of their workings.
That being said, I'm pretty cautious about the idea of defederation; if the issue primarily stems from those communities, I'd prefer targeted bans. Otherwise we're just shutting ourselves out as dissenting voices, which only leads to more of an echo-chamber effect on feddit.
Don't hide from those who need confronting.
I'm going to throw my own thoughts in on this. I got into machine learning around 2015, back when ReLU activations were still a bleeding-edge innovation, and got out around 2020 for honestly pretty similar reasons.
Emotions can be and have been used as optimisation targets. Engagement is an ever-present target. And in the framework of capitalism, one optimisation target rules above all others: alignment with continued use. It's part of what leads to the bootlicking LLM phenomenon. For the average human, it drives future engagement.
The real danger isn't the newer language models, or anything really to do with neural-net architecture; rather, it's the fact that we've found that a simple function-minimisation strategy can be used to approximate otherwise intractable functions. The deeper you dig, the clearer it becomes that any arbitrary objective can be optimised, given a suitable function approximator and enough data to fit the approximator accurately.
Human minds are also universal function approximators.
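To make that concrete, here's a toy sketch of what I mean (entirely my own illustration, not from any particular paper or library): a one-hidden-layer net fitted to an arbitrary 1-D target purely by gradient descent on a mean-squared-error loss. The target function, layer width, and learning rate are all made up for the example.

```python
# Toy sketch: fit a small neural net to an arbitrary 1-D function
# by doing nothing more than minimising a mean-squared-error loss
# with plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# "Arbitrary" target we pretend we can't write down directly.
def target(x):
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

# Training data: the only thing the optimiser ever sees.
x = rng.uniform(-2, 2, size=(256, 1))
y = target(x)

# One hidden layer of tanh units -- a universal function approximator
# in the classic sense, given enough units.
H = 64
W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1 / np.sqrt(H), (H, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                      # dLoss/dpred (up to a constant)

    # Backward pass: plain chain rule, nothing architecture-specific.
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)

    # Gradient descent: the entire "strategy" is this update rule.
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("final MSE:", float(np.mean(err ** 2)))
```

Nothing in that loop cares what the target actually is; swap the toy function for an engagement metric and the same machinery applies.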