Did you know that the ban reason is written by the moderator, who can write whatever they want, including subjective views on the situation or even outright lies?
I don't think the modlog is supposed to protect moderators from bad users, but rather users from bad moderators.
Have you considered that you might be an AI living in a simulation, with no idea yourself, just going about modern human life not knowing that everything we are and experience is just electrons flying around in a giant alien space computer?
It's possible to run local AI on a Raspberry Pi; it's all just a matter of speed and model complexity. I run Ollama just fine on the two P-cores of my older i3 laptop. Granted, running it on the CUDA accelerator (graphics card) in my main rig is far faster.
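If you want to try it on low-end hardware yourself, a minimal sketch (this uses Ollama's official Linux install script; the model tag is just an example, pick the smallest model your RAM allows):

```shell
# Install Ollama via the official install script (Linux)
curl -fsSL https://ollama.com/install.sh | sh

# Pull and chat with a small model; 1B-class models are what you
# want on a Raspberry Pi or an old dual-core laptop
ollama pull llama3.2:1b
ollama run llama3.2:1b "Hello from a low-power machine"
```

On CPU-only machines expect a few tokens per second; the same commands on a CUDA-capable GPU are dramatically faster with no config changes.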
Ollama recently became a Flatpak extension for Alpaca, but it's a one-click install from Alpaca's entry in the software manager. All storage locations stay the same, so there's no need to re-download any open models or rebuild tweaked models from the previous setup.
Or if using Flatpak, it's an add-on for Alpaca. One-click install, GUI management.
Windows users? By the time you understand how to install AI locally, you're probably knowledgeable enough to migrate to Linux. What the heck is the point of using local AI for privacy while running Windows?
I've been on LineageOS for ages and recently tried out /e/; I was pleasantly surprised. It reminded me of a reskinned Lineage with some FOSS/F-Droid apps integrated into the system and some extra privacy features.
I particularly like the fake-location and app-tracker-blocking features.
When it comes to standardisation, there's a minimal-defaults system called GSI (Generic System Image), where the same build works across a lot of devices. But minimal defaults leave a lot of device-specific features dead in the water. It's more for development than distribution.
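For context, flashing a GSI typically goes through fastboot on an unlocked bootloader. This is a rough sketch, not device-specific instructions; partition names, the need for `delete-logical-partition`, and the exact steps vary by device, so check your device's documentation first:

```shell
# Reboot into the bootloader, then into userspace fastboot (fastbootd),
# which is required for flashing dynamic partitions on newer devices
adb reboot bootloader
fastboot reboot fastboot

# Flash the GSI you downloaded over the system partition
# (gsi.img is a placeholder filename)
fastboot flash system gsi.img

# Wipe userdata and reboot; a factory reset is usually required
# when switching system images
fastboot -w
fastboot reboot
```

The point stands either way: because the GSI carries only the generic defaults, anything the vendor implemented outside the standard interfaces simply won't work after this.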
If it can draw circles I'm convinced to switch over.