Friend, I'm going to be blunt: I suspect you spent time creating this with help from an LLM, and it told you too much of what you wanted to hear, because that's literally what these models are trained to do.
As an example: "relativistic coherence"? Computational cycles, SHA512 checksums, bit flips, and prime instances? You are mixing modern technical terms with highly speculative, theoretical concepts in a way that just isn't compatible.
And the text, from what I can parse, is similar. It mixes a lot of contemporary "anthropic" concepts (money, the 24-hour day, and so on), terms that loosely apply to text LLMs, and a few highly speculative concepts that may or may not even apply to the future.
If you are concerned about AI safety, I think you should split your attention between the contemporary, concrete systems we have now and the more abstract, philosophical research that was going on well before the LLM craze started, rather than mixing the two together.
Look into what local LLM tweakers are doing with, for instance, alignment datasets, experiments on "raw" pretrains, or more cutting-edge abliteration tools like https://github.com/p-e-w/heretic (rough sketch of the idea below).
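If you want a feel for what abliteration actually does, here's a rough sketch of the core idea in PyTorch. To be clear, this is a toy illustration with made-up helper names and random tensors, not heretic's actual code: you estimate a "refusal direction" from the difference in residual-stream activations between refused and accepted prompts, then project that direction out of the relevant weight matrices.

```python
# Toy sketch of "abliteration" (directional ablation of a refusal
# direction). Illustrative only: the prompt activations here are
# random stand-ins for activations you'd collect from a hooked
# forward pass over curated harmful/harmless prompt sets.

import torch

def refusal_direction(h_refused: torch.Tensor, h_accepted: torch.Tensor) -> torch.Tensor:
    """Estimate the refusal direction as the difference of mean
    residual-stream activations between the two prompt sets.
    Inputs are (n_prompts, d_model) tensors from some chosen layer."""
    direction = h_refused.mean(dim=0) - h_accepted.mean(dim=0)
    return direction / direction.norm()  # unit vector

def ablate_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Rank-1 update W' = W - d d^T W, so the layer's output can no
    longer write anything along d. weight is (d_model, d_in), with
    its output living in the residual stream."""
    d = direction.unsqueeze(1)          # (d_model, 1)
    return weight - d @ (d.T @ weight)  # project d out of the column space

# Toy usage with random tensors standing in for real activations:
d_model, d_in, n = 64, 64, 32
h_bad, h_good = torch.randn(n, d_model), torch.randn(n, d_model)
d = refusal_direction(h_bad, h_good)
W = torch.randn(d_model, d_in)
W_ablated = ablate_direction(W, d)
# Sanity check: the ablated weights output (near-)zero along d.
print((d @ W_ablated).abs().max())  # ~0 up to float error
```

Real tools apply this per layer, on activations hooked out of an actual model, but the linear algebra really is that simple, which is exactly why it's a good concrete entry point.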
In other words, look at the concrete, at how actual safety mechanisms can be applied now. Outlines like yours are interesting, but they can't actually be applied or enforced.
And on the philosophical side, basically ignore any institute or effort started after 2021, when all the "Tech Bro" hype and modern LLMs muddied the waters. There was plenty of safety research going on before then, and many existing documents and ideas already cover what you're getting at in your outlines: https://en.wikipedia.org/wiki/AI_safety