Yeah, right now you have to know what's possible and nudge the AI in the right direction, toward the approach you consider correct, if you want it to do things in an optimized way.
You are in a way correct. If you keep sending the context of the "conversation" (staying in the same chat), it will reinforce its previous implementation.
The way AIs "remember" stuff is that you give them the entire thread of context together with your new question. It's all just text in, text out.
But once you start a new conversation (meaning you don't give it any previous chat history), it's essentially a "new" AI which doesn't know anything about your project.
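To make that concrete, here's a minimal sketch of how a client talks to a chat-style LLM API. The function and message shapes are illustrative, not any real SDK: the point is that "memory" is just the client resending the whole thread every turn.

```python
# Sketch of LLM "memory": the client resends the full history each turn.
# build_prompt is a hypothetical helper, not a real SDK call.

def build_prompt(history, new_question):
    """The model never remembers anything between calls; every request
    carries the entire conversation as one block of text."""
    return history + [{"role": "user", "content": new_question}]

history = [
    {"role": "user", "content": "Write a sort function."},
    {"role": "assistant", "content": "def sort(xs): return sorted(xs)"},
]

# Turn 2: the previous implementation rides along in the context,
# which is why the model tends to reinforce it.
prompt = build_prompt(history, "Can you optimize it?")
print(len(prompt))  # 3 messages: the whole thread plus the new question

# A "new conversation" is just an empty history -- the model starts
# from scratch, knowing nothing about your project.
fresh = build_prompt([], "Review this sort function for mistakes.")
print(len(fresh))  # 1 message
```

So "starting a new chat" doesn't reset anything on the server; it just means the next request ships with an empty history.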
This will sample with a new random seed, and if you ask it to look for mistakes it will happily tell you that the last implementation was all wrong and here's how to fix it.
It's like a Minecraft world: the same seed gets you the same map every time. With AIs it's the same thing, ish. Start a new conversation or ask a different model (GPT, Google, Claude, etc.) and it will do things in a new way.
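The seed analogy can be shown with plain pseudorandom numbers. This is a simplification (real LLM sampling also involves temperature and model weights), but the principle is the same: identical seed, identical sequence; new seed, new sequence.

```python
import random

# Minecraft-style seeding: the same seed reproduces the same "map"
# (here, the same sequence of random draws) every single time.
def sample(seed, n=5):
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(n)]

assert sample(42) == sample(42)  # same seed -> same output, always
assert sample(42) != sample(7)   # new seed -> a different "world"
```

A fresh conversation is effectively a re-roll of that seed, which is why the same model can confidently contradict its earlier answer.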
I'm just not following the mindset of "get AI to code your whole program" and then have real people maintain it. Sounds counterproductive.
I think you need to write your code for an AI to maintain. Use static code analyzers like SonarQube to ensure that the code is maintainable (low cognitive complexity) and that functions are small and well defined as you write it.
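For reference, a minimal SonarQube setup is just a `sonar-project.properties` file at the repo root; the project key and paths below are placeholders for your own project.

```properties
# sonar-project.properties -- minimal sketch; values are examples
sonar.projectKey=my-project        # placeholder key, use your own
sonar.sources=src                  # where your code lives
sonar.tests=tests                  # optional: test directory
```

Cognitive-complexity thresholds and per-function size rules are then configured in the SonarQube quality profile, so the analyzer flags functions that would be hard for either a human or an AI to maintain.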
It's also not controlled or regulated, which is by design. But it carries massive issues: scams, casino-like structures, and no protections for the weak.
Are there security issues reported? It's open source.