AI could system test; I'd just rather it were a different AI from the one that did the coding. The point being: if an entity can both write and test code, the same shortcomings that led it to introduce a bug make that bug less likely to be found when the same entity does the testing, whether the cause is a logic fault or just a misinterpretation of the requirements.
If the same entity is both writing the code and testing the code, that's not great. Even though the code in this instance is there to support testing, it's still being crafted by an AI that may one day also be expanded to create the tests themselves. That's the scenario I'm concerned about, and it's part of the reason we do more than just unit test.
Agreed. This project is currently about validating the test coverage/power. It's not too much of a stretch to envisage it one day actually designing and writing the tests themselves, but this is only a first step. And if it did end up both creating tests and editing the code under test, wouldn't there be an independence issue?
A great read, if very slightly outdated. It could still represent a possible future if Trump returns to power, or perhaps even if he doesn't. Either way, the Arms Control Wonk podcast is an entertaining and informative listen for anyone remotely interested in this topic.
It's RomHacking.net.