Well, let me just ask you a question: how much say should an AI have in the decision to kill a human being? What percentage do you think is appropriate?
Instead of answering that, I’ll direct you to some tangential research that may help you answer it yourself. I’d like you to read a bit about different ethical frameworks (you can just wiki that one), then apply what you learn to some of the openly available policies, contracts, and practices of the company. At that point you should have your answer. Thank you in advance for doing your own research 😉
This was the original comment you made. The joke was “ajar heads”. It’s a very clever little joke, if based on a misunderstanding of the term “jarhead”. I don’t see how you could think this was some “fantasy”, friend, but maybe now I’m misunderstanding lol
What’s the book called, and is it a Langdon book?