I really think they do use AI to make some decisions; it seems like the most reasonable explanation for some of the choices. Just out of curiosity, I asked ChatGPT for plans to improve the situation in the Iran conflict. The top plan it gave was to use state-backed terrorism against ships and ports to create pressure while ensuring deniability. It described this as "hiring privateers to disrupt trade through seizing vessels or limited strikes." It's wild that AI/LLMs have gone from plagiarism machines to war crime generators.
Not that wild when you consider all of the Western "journalism" that is in their training data.