Will AI, left to its own devices, obliterate us if given the chance? From a military perspective, this is already beginning to materialise. Whilst international conventions and rules of engagement exist to constrain military conduct, actual conflicts demonstrate that not all parties adhere to these frameworks equally.

Current systems are already capable of identifying and engaging targets autonomously. This is particularly evident in cyber operations, where AI-driven disinformation campaigns, online disruption and distributed denial-of-service (DDoS) attacks are already occurring. These capabilities are expected to progress along a sliding scale: from nuisance to critical threat, demanding increasingly active countermeasures.

Rather than a sudden, catastrophic takeover by autonomous systems, the trajectory is likely to be incremental. This creates a cycle of escalation in which human operators cede ever more control to AI-augmented systems simply to maintain parity with the adversary. In such a scenario, whilst it may appear that AI has assumed control, the reality is that humans have deliberately enabled AI autonomy as a necessary response to competitive pressures.

A significant source of uncertainty for me stems from my experience when the first GPT model was released: I was genuinely shocked by its capabilities. That leap surprised me so profoundly that I find it difficult to imagine what the next breakthrough might bring, even though we have since grown accustomed to the new baseline of capability.