
The Machines Are Here

AI wargaming and battle management for NATO. How three experiments show the future of human-machine teaming in combat, strategy, and command.


S L Nelson
S L Nelson has served from the tactical to strategic level as a military officer. His views are his own and do not represent the position of the US DoD.

The commander leans forward in his chair, eyes fixed on the glowing screen in front of him. Outside, Tromsø’s Arctic winds howl against the windows, rattling the glass as if to remind those inside that the High North is unforgiving. On the screen, however, the battlefield is not ice and snow – it is data. Rows of machine-generated solutions flicker across the interface, each ranked by intent, each offering a path forward. The commander exhales. In the past, he might have asked his staff for three plans. Now, the machine has given him thousands.

This is the new reality of warfare. Artificial intelligence is no longer a futuristic accessory; it is already reshaping how commanders think, decide, and fight. Three recent experiments reveal the contours of this transformation: the US Air Force’s DASH 2 wargame, Johns Hopkins Applied Physics Lab’s (APL) generative wargaming platform, and APL’s VIPR AI copilot for fighter pilots. Each offers a glimpse into how AI is not simply a tool, but a partner. Together, they suggest that the future commander will not fight alone.

Speed and Scale in Battle Management

At the Air Force’s DASH 2 exercise, the numbers were staggering. Machines produced recommendations 16 times faster than humans, generating 30 times more solutions in a single hour. Two vendors alone churned out more than 6,000 solutions for 20 problems, ranking them according to the commander’s intent and the evolving situation.

The experiment was not about replacing human judgment. It was about relieving cognitive overload. Coders built an AI microservice for “match effectors” – the subfunction that decides which weapons system is best suited to destroy a given target. Instead of a commander juggling dozens of variables under pressure, the AI ingested battlefield data and produced a ranked list of options.
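The article does not describe the microservice’s internals. As a purely hypothetical illustration of the idea – scoring candidate weapon-target pairings against a commander’s-intent weighting and returning a ranked list – a “match effectors” step might look something like this (all names, weights, and probabilities are invented for the sketch, not details of the DASH 2 system):

```python
# Hypothetical sketch of a "match effectors" ranking step.
# Effector data, weights, and the scoring formula are illustrative
# assumptions, not details of the actual DASH 2 microservice.

def score_pairing(effector, target, intent_weights):
    """Weighted score of one effector-target pairing."""
    in_range = 1.0 if effector["range_km"] >= target["distance_km"] else 0.0
    return (
        intent_weights["kill_probability"] * effector["pk"][target["type"]]
        + intent_weights["range"] * in_range
        - intent_weights["cost"] * effector["cost"]
    )

def match_effectors(effectors, target, intent_weights):
    """Return effectors ranked best-first for a given target."""
    return sorted(
        effectors,
        key=lambda e: score_pairing(e, target, intent_weights),
        reverse=True,
    )

effectors = [
    {"name": "SAM battery", "pk": {"aircraft": 0.8, "ship": 0.1}, "range_km": 120, "cost": 0.7},
    {"name": "Fighter CAP", "pk": {"aircraft": 0.9, "ship": 0.3}, "range_km": 400, "cost": 0.9},
    {"name": "Coastal gun", "pk": {"aircraft": 0.1, "ship": 0.6}, "range_km": 30, "cost": 0.2},
]
target = {"type": "aircraft", "distance_km": 150}
intent = {"kill_probability": 1.0, "range": 0.5, "cost": 0.3}

ranked = match_effectors(effectors, target, intent)
```

The point of the sketch is the shape of the output: a ranked list the commander can scan in seconds, rather than raw variables he must weigh himself.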

What is striking is not only the speed, but the quality. Error rates were comparable to human performance, despite the code being written in just two weeks. One algorithm tweak could have boosted solution validity from 70 per cent to 90 per cent. In other words, the machines are not just fast – they are fast and accurate.

For commanders in the High North, where Russian submarines lurk beneath thinning ice and Chinese research vessels double as surveillance platforms, this kind of machine-speed battle management could be decisive. The Arctic is vast, unforgiving, and data-rich. Human cognition alone cannot keep pace. But human cognition can be augmented, as Johns Hopkins APL demonstrates with generative wargaming that derives an optimized course of action from thousands of simulated runs.

Generative Wargaming: Thousands of Scenarios, Not Just Three

At Johns Hopkins Applied Physics Lab, the vision is even more radical. Instead of months-long planning cycles, the lab’s generative wargaming platform promises thousands of repetitions on a desktop computer (JHU APL).

Traditionally, a commander might ask officers to produce three plans, then “wargame out” the options. AI can play that role – but over thousands of variations instead of a handful. The system interprets player actions, human or machine, and analyzes motivations. It is not just about outcomes but about understanding why decisions were made.
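The mechanics of APL’s platform are not public, but the underlying idea – replaying a scenario thousands of times per candidate plan and comparing outcomes – can be sketched with a toy Monte Carlo loop. The scenario model, plan names, and probabilities below are invented for illustration:

```python
# Hypothetical sketch of "thousands of repetitions" wargaming: replay a
# toy scenario many times per candidate plan and compare win rates.
# The scenario model and probabilities are invented for illustration.
import random

def play_once(plan, rng):
    """One randomized playthrough; True means a favorable outcome."""
    detected = rng.random() < plan["detect_prob"]
    if not detected:
        return False
    return rng.random() < plan["engage_prob"]

def evaluate(plans, repetitions=5000, seed=0):
    """Run each plan through many repetitions and tally win rates."""
    rng = random.Random(seed)
    results = {}
    for name, plan in plans.items():
        wins = sum(play_once(plan, rng) for _ in range(repetitions))
        results[name] = wins / repetitions
    return results

plans = {
    "forward patrol": {"detect_prob": 0.9, "engage_prob": 0.5},
    "standoff shadow": {"detect_prob": 0.6, "engage_prob": 0.8},
}
win_rates = evaluate(plans)
```

Even this toy version shows why scale matters: a handful of tabletop runs can mislead, while thousands of repetitions expose how often each plan actually succeeds.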

This is where explainability becomes critical. As Bob Chalmers, who leads the Algorithmic Warfare Analysis Section at APL, put it:

Explainability will be a critical part of that new relationship, to give our future commanders the appropriate confidence in their new AI subordinates that their suggestions are rooted in sound reasoning and assumptions.

The implications are profound. Imagine a Norwegian commander in Tromsø running thousands of Arctic scenarios overnight, each exploring different Russian responses to NATO patrols. Instead of waiting months for a full-scale exercise, the commander wakes up to a library of strategies, each annotated with the reasoning behind them.

This is not science fiction. It is happening now. And it raises a question: will commanders trust the machine’s logic when it diverges from human instinct? Another APL experiment, VIPR, just might provide enough information to understand the reasoning of wargamers, algorithms, pilots, and machines.

VIPR: The AI Copilot in the Cockpit

The third experiment takes AI from the war room to the cockpit. APL’s VIPR (Virtual Intelligent Peer-Reasoning agent) is a “software wingman” designed to augment fighter pilots. The software provides insight into pilot decisions, much as APL’s generative wargaming platform lets participants assess the reasoning behind strategies and decisions.

VIPR tracks pilot intent, predicts adversary maneuvers using graph neural networks, and provides tactical advice at machine speed. It fills blind spots, ensures alignment, and offers reinforcement learning-driven insights in real time. Early simulations show pilots survive longer and perform better with VIPR assistance.

This is human-machine teaming at its sharpest edge. The AI does not replace the pilot. It becomes a trusted teammate, one that can process data faster than human cognition but still align with human intent.

For Arctic air patrols, where Russian bombers test NATO air defenses and weather conditions can turn lethal in minutes, such a copilot could mean the difference between survival and catastrophe.

The Larger Picture

Taken together, these experiments reveal a pattern: AI accelerates decision-making, expands strategic horizons, and augments human operators. The machines are not replacing humans. They are extending them.

But the cultural challenge is as significant as the technical one. Commanders and pilots must learn to trust machines that think differently. They must accept recommendations that may feel counterintuitive yet are rooted in sound reasoning. They must integrate AI without losing human judgment.

The High North is a fitting scenario. It is a region in flux: melting ice, shifting alliances, rising competition. Just as the geography is changing, so too is the cognitive terrain of warfare. The commanders of tomorrow will not only navigate complex currents in Arctic waters. They will navigate the multi-faceted partnership between human intuition and machine courses of action.

The Next Picture

The machines are here. They think faster, generate more, and explain themselves better than ever before. In Tromsø, in Washington, in the skies above the Arctic, the future battlespace is being rehearsed – not with tanks and ships alone, but with algorithms and copilots.

The defense community must now grapple with the implications: how to integrate AI without losing human judgment, how to build trust in machine reasoning, and how to ensure adversaries do not outpace NATO in this race.

The side that learns to fight with machines, not against them, will own the future battlespace. And that future is arriving faster than the ice can melt.

This article was originally published by RealClearDefense and made available via RealClearWire.
