Conversational AI and Intent-Based Command: Training at the New Frontier of Command
This article introduces a three-part series by Murielle Delaporte, co-founder of Second Line of Defense and founder of Opérationnels SLDS : Soutien Logistique Défense Sécurité. She recently published a three-part series examining how conversational AI affects military training and what this dynamic implies for operational capabilities.
She produces a French defense magazine, as well as its companion website, focused on a variety of French and allied defense issues.
We will publish all three articles, translated from the original French and slightly edited.
Across allied militaries, the debate about artificial intelligence has moved very quickly from the speculative to the concrete. The question is no longer whether forces will use AI, but how deeply AI will sit inside the command relationship itself.
Conversational artificial intelligence, meaning systems built on large language models that interact in natural language, now stands at the center of that debate. These tools promise faster sense‑making, richer wargaming, and more agile force preparation in a battlespace saturated with data.
Yet they also pose a fundamental question for mission command and intent‑based leadership: can militaries embrace conversational AI without hollowing out the autonomy of subordinate commanders and the primacy of human judgment?
By focusing on training as the decisive arena, the series avoids two common traps. The first is technological determinism, the idea that the adoption of AI is a one‑way street to which commanders can only react. The second is a purely technical framing that treats conversational AI as just another decision‑support widget. Instead, Delaporte’s work situates these tools in the evolving architecture of multi‑domain operations and, more importantly, in the evolving culture of command.
From AI in systems to AI in the command relationship
For more than a decade, artificial intelligence has been integrated into military systems in relatively discrete ways: image recognition on ISR feeds, predictive maintenance algorithms, targeting support for missile defense, and so on. In such use cases, AI operates largely “under the hood,” augmenting specific functions but not directly mediating the relationship between commander, staff, and subordinate units. What is new about conversational AI is that it steps out of the background and into the cognitive loop of human decision‑makers.
Delaporte’s first article examines this shift through the notion of a “military cognitive assistant.” Built on large language models trained on vast corpora of text and operational data, conversational agents can ingest and query operational plans, doctrine, ISR summaries, and logistics databases, and then present their synthesis in the idiom of staff work: briefings, options analysis, risk matrices, and so forth. Rather than forcing officers to learn each new system’s interface and query language, these tools allow them to ask questions in natural language and receive structured responses in near real time.
In principle, this should accelerate the classic OODA loop (observe, orient, decide, act). Observation is augmented by AI‑assisted fusion of sensor data; orientation is accelerated by rapid generation of multiple courses of action; decision is supported by comparative analysis of those options; and action can be more closely synchronized across domains because the underlying data architecture is integrated from the outset. The U.S. experience with platforms such as the Maven Smart System and Palantir’s Artificial Intelligence Platform illustrates how quickly such an integrated decision‑support environment can scale once the underlying data plumbing is in place.
However, from the perspective of mission command and Commandement par intention (CPI), acceleration is not an unalloyed good. Mission command rests on a particular moral and organizational philosophy: the commander expresses intent, defines the main effort and constraints, and then grants subordinates the freedom and responsibility to exercise initiative as situations evolve. The value of this model lies not only in speed, but in resilience and adaptability: decisions are made close to the fight by leaders who understand both the commander’s purpose and the local reality.
Conversational AI can strengthen this model if it helps subordinate leaders understand the situation more quickly, explore options more thoroughly, and communicate their reasoning more precisely within the commander’s intent. But it can also begin to reshape how intent is formulated and interpreted. If staff work is increasingly mediated by an AI that comes with its own embedded assumptions about what counts as relevant data, what risk looks like, and what “typical” courses of action might be, there is a danger that human intent evolves to fit the logic of the system rather than the other way around. The first article in this series explores that tension in detail.
From “kill chains” to “kill webs”: conversational AI in the data architecture
The second article widens the lens from the decision loop of individual commanders to the broader shift from linear “kill chains” to networked “kill webs” in multi‑domain and multi‑environment operations. Western militaries have invested heavily in architectures that link sensors, effectors, and command systems across land, air, sea, space, cyberspace, and the information domain. Instead of discrete, service‑centric chains of find, fix, track, target, engage, assess, the emerging model is a mesh of actors and nodes that can be composed and recomposed dynamically as conditions change.
In this environment, data becomes the central organizing principle of operations. Collection, transport, processing, and dissemination of data are no longer supporting functions; they are the backbone of combat power. C5ISR (command, control, communications, computers, combat systems, intelligence, surveillance, and reconnaissance) evolves into a single, tightly integrated ecosystem. As Delaporte notes, this raises fundamental questions about sovereignty and control: whoever governs the orchestration platforms that weave together sensors, shooters, and models will hold a decisive lever in modern warfare.
Conversational AI sits at the interface between this complex technical architecture and the human beings responsible for using it. At one level, it is simply an ergonomic improvement: instead of navigating multiple portals and dashboards, a staff officer can ask a single agent to provide a fused picture, propose options, or simulate adversary reactions. But at another level, it changes how staffs learn to think about multi‑domain operations in the first place. When officers iteratively query a conversational system that “understands” the relationships between domains, units, and effects encoded in the data, they begin to internalize that networked logic themselves.
This is where training comes into its own. In a traditional staff exercise, the number of scenarios and options explored is constrained by time, manpower, and the limits of the simulation environment. With conversational AI, a planning team can explore a much broader space of possibilities (multiple branches and sequels, different logistical concepts, alternative uses of cyber or space effects) within the same time window. The AI does not replace the staff: it amplifies their ability to interrogate their own assumptions and to see second‑ and third‑order effects.
In theory, this makes forces more agile and better prepared for the demands of multi‑domain operations. But the very richness of this environment also creates a risk of over‑reliance. If commanders and staffs become accustomed to having a conversational assistant always present to pull data, propose options, and even red‑team their plans, how will they respond when that assistant is degraded or denied? How will they maintain a feel for the fight when the user interface shields them from the friction and ambiguity that characterize real combat? The second article explores both the promise and the limits of conversational AI as a data interface in this networked battlespace.
Training as the decisive arena: cognitive resilience versus cognitive dependence
The most original contribution of Delaporte’s work, and the central theme of this series, is the insistence that the real test of conversational AI will be in operational preparation and training rather than in operations alone. Peacetime and pre‑crisis training environments provide a relatively controlled space in which militaries can experiment with these tools, refine doctrine, and, crucially, build habits of mind that either reinforce or erode mission command.
On the positive side, conversational AI allows training centers and schools to run more iterations at lower cost. Wargames and command‑post exercises can be designed with adaptive adversaries that respond dynamically to blue‑force actions. Tactical leaders can practice decision‑making in scenarios that span multiple domains and levels of war. Logistics officers can “feel” the consequences of different sustainment choices before they are locked into real‑world plans. The same infrastructure that supports “Fight Tonight” readiness (up‑to‑date data, realistic models, integrated simulation) can thus be used to cultivate a generation of commanders and staffs who are comfortable operating in complex, data‑rich environments.
But the series also warns of a mirror image: cognitive dependence. If training relies too heavily on AI‑mediated scenarios, and if success in exercises is consistently associated with following machine‑generated recommendations, officers may internalize the wrong lesson. Instead of seeing AI as a tool to support their judgment, they may begin to see their judgment as a tool to validate AI. Over time, this can erode the confidence and creativity that mission command requires and can weaken the willingness to deviate from “the model” when reality demands it.
To counter this, allied forces are experimenting with a set of safeguards that the third article examines in depth. One is the deliberate design of degraded‑environment exercises in which digital systems are partially or completely removed, forcing commanders to re‑learn how to operate on the basis of incomplete information, improvisation, and trust in subordinate initiative.
Another is the push for more explainable AI: systems that can not only recommend a course of action but also expose the data and reasoning behind that recommendation in a way that human operators can interrogate.
A third is the development of “AI literacy” curricula at all levels of professional military education, aimed at giving officers the conceptual tools to understand how these systems work, where they are strong, and where they are brittle.
Underlying these efforts is a larger concept that Delaporte terms cognitive resilience: the capacity of commanders and forces to maintain sound decision‑making under conditions of technological flux and uncertainty. Cognitive resilience does not mean rejecting AI or attempting to preserve an idealized analog past. It means building a culture in which leaders see AI as one input among many, to be weighed, questioned, and, when necessary, discarded. It also means accepting that adversaries will target the digital backbone of allied operations and that the ability to “fight unplugged” will remain a decisive marker of military professionalism.
A doctrinal, not just technical, debate
Taken together, the three articles in this series make a simple but consequential claim. Conversational AI will not automatically strengthen intent‑based command, nor will it automatically destroy it. The outcome will depend on doctrinal choices and, above all, on how militaries use training to shape the habits and expectations of their leaders. If conversational AI is integrated as a crutch, it will erode mission command. If it is integrated as a demanding tool, one that must be challenged, validated, and sometimes ignored, it can help renew mission command for the digital age.
For defense.info, this series is offered as an invitation to shift the conversation away from slogans about “AI‑enabled warfighting” toward a more sober discussion of command, responsibility, and education. The integration of conversational AI into force preparation is not just a matter of procurement or software engineering; it is a test of whether Western militaries can adapt their command cultures without abandoning the principles that have underwritten their operational effectiveness for decades.
The articles that follow will delve into the details: the architecture of cognitive assistants, the role of conversational interfaces in multi‑domain C5ISR, the concrete design of AI‑rich and AI‑denied exercises, and the emerging debates about sovereignty over orchestration platforms and models. This introductory essay’s aim has been to frame the stakes. At the heart of the matter lies a single question: in an age of increasingly intelligent machines, who or what will truly command?
Note: We will link each article to its original French version on the Opérationnels SLDS : Soutien Logistique Défense Sécurité website.