Resilient Communications in Contested Environments

By AIRCDRE Jason Begley

What does manoeuvre in the cyber domain look like? And why is it critical for future warfighting concepts? In this address by TCB’s own AIRCDRE Jason Begley (Director General Joint C4) at the recent Sir Richard Williams Foundation Conference on Enhancing the Lethality and Survivability of the Integrated Force, he unpacks why resilient communications are essential for freedom of action.


Manoeuvre. It’s the doctrinal foundation of Australian military power across all five of our warfighting domains. That shouldn’t come as a surprise to this forum, even if you don’t spend your nights curled up with a glass of red reading doctrine like I do. So whenever we talk about Defence capabilities and concepts of any kind, we need to be doing it through the lens of how they will assure our ability to manoeuvre.

Because if we’re not, then a) we’re doing it wrong, and b) we’re not really in a position to achieve a consistent understanding of what we mean when we say resilience.

Let’s take a closer look at manoeuvre. This is how our doctrine defines it. And within that, there’s some key phrases worth noting. Position of advantage. Series of actions orchestrated to a single purpose. And for the purposes of my topic today, those last few words… protecting friendly vulnerabilities.

We also need to understand the way the doctrine defines the relationship between manoeuvre and the five warfighting domains. It makes manoeuvre central by defining the domains as, “a critical manoeuvre space whose access or control is vital to the freedom of action and superiority required by the mission.”

Freedom of action. Keep that phrase in mind as we continue.

None of this should be news to this audience because manoeuvre’s been around for a long time. Coordinating your assets to mass your strengths to deliver effects against an adversary’s assessed vulnerabilities has always been a part of warfare. This was especially true of smaller forces that couldn’t rely on the brute force of attrition in the battlespace, and so needed an asymmetric advantage to prevail.

Manoeuvre’s also a concept that has leveraged technology throughout history, much of which we now take for granted in our everyday lives.

On land, manoeuvre was greatly improved by the wheel and internal combustion engines.

In the maritime domain, we’ve moved from ships and sail to carrier battle groups and submarines. The latter of those has obvious benefits in terms of asymmetric advantage through its ability to constrain an adversary’s freedom of action simply through its existence.

Meanwhile, in the air and space domains, technology set us free from the shackles of gravity, giving us reach, perspective, and the other characteristic advantages of air and space power with which you’re all too familiar.

So let’s take a look at technology and its relationship with manoeuvre in the cyber domain.

I often find that when I talk about the cyber domain, people’s minds immediately leap to cyber warfare operations, particularly offensive effects. Unfortunately for you all, that’s in ASD’s lane, not mine, so that’s something you’ll need to ask someone who works there. It’s also not the focus of what we need to get our heads around today.

About now you’re probably sick of me banging on about doctrine. But if we’re going to have a common and consistent understanding of something as complex as the cyber domain, doctrine has to be our go-to reference point.

So let me draw your attention to two key points you need to appreciate when it comes to manoeuvre in the cyber domain. First, look closely at our definition of cyber power. It doesn’t say effects, it says activities. Activities in and through – bringing us back to assuring our freedom of action in the cyber domain just the same as we would in the physical domains.

But there are some unique differences between the cyber domain and the others. Sure, it has some physical characteristics and constraints – 1s and 0s need a medium to move through, whether it’s through hard connections or the Electromagnetic Spectrum. And both of those have to live with the limitations imposed by the laws of physics. But as a terrain that we intend to operate in and through, we don’t have the same degree of geographic constraints.

This brings me to the second point. The cyber domain is one that we create ourselves. We’ve built radios, phones and networks to manoeuvre information through the domain, and we’ve always done it in a way that tries to gain us an advantage, even when we know the domain will be contested.

For example, we secure our communications through encryption and waveforms to limit their ability to be intercepted, geo-located, disrupted or exploited by adversaries. Meanwhile, we also keep finding new ways to produce more bandwidth or compress data so that we can move information around a global theatre to meet our needs, despite geography. We can build and manipulate this terrain like no other, whereas there’s no easy way to move a tank into a useful position inside an A2AD bubble.

But how do we visualise manoeuvre in the cyber domain? Here’s a generic OV-1 Googled from the web. Modern Defence Forces are full of them, but no matter where you’re from, they all share four common design elements. The first three are obvious – sensors, deciders and effectors. And those basic building blocks are the lens through which our ADF’s C4ISR Design folk in Force Integration Division see the world.

But it’s the fourth one, normally represented by the ubiquitous cloud or lightning bolt, that we’re interested in. This is the connective tissue of the cyber domain through which information must flow. Because without it, the coordinated and synchronised objective of manoeuvre simply isn’t possible.

Realistically, a sensor that can’t disseminate its intelligence product is functionally irrelevant. A decider with no access to that data lacks the situational awareness they need to make informed decisions. And so their ability to exercise command is now significantly degraded, and the synchronisation of effects we need to support Joint, Coalition and multi-agency manoeuvre simply can’t happen. Meanwhile, the effector’s ability to act now faces a greater risk of collateral effects and fratricide, because their original tasking may no longer be current, and their ability to act is now limited to their span of mission command and the battlespace intelligence and operational context they can derive from organic sensors.

So assuring our freedom of action in the cyber domain, the ability to move information where, when and to whom we need it, is central to any form of Joint, Coalition or Multi-domain operation.

So clearly, resilience is critical to warfighting of any form. But for every new effort we make to terraform the cyber domain to our advantage, our adversary is looking for ways to disrupt or deny it. Most of us grew up with fairly rudimentary PACE plans, but these simply aren’t going to cut it in a conflict whose speed is defined by the pace at which data flows from sensor to decider to effector.

This has given rise to a range of concepts, like mosaic warfare, Joint All Domain Command and Control, Overmatch, Convergence and Kill Webs. Their differences are minor because they all stem from common design DNA – meshed networking to assure maximum connectivity from sensor to decider to effector. The goal: every sensor, best shooter.

Now that’s more easily said than done, because if the conflict is going to happen at the speed of information flow, I can’t afford the time lag of operators switching settings between bearers as they implement PACE plans. Because if that’s the difference between winning and losing, automation will beat me every time.

My web of networks needs to be able to constantly scan all of its strands, both hard-wired and EMS, to pass information via the most expeditious path. In the perfect world, my operator is sitting in their cockpit or ground station, and the actual bearer over which they transmit and receive information would be invisible to them.
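That “invisible bearer” idea can be sketched as a simple path selector that, at every transmission, picks the healthiest link currently up – in effect a PACE plan executed by software rather than an operator. This is a toy illustration only; the bearer names and metrics are hypothetical, not any fielded system.

```python
from dataclasses import dataclass

@dataclass
class Bearer:
    name: str          # hypothetical bearer, e.g. "SATCOM", "HF", "fibre"
    available: bool    # is the link currently up?
    latency_ms: float  # measured latency over this strand

def select_bearer(bearers):
    """Pick the lowest-latency bearer that is currently available.

    The operator never switches settings between bearers; the
    selector re-evaluates the whole web on every transmission.
    """
    candidates = [b for b in bearers if b.available]
    if not candidates:
        raise RuntimeError("all bearers denied -- no path for traffic")
    return min(candidates, key=lambda b: b.latency_ms)

links = [
    Bearer("SATCOM", available=False, latency_ms=600.0),  # denied/jammed
    Bearer("HF", available=True, latency_ms=900.0),
    Bearer("fibre", available=True, latency_ms=20.0),
]
print(select_bearer(links).name)  # fibre carries traffic while it is up
```

If the fibre strand is then cut, the next call falls back to HF automatically – the degradation is seamless to the operator in the cockpit or ground station.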

Now this all briefs well, but we need to pull this thread a little to be sure we understand what it means in terms of cost. And I don’t mean dollars. Picture a day in the life of a piece of data based on this image of a future conflict.

My data’s born in a sensor, passed through networks to a ship via SATCOM, then from there to the jet via Link-16, at which point it and the ship both pickle off net-enabled weapons for a synchronised strike.

Sounds simple, right?

Well, let’s start with Link-16. Despite what many of my vintage believe, it’s not the Link-11 they grew up with. Load a crypto box, dial in the freq from the OPTASK Link, initialise and boom, you’re in the net and all sharing the same information. That’s history. These are sophisticated networks for which every platform is profiled based on its data needs, classification, outputs, and so on. Because those determine how often it gets a slice of the network action. A sensor passing data to a network-enabled weapon, which needs continuous updates, clearly needs more access than a tanker that’s just keen for some battlespace SA.

This requires these modern networks to be engineered, and their operators and supporting elements to be far better trained than in years gone by. It also requires facilities for network validation and testing to be appropriately equipped and accredited. None of that comes cheap.
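The slot-sharing logic behind those platform profiles can be sketched as a proportional share of a fixed frame of time slots. This is purely illustrative – real Link-16 network design assigns specific slot blocks to participation groups – but it shows why the sensor cueing a weapon gets far more network access than the tanker.

```python
def allocate_slots(profiles, slots_per_frame=128):
    """Share a fixed frame of time slots among platforms in
    proportion to their declared update-rate need.

    Illustrative sketch only: profile weights and frame size
    are hypothetical, not drawn from any real network design.
    """
    total = sum(profiles.values())
    alloc = {p: (need * slots_per_frame) // total
             for p, need in profiles.items()}
    # Hand any remainder from integer division to the neediest platform
    leftover = slots_per_frame - sum(alloc.values())
    alloc[max(profiles, key=profiles.get)] += leftover
    return alloc

needs = {"sensor->weapon": 8, "strike jet": 4, "tanker (SA only)": 1}
print(allocate_slots(needs))
```

The engineering cost the address mentions lives in working out those weights, per platform, per mission, before anyone transmits a single bit.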

Let’s also talk about the network concepts themselves, because we know our future fight won’t be one where we go it alone. The future is one in which data is the centre of things, and need-to-share is the driving force. And for anyone who’s enjoyed the NOFORN experience, that sharing can be both technically and culturally hard to achieve.

The machine speed conflict of the future means we have to achieve that same pace of information manoeuvre. Doing so requires us to pivot to data-centricity. By properly managing and tagging data with its classification, releasability and other meta-characteristics, I can share it more freely.

Sounds great in principle, right?

But it also means I need to change the way my networks are designed. Because for that data to be shared, both on the network and between networks, without the need for cross-domain gateways, translators and other denoodlers that introduce lag in my information flow, I need the network to be truly open in design, not one built to a specific level of classification or releasability.

In this construct, my individual credentials, nationality, security clearance and physical location on the network determine what I can and cannot see. On the same network, the RAF officer next to me will see only the information they are meant to – some more, some less than me. And both our pictures and available information will be very different to the Japanese officer across from us.
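That data-centric construct can be sketched as an attribute-based check: each record carries its own classification and releasability tags, and a user’s credentials are tested against the record itself rather than against the network it sits on. The level ordering, nation codes and record fields below are all hypothetical simplifications, not any official scheme.

```python
from dataclasses import dataclass

# Illustrative ordering of classification levels (not an official scheme)
LEVELS = {"UNCLASSIFIED": 0, "SECRET": 1, "TOP SECRET": 2}

@dataclass
class Record:
    track_id: str
    classification: str
    releasable_to: set  # nations this record may be shared with

@dataclass
class User:
    nation: str
    clearance: str

def visible(record, user):
    """Data-centric access check: the record's own metadata tags,
    not the host network, decide who may see it."""
    return (LEVELS[user.clearance] >= LEVELS[record.classification]
            and user.nation in record.releasable_to)

tracks = [
    Record("T001", "SECRET", {"AUS", "GBR", "JPN"}),
    Record("T002", "TOP SECRET", {"AUS", "GBR"}),
]
raf_officer = User(nation="GBR", clearance="SECRET")
print([t.track_id for t in tracks if visible(t, raf_officer)])  # ['T001']
```

The same two tracks on the same network yield a different picture for each officer – which is exactly the behaviour the cross-domain gateways and translators were built to approximate, with lag.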

This is all impressive stuff. But for every strand I add to my web to increase my communications resilience and manoeuvrability in the cyber domain, I also create another attack surface for the adversary. So while greater resilience might solve my tactical DDIL issues, it might simultaneously generate a strategic hole of Optus proportions.

So we need to think carefully about the cost of ownership that assured resilience brings. Especially for networks and technologies that have significant overheads for network engineering, integration and test labs that may go up to TS levels. For every strand we buy, we need to be able to assure it to an acceptable risk level, and as we are all discovering, cyberworthiness doesn’t come cheap in terms of workforce.

So how much is enough? And how much is more than we can assure?

If you think this is a vexed issue for us as a Defence Force, think about it from a vendor standpoint, especially those that deal with C2 and battle management systems.

How quickly will they be able to pivot from open architectures that are still network-based to the data-centric future that meets our needs for rapid information flow?

What might it cost them to change their vast suites of legacy applications to tag every track, based on how it was collected and processed and who by, with the metadata required to achieve data-centricity?

Picture the challenge for vendors that use their own proprietary data standards, albeit within an open network design. Because the future is one where data needs to flow freely, without delay, from a TS network to one where it can be shared directly with the PNGDF.

How we get there is in itself a challenge. In the past our single services have chosen their own adventure in terms of the communications systems and networks they’ve acquired.

That’s made the way forward much more complicated. We need the communications, and we need them to be resilient. And if you listen to the media, we’re on a tight timeline. This means some hard conversations about risk. Risk to the force in being against risk to the future force. Risk in physical domains against risk to resilient communications. The way forward requires deliberate choices, an objective “whole of Defence” rather than single-Service perspective, and discipline.

If we can’t achieve that, the biggest DDIL risk to the ADF will continue to be one that’s self-inflicted.

This article was published by Central Blue on October 16, 2022.