Perspectives on an ‘Artificial Intelligence, Robotics and the Future of War’ Seminar (2)

12/17/2018
By Andrew Fisher

On 24 October 2018, the Australian Defence College (ADC) hosted a Profession of Arms Seminar entitled ‘Artificial Intelligence, Robotics and the Future of War’. The seminar was well attended by personnel of various ranks from all three services, as well as from other government departments.

FLTLT Kate Yaxley and SQNLDR Andrew Fisher were two of those attendees and have generously offered their perspectives on the seminar.

They address why they are interested in learning more about AI, and why it is important for military professionals to reflect on AI and the future of war.

This assessment was provided by SQNLDR Andrew Fisher.

The joint professional military education Profession of Arms seminar ‘Artificial Intelligence, Robotics and the Future of War’, held on 24 October 2018, provided an opportunity to hear the perspectives of three US-based academics – Dr Michael Horowitz, Dr Frank Hoffman and Ms Elsa Kania – on the impact of artificial intelligence (AI) on future military operations. The afternoon session involved short presentations from a panel comprising Mr Morrie Bailes (President of the Law Council of Australia), Air Commodore Tony Forestier and Professor Michael Evans, followed by a question and answer session. In sharing my perspective, I intend to discuss some key points and themes that emerged, viewed through a people-based lens, and present the ‘so-what’ for the air and joint forces of today and tomorrow.

The first presenter, Dr Horowitz, argued that AI and robotics are enabling technologies rather than weapons. They are technologies made by people for people, and therefore their application is subject to the frailties, idiosyncrasies and biases that people possess. This is significant when examining the drivers and motivators for the development of these technologies. Dr Horowitz outlined that for smaller democracies (Israel being the prime example), these technologies offer a way to do more with less; a view that seems seductive to a ‘middle power’ such as Australia.

Alternatively, these technologies may provide a disproportionate capability to our potential adversaries – the only thing that differs in this instance is the intent of the actor. As such, the ADF must seek to understand our potential adversaries’ intent for AI in order to counter it. Australia needs to ensure that strategic intelligence analysis takes these drivers and motivators into account so that an appropriate strategy can be developed in response.

The question of motivation and intent was further built upon by Air Commodore Forestier, who highlighted that, in developing a strategy to address AI, we must recognise that Australians are not the norm; we need to understand that Australians are ‘WEIRD’ (Western, Educated, Industrialised, Rich and Democratic) and consequently possess the inherent biases that come with that. It behoves Australia to construct a strong strategy, built on a deep understanding of other people, their cultures and their strategic viewpoints.

Dr Horowitz’s second point was that whoever leads (and potentially wins) the AI race will need to dramatically restructure their doctrine (thinking), training and force structure to make the best use of AI. It is the human condition to be naturally resistant to change, so depending on generational factors, senior leaders may struggle to enact the change required to achieve a competitive advantage. This is evident within the ADF when considering the introduction of space and cyber capabilities, and the high level of organisational resistance to fundamentally restructuring our force. It is vital that the ADF ensures it has agile-minded people to provide intellectual leadership in the coming decades and take advantage of technological advances.

One of the best means we have of understanding future technology and its utilisation in the profession of arms is the work of science and speculative fiction. This was evident throughout the seminar, with many presenters using popular fiction and film to communicate complicated concepts and technology to the audience. Amid references to I, Robot, The Terminator and Minority Report, however, the more mundane applications of AI (such as a commander’s decision support tools) can become overshadowed.

A number of presenters suggested that decision support tools are the version of AI the military is most likely to adopt in the near term. While imagination is important for long-term strategic thinking, the pragmatic application of technology to assist people in their day-to-day roles is likely to be a more valuable focus.

Dr Horowitz also described a major change in the drivers behind technological innovation, tracing the history from a point in the 20th century where the military was a key driver to today, where commercial enterprise is the primary driver. For the military, this means that the proliferation of knowledge and technology is harder to control than ever before. This reality is a military security professional’s worst nightmare: industry and academia developing technology without the controls of military security. The days of being able to lock down cutting-edge technology for military application may have ended.

Such a prospect poses important questions for how the military acquires technology-based capability. How can we ensure that the technology we need is not compromised from the outset, noting that the imperative for technological development is commercial rather than military? How do we ensure that the incredibly smart people in our research institutions care about sovereign capability? Government initiatives such as the Defence Innovation Hub, established to foster innovation across industry and academia, present a number of challenges to conventional security mechanisms.

As Elsa Kania indicated, countries such as China are tackling this challenge by creating specific mechanisms and institutions to integrate and coordinate sovereign research and development across academia, industry and the military. Australia needs to catch up and wage a battle for complete supply-chain assurance, which will itself need to be enabled by AI.

Professor Michael Evans reminded the audience of the responsibilities that the profession of arms attracts. Through membership of the profession of arms, people are given legitimacy in their application of lethal force. In return, there is an unlimited liability that may require the sacrifice of one’s life. The impact of AI on the concept of unlimited liability is already being felt in air forces around the world with the proliferation of unmanned aerial systems. Defence professionals are gradually being removed from positions of risk while still being required to apply lethal force utilising AI-enabled weapons and systems.

Professor Evans posed the question of whether this will lead to a moral deskilling of the profession. A question ADF personnel must consider is whether, without this unlimited liability, we continue to have the same moral obligation to look after our people.

Mr Morrie Bailes, a lawyer by trade, contributed a valuable perspective in his presentation. As an outsider to Defence, he raised important considerations that will impact Rules of Engagement (RoE) in future conflicts.

For instance, when an AI-enabled capability provides a commander with a critical piece of information enabling them to decide whether RoE have been met before the application of lethal force, how much will they know about the algorithms that produced that information?

Should a commander or a legal officer be expected to understand the ‘thinking’ behind the technology if they will be approving the application of lethal force?

As Dr Hoffman pointed out, the AI-enabled capabilities promised by the 7th military revolution will be purely rational and calculating.

If an AI-enabled sensor provides positive identification of an enemy combatant, how much understanding will the commander have of the rationality and calculation behind a decision recommendation? The culpability will remain with a commander, a person, not the technology.

If you would like to know more about AI, Robotics and the Future of War, recordings from the event are available here.

Squadron Leader Andrew Fisher is an officer in the Royal Australian Air Force. The opinions expressed are his alone and do not reflect those of the Royal Australian Air Force, the Australian Defence Force, or the Australian Government.

This article was first published by Central Blue on November 25, 2018.