Perspectives on an ‘Artificial Intelligence, Robotics and the Future of War’ Seminar

By Kate Yaxley

On 24 October 2018, the Australian Defence College (ADC) hosted a Profession of Arms Seminar entitled ‘Artificial Intelligence, Robotics and the Future of War’. The seminar was well attended by personnel from all three services, a range of ranks and other government departments. Kate Yaxley and Andrew Fisher were two of those attendees and have generously offered their perspectives on the Seminar. They address why they are interested in learning more about AI, as well as why it is important for military professionals to be reflecting on AI and the future of war. This week we will hear from Kate Yaxley.

On 24 October 2018, I had the opportunity to attend the 2018 Profession of Arms Seminar on Artificial Intelligence (AI), robotics and the future of warfare. The seminar focused primarily on AI, which can be classified into two types: narrow and general. Narrow AI refers to the implementation of AI to complete specific tasks, while general AI refers to systems that are able to achieve tasks autonomously. My interest in attending was to gain further insight into general AI and the role it will play in the profession of arms, with the issue of trust as my main focus.

A common theme among the presentations of Dr Michael Horowitz, Dr Frank Hoffman and Ms Elsa B Kania was that the implementation of both narrow and general AI will be influenced by the culture and political views of the society developing the technology. Dr Horowitz illustrated this well by arguing that a democratic society, which places trust in its people, would likely develop AI that is integrated into that society.

In contrast, an autocratic society does not display trust in its people and would therefore likely develop AI to control the population further, or exert force using AI to expand this control. Of note, this comparison is not unlike Michael Walzer’s premise that militaries apply measured violence in a manner reflective of the moral views of the society they represent (Walzer, 1977).

While the prospect of an autocratic society developing killer bots was certainly discussed by both Dr Hoffman and Ms Kania, it should not be considered a motivator for engaging in a second Cold War. The development of both narrow and general AI should be done in a thoughtful and considered way to ensure the next evolution of technology works in conjunction with society and not against it. Further, limiting the application of general AI until cyberworthiness is guaranteed limits the ability to build trust in these systems.

As Air Commodore Anthony Forestier discussed, the development of general AI could be considered the development of a synthetic life form, capable of cognitive thought and contribution to society; yet he also challenged whether humanity should pursue such ambitions. Through the considered development of AI and robotics, specifically the application of professional and social ethics to innovation, technological development and the pursuit of knowledge, the uncertainty and fear surrounding this evolving revolution may be reduced.

The successful implementation of AI within the profession of arms hinges upon the ability of military professionals, as well as society, to trust these emerging systems, and upon their implementation occurring in a professional and ethical way. As highlighted by Prof Michael Evans, the philosophical implications of AI and robotics are not trivial, and to ignore such progress would be detrimental to society.

Another profession working to understand and forge a cooperative relationship with AI and robotics is law. Mr Morry Bailes, President of the Law Council of Australia, discussed the implications of AI for the legal profession. Like military professionals, lawyers are seeking to forge a symbiotic relationship with AI by improving access to, and delivery of, legal advice to society. While Mr Bailes did not specifically address general AI, he did highlight the requirement to promote a human-to-human relationship throughout the legal process, augmented with AI.

While not specifically addressed at this seminar, the integration of AI into the human workforce, or human-machine teaming, was presented earlier this year by Major General Mick Ryan at the Contemporary Security Challenges Seminar on 1 March 2018.

This concept is important to highlight because it brings numerous benefits to the workforce, including improving military intelligence through the introduction of AI to perform big data analysis, thereby freeing the human workforce to contribute in other ways (Ryan, 2018). While narrow AI algorithms continue to be introduced into the military, the introduction of general AI may make it possible to reduce the number of human casualties in the battlespace.

In summary, the Seminar discussed the many challenges and opportunities AI and robotics present to the Profession of Arms. What was lacking, however, was discussion of how to foster trust. The defensive attitude towards AI and robotics, particularly general AI, needs to be further addressed in order to enable greater innovation, technological advancement and knowledge.

To achieve this, society and military professionals need to build trust through understanding and research. Our forces are already implementing narrow AI, but we are not yet fostering a relationship of trust with technology, which may delay the evolution of human-machine teaming.

By continuing to display a narrow attitude towards AI and robotics, we leave ourselves vulnerable to an adversary who readily embraces advances in technology and implements a force augmented with both narrow and general AI before we do.

If you would like to know more about AI, Robotics and the Future of War, recordings from the event are available here.

Flight Lieutenant Kate Yaxley is an officer in the Royal Australian Air Force. The opinions expressed are hers alone and do not reflect those of the Royal Australian Air Force, the Australian Defence Force, or the Australian Government.

This article was published by Central Blue on November 18, 2018.