On AI Ethics and Defence Transformation
Defence is enraptured with its fascination with AI. Future capability is ‘AI-enabled’. Data is the new oil. Human-machine teams win the Champions League. And with this pixelated, silver Bit bullet, we achieve information advantage, dominance, superiority (pick your noun). Apologies for sounding flippant - it’s easy to make light of the clichés when you spend your lockdown reading them over and over again. Because, of course, AI will have a transformative impact on military operations and the wider defence enterprise. And, indeed, it is right that concepts like Multi-Domain Integration are grounded in the promise of emerging technologies like machine learning, and in the identification of novel applications that can enhance new or extant capabilities and tactics, techniques and procedures.
The AI attraction has led to a swathe of new organisations, leadership roles, roadmaps, strategies, pathfinders, hubs, networks, challenges and competitions. It has sparked a period of intense wooing of ‘nerds’ and tech industry ‘outsiders’, SMEs and VCs - relationships which represent a desired cultural shift in defence thinking, away from the stuffy, traditional industrial modes of technology production and sustainment towards rapid, iterative software development. Say goodbye to personal ads and OK Cupid - this is courtship for the Hinge Age.
But amongst the excitement for transformation, there’s a niggling red flag. The Debbie Downer of Defence AI: ethics. How might one get a grasp of a subject we have only just scratched the surface of? Well, the literature on the subject is overwhelming and sometimes frustrating. Some of you might not be surprised to know that much of the public conversation on the ethics of military AI is dominated either by imprecise buzzwords, pop culture references (think Terminator, not WALL-E) or dystopian discourses of the Orwellian/H G Wellian ilk.
It is vital that we continue to study the threats posed by Lethal Autonomous Weapon Systems and other sinister uses of AI-enabled tech, and there are obvious benefits to sci-fi or futures thinking inside the military. But what of the other, seemingly more innocuous capabilities (take, for example, war-gaming or a back-end chatbot) or those that not-so-neatly fall into the category of ‘grey area’ (facial recognition, cyber defence/offence, behavioural predictive analytics)? There appears to be less public mulling of these topics in a defence-specific context, or of how this will impact private sector AI projects. On the other side, how can Defence shape the debate when it is itself likely to shoehorn military lingo into ethical and socio-technological discourse (how effective is the Boydian ‘on/in the loop’ concept when trying to understand the complex scale of responsibility and accountability in AI defence ethics?).
Meanwhile, briefings and strategies on AI ethics and safety emerging from the public sector, and specifically Defence, appear to be immature or too general. This is partly, I would argue, because of Defence’s tendency to present AI as a panacea in the face of the plethora of challenges and threats confronting military organisations. The result is a discussion that isn’t often grounded in actual use cases of successful, incremental capability enhancement, or in more tightly bound problem statements where AI applications and their associated ethical issues can be explored in a timely and cost-effective manner. Instead, we get bogged down in a dizzyingly vast array of possibilities that may or may not be realised in the future, or we take an approach that is the very opposite of design or agile thinking – starting with the solution (AI) and working backwards.
Additionally, meaningful progress is hampered by Defence and National Security’s dearth of foundational data science knowledge and technical understanding. This could result in the premature acquisition and deployment of unsafe AI-enabled systems with potentially disastrous, unethical consequences (for more on this topic I highly recommend reading the written evidence submitted by the Cambridge Centre for the Study of Existential Risk).
Consequently, Defence is keen to invite industry to fill this informational vacuum and inform decision-making as to the ethical development of AI. Indeed, private sector contributions make up a significant proportion of contemporary AI ethics literature (according to the study led by Schiff, Biddle, Borenstein and Laas, it’s around a quarter of 80 major publications between 2016 and 2020). However, much of this emanates from the big tech players like IBM, Microsoft and Google. Whilst these corporations have helped to shape the debate on the ethical development of AI-enabled technologies – they, for example, worked closely with the US National Security Commission on AI on its AI ethics policy – there appears to be a distinct lack of perspective coming from non-US corporations, SMEs and start-ups, or indeed developers themselves (except those involved in protests à la Google vs. Project Maven). How can we pursue representative policy when discourse is dominated by a concentrated core of multinational organisations?
Moreover, as ethics requirements start to seep into defence contracts or become the focus for competitions or increased investment, should we be concerned about corporate ‘ethics washing’ – where companies exaggerate their ethical practice or understanding – to gain good PR? (Wait a second… am I ethics washing for Rowden right now?).
So, completely avoiding all the tricky ethical questions for military practitioners, this brings me to what the ethical focus of companies working in Defence and National Security should be. How do we develop this technology ethically? And how do we ensure it’s used safely and in the manner in which it was intended? What of the responsibility, or accountability, of the developer versus the end user? How much influence should industry be afforded in the development of new defence-specific ethics guidelines, and how should government incentivise ethical behaviours? What do ‘accountability’ and ‘transparency’ really mean, and how might government-enforced ethical requirements impact future defence contracts? Will industry be subject to different levels of ethical oversight based on the capabilities they are developing? How will government and industry monitor, test and evaluate the ethical strengths or weaknesses of a capability to meet said requirements?
These are just some of the questions we’ve been mulling over during virtual coffee breaks. As the UK seeks to ‘double down’ on its investment in AI (see last week’s AI Roadmap) and integrate AI-enabled capabilities at scale to achieve its vision of Multi-Domain Integration, the formulation and implementation of ethics frameworks and regulations will need to accelerate too. These will impact our own internal practices, and we believe it’s our moral and professional responsibility to continuously educate ourselves on contemporary and not so contemporary (Plato, anyone?) ethical concepts and policies. “But our adversaries are outpacing us, so don’t let regulations slow things down”. This seems to be the underlying concern shared by some in the military and private sector. But regulations don’t necessarily have to rain on our AI-enabled parade. Yes, government will need to ensure that frameworks are suitably flexible so that we can adapt as the technology matures, and, yes, guidance needs to be appropriately restrictive whilst still allowing the private sector to innovate at pace. But AI investment, underpinned by effective AI ethics policies, also presents an opportunity for defence procurement reform – another Defence Debbie Downer – and whole-of-enterprise transformation.
In the first instance, given the near impossibility of forecasting how AI technologies will develop beyond the next 10 years, Defence will need to focus on shorter-term contracts for iterative development. This will be a far more cost-effective approach to technology development and procurement, and will allow for more effective data preparation, testing and assurance to address ethical and safety challenges. Linked to this, the defence procurement process will need to adapt to the high level of modification and experimentation necessitated by AI. Applications will require continuous and rapid testing, tweaking and verification over their lifecycles to maintain their safety and relevance, and as such procurement contracts and regulations will need to evolve from their slow-moving, hardware-centric delivery modes.
Thus testing and evaluation systems for AI products, and good governance, become critical. The Joint Artificial Intelligence Center (JAIC) in the US has already developed its own policy of ‘devsecethops’ - a continuous development loop that collects user, ethical, and security requirements upfront and cycles them back to the developers: “I think T&E, test and evaluation,” explains Jane Pinelis, the JAIC's chief of testing, evaluation, and assessment, “will play an incredibly big role in ensuring that those [ethical] processes are followed. But it is a challenge for us, by the nature of ethics, those requirements are very qualitative and we have to translate them into something very objective, very quantifiable for each product”.
AI ethics policy could also usher in a new culture of transparent, collaborative responsibility and accountability in defence acquisition, from R&D through to product design, development, procurement and deployment. AI ‘Good Governance’ and ethics can be baked into contracts, covering design, data and algorithm pipelines, whereby developers can play a role in educating user groups and programme managers as to the operational limits, safety and security aspects of their applications. Senior procurement stakeholders can also be incentivised to manage and reduce risk (and cost) through T&E and good data practices.
There are opportunities to start small and on less controversial projects to develop the necessary skills and culture as we spiral up. Take, for example, the UK’s new Defence Support Strategy. There is a significant AI component to Strategic Command’s upcoming logistics programme, primarily in the form of predictive maintenance, and the MOD could use it as a petri dish to trial new ethics oversight processes, as well as experimentation and assurance. If successful, such projects could build trust and confidence in the methodology internally, paving the way for more complex ‘grey area’ projects involving, say, cyber or ISR.
This trust and confidence can also be shared and strengthened between government and the private sector, particularly with technology companies outside of the traditional defence enterprise. A clearly communicated, tried-and-tested ethics framework will not only reassure existing government and industry developers about the development and use of certain capabilities, it could also help to recruit and retain the next generation of talented, more diverse engineers and coders. I am not advocating ‘ethics washing’ for the sake of a recruitment drive but, rather, emphasising how transparency and strong ethical codes of conduct are important to the next generation of the workforce.
In fact, Rowden’s desire to strengthen its practices and internal understanding of ethical issues was one of the things that attracted me to join the company (that and the free Coke Zero). Don’t hate me because I’m a millennial snowflake.