Speech to AI Forum – Autonomous Weapons Systems

Source: New Zealand Government

AI Forum New Zealand, Auckland

Good evening, and thank you so much for joining me this evening. I’d like to start by thanking the AI Forum Executive for getting this event off the ground and for all their work and support to date. The prospect of autonomous weapons systems is a very real one, and I see the number of people here tonight as an indication of the breadth of interest we have in New Zealand on this issue.

Two months ago, when the Government first kicked off work to develop policy on autonomous weapons systems, we knew we were going to need to draw on a broad range of expertise in New Zealand – and to do so very quickly. The AI Forum has been an exemplary partner in this regard and I look forward to continuing our engagement as we move ahead.

We have also recently released our wider disarmament strategy, and I’d like to take a minute to explain how this work relates to it. The strategy has three strands. The first two relate to traditional areas of focus for New Zealand, which remain just as relevant now – nuclear disarmament and non-proliferation, and strengthening international humanitarian law. The third is shaping the future, which recognises the need to address emerging areas of concern – in particular, outer space and autonomous weapons.

New Zealand has a strong record of taking a lead on disarmament matters, and they represent a key pillar not only of our foreign policy but also of our national identity, given the immense pride we take in our status as a nuclear-free country. However, as a small state with ultimately limited resources to dedicate to this cause, we need to be strategic about where we focus our efforts. The fact that autonomous weapons feature prominently in our strategy should give you all an idea of how important I believe it is to act now – it is not an exaggeration to say that global peace and security are at stake if we are unable or unwilling to do so.

I have spoken on issues relating to autonomous weapons systems on a number of occasions now, and think it is useful to be clear from the outset on exactly what our policy process is focused on. I hope that, in answering three key questions, I can provide you with a good understanding of what we are – and are not – concerned about and of what those involved in the Artificial Intelligence industry might want or need to feed into the policy process.

These questions are:

  • What is an autonomous weapon system?
  • Why are we concerned about them?
  • What can New Zealand do – nationally and multilaterally – to address those concerns?

So, what is an autonomous weapon system?

It is important to recognise that there is no universally agreed definition of autonomous weapons systems. In fact, there are a number of competing definitions in circulation, with some focusing on “fully autonomous weapons systems” and others speaking of “lethal autonomous weapons systems”. The absence of a settled definition at this stage is not a significant problem – rather, it reflects the fact that we are in many ways discussing weapons systems of the future, and are still having a legitimate debate about exactly which systems are of concern.

But to help keep us focused – and to avoid straying into other important but separate issues, such as drones or unmanned aerial vehicles – a working definition is useful. For this purpose, New Zealand tends to draw on the most widely used definition, which focuses on weapons systems that feature autonomy in their critical functions. In other words, we are concerned with weapons that select and apply force to targets without human intervention. After being activated by a person, such weapons fire themselves when triggered by their environment rather than by the user, meaning the human user does not choose the specific target. These weapons may also have other autonomous features – for navigation, for example – but it is the nature of their target identification, selection and execution that is of primary concern.

There are already many weapons systems in operation today that have significant autonomous features. Our own navy frigates use the Phalanx close-in weapon system, which has been in use internationally since the 1980s and operates as a last line of defence against anti-ship missiles and other high-speed threats. But while the Phalanx is capable of autonomous search, detection, evaluation, tracking and engagement, it does not define its own targeting criteria and can still be deactivated by a human operator.

Further along the spectrum is South Korea’s SGR-A1, an autonomous sentry that guards the demilitarised zone separating North and South Korea and can autonomously identify and destroy targets. When an intruder in the DMZ is spotted, the SGR-A1 can issue verbal warnings and recognise certain surrender motions – such as the target dropping a weapon and raising their hands – but it can also engage the target with a machine gun. Again, though, the SGR-A1 can function with a “human on the loop”, meaning that even if the system autonomously selects and engages a target, a nearby human operator can intervene to shut it down if necessary.

You may also have seen media coverage in recent weeks of the STM Kargu-2 drone, which a recent report by the UN Panel of Experts on Libya describes as having “hunted down and remotely engaged” retreating soldiers in Libya. While the drone’s manufacturer has since stated that it cannot autonomously select and attack targets, the saga is a stark illustration of what is at stake in the debate around autonomous weapons technology, and of how close we are to such systems becoming not only operational but also widely proliferated. The Kargu-2, for instance, can be carried by a lone soldier and set up in one to two minutes – a terrifying proposition when you consider that such systems could still, in theory, be automated by a state or group looking for the upper hand.

Why is this distinction important? The answer lies in the second question I posed earlier – namely, why are we concerned about autonomous weapons systems?

Why are we concerned about autonomous weapons systems?

The prospect of weapons that can identify, select and attack targets without human control raises fundamental legal, ethical and strategic concerns.

There are serious doubts about whether such weapons systems can comply with the requirements of international humanitarian law (IHL). These systems rely on software algorithms that are “taught” to make decisions – for example, to classify objects – on the basis of large training datasets. This requires a sufficiently complex and robust dataset to ensure that the AI or machine-learning system does not “learn” the wrong lesson. It is one level of complexity to build a dataset that enables the accurate, predictable and reliable identification of a car or a truck, but another altogether to be confident that a machine could discriminate between a combatant and a civilian, or determine whether the action it is taking is proportionate, as IHL requires. Similar questions arise about meeting the requirements of human rights law and other bodies of law in the event of the domestic use of such a system by an oppressive government – for example, against protestors.
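To make that point concrete for this audience, here is a minimal, purely illustrative sketch in Python – a toy classifier with invented data, not a model of any real system – showing how a model trained only on the categories it has seen will still confidently assign one of those labels to something entirely outside its experience:

    # A toy nearest-centroid classifier. All data is invented for illustration;
    # no real sensor or targeting system works this simply.
    import numpy as np

    rng = np.random.default_rng(0)

    # Training data covers two classes only, described by [size_m, speed_kmh].
    cars = rng.normal(loc=[2.0, 60.0], scale=[0.3, 5.0], size=(100, 2))
    trucks = rng.normal(loc=[8.0, 40.0], scale=[0.8, 5.0], size=(100, 2))
    centroids = {"car": cars.mean(axis=0), "truck": trucks.mean(axis=0)}

    def classify(x):
        # Label = whichever class mean is closest in feature space.
        return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

    # A small, slow-moving object (say, a person) was never in the training
    # set, yet the model still returns one of the only two labels it knows --
    # it has no concept of "unknown".
    print(classify(np.array([0.5, 5.0])))  # prints "truck": confidently wrong

The same failure mode, scaled up to battlefield perception, is why the adequacy and coverage of training data bears so directly on compliance with IHL.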

It is already clear from public reactions to the prospect of autonomous weapons systems that there is strong opposition to the selection and application of force against human targets without human intervention. This is relevant not just from a political perspective, but also legally. The “Martens Clause” is a provision of both the Hague Conventions and the protocols to the Geneva Conventions, and provides that, on matters where the law is silent, governments should look to the “dictates of public conscience” for guidance. Where acts or policies would “shock the public conscience”, they should be seen as contrary to the laws and customs of war. It will come as no surprise that the prospect of killer robots is seen by many as triggering this clause.

Legal questions aside, autonomous weapons systems raise profound questions of ethics, accountability and justice. At its most autonomous, this technology dehumanises people and introduces biases that could further bake in systemic discrimination and the persecution of minorities. Of course, the AI Forum of New Zealand is already on record as sharing these ethical concerns through your endorsement of the Lethal Autonomous Weapons Pledge.

Through that pledge, the Forum has agreed that “the decision to take a human life should never be delegated to a machine”. The pledge recognised “there is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable”.  It recalled the agreement of thousands of AI researchers that “by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems”.

Indeed, who would be responsible if an autonomous system decided to bomb a hospital or a school? What happens when these systems malfunction? How could victims and their families ever achieve justice, when the perpetrator is a black-box algorithm?

But these are not our only concerns. Beyond these serious legal and ethical concerns are considerations relating to national and strategic security. It is already clear that militaries around the world are pursuing the benefits of autonomous technology in their defence procurement and planning activities, and there are many signs that a new arms race is emerging. As the pledge recognised, “lethal autonomous weapons systems have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.”

In the face of these concerns, the need for action is indisputable. New Zealand has been engaging in multilateral efforts on autonomous weapons systems for a number of years, but I am clear that we need to accelerate our work. We must respond to the call for urgency from the UN Secretary-General, the International Committee of the Red Cross, international civil society, as well as concerned roboticists, ethicists and military personnel.

Ongoing multilateral discussions on autonomous weapons systems have been and continue to be useful – they have helped identify the primary issues as well as the key stakeholders. But the absence of a decision point has meant discussions have in many ways become repetitive or circular, or have simply highlighted the different perspectives and interests in play. What is needed now, then, is for the international community to take the decisions needed to address the concerns we have about autonomous weapons systems. As has been the case for effective controls on every other type of weapon system of concern – landmines, cluster munitions, incendiary weapons, blinding lasers, nuclear, chemical and biological weapons – reaching these decisions will require negotiation.

So, turning to my final question – what can New Zealand do, nationally and internationally, to address these concerns?

To engage effectively on this issue at the global level, New Zealand must be clear about where it stands nationally. We have learned enough from our multilateral engagement to date to understand the key issues, interests and concerns in play. That is not to say we know everything, or that we don’t still have much to learn from international processes. Rather, it is simply to say there comes a time – and I think that time is upon us – when we need to decide, as a country, where we want to draw the line on autonomous weapons systems.

Making a good decision on this – one that is principled, comprehensive and implementable, and which reflects our national interests alongside those of the international community – requires a robust policy process. And so our focus for now is on engaging with the broadest possible range of stakeholders and interested partners in New Zealand, supplemented by our ongoing engagements offshore, to make sure we are taking into consideration all relevant issues and factors.

The Ministry of Foreign Affairs and Trade is leading this work, and I know Katy Donnelly from the Ministry’s disarmament team will give you a snapshot of that process in a few moments. But a key point, of course, is that in developing this policy, and working internationally to ensure its effectiveness, we need the input of all those who might be affected by it. And that, very clearly, includes the AI industry in New Zealand.

I know the Forum Executive has provided a number of questions to help you consider your contribution to the policy development process, including:

  • Could the technology you are developing be affected by regulation?
  • What degree of human control do you consider is required to be ethically acceptable?
  • How do we ensure technology developed in New Zealand can’t be used elsewhere for harm?
  • How do we protect innovation?

These, and the others put to you by the Forum Executive, are critical questions that go to the heart of our domestic policy process and our international engagement.

Any effective multilateral regulation of autonomous weapons systems will require the countries that sign up to it to be able to enforce its provisions, and I have been active in taking this up with my ministerial counterparts overseas. This will be particularly challenging given the dual-use nature of the autonomous technology involved. We will need to take great care to ensure that any regulation on the use of such technology is understandable, and can be both implemented and enforced.

We need to reflect on the fact autonomous weapons systems may have civil applications and not just military ones. They could be used, for example, for confronting terrorists or other non-state actors, or for extraordinary policing situations and border control. Their regulation will therefore need to take into account not just compliance with the law of armed conflict but other important bodies of law including human rights law.

We know, too, that regulation of industry is prone to raising fears that it will have a chilling effect on opportunity, on innovation and on business. Avoiding this is a key objective of our consultation process, and I encourage you all to engage actively.

It is already clear from consultation to date that discussions on autonomous weapons systems inevitably raise broader issues relating to artificial intelligence and other aspects of autonomy in warfare. It is crucial to understand how these issues intersect with autonomous weapons systems. At the same time, however, it is also important to recognise that a policy on AWS is not – and cannot be – an answer to the many questions and challenges associated with those broader issues. For example, I recognise there may be many other examples of the need for rules, codes of conduct, technical standards or guidelines on the use of big data and of facial recognition programmes. These are issues that go well beyond my portfolio, and beyond the purview of our policy development on autonomous weapons systems.

All of these are important considerations that will need to be taken into account as we develop our national policy on autonomous weapons systems. But I don’t want to give the impression we think the only interests the New Zealand AI community has in the regulation of autonomous weapons are defensive. From what I can see, they go far beyond the need to avoid a chilling effect on business, or the interest in making sure regulation is clear and understandable.

In fact, the AI Forum has been a number of steps ahead of the New Zealand Government on this issue for some years now. Through your endorsement of the pledge, you have already called on governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. And, in the absence of such rules, you have pledged to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons”. This leadership is welcome – not least because you represent a group of experts, entrepreneurs and others who truly understand the issues in play and have a better sense than anyone of the future we face if there are no effective controls on such weapons.

It is also welcome because you can make such a positive contribution to our efforts to achieve these controls. From ensuring that regulation is accurate and future proof, to providing ideas on how to tackle some of the biggest challenges ahead – including compliance and verification – there are important roles that the AI community can fill in the articulation of New Zealand’s policy on autonomous weapons systems. Your expertise and experience – and of course your ability to translate complex technical issues into policy – will be invaluable as we move forward.

As the introduction to the Forum’s landmark research on “Artificial Intelligence: Shaping a Future New Zealand” stated in 2018, there is a role for the Forum’s research “to demystify this significant technological development and deliver a strong call to action to ensure that, as a nation, we are well positioned to address the impacts and opportunities that the adoption of Artificial Intelligence offers”.  This is true for AI in general, and in particular for the inclusion of AI in weapons systems.

I look forward to hearing from you this evening, and to your ongoing engagement in the policy process before us.
