Source: Radio New Zealand
South Korean Foreign Minister Cho Tae-yul delivers a speech at the closing session of the Responsible AI in the Military Domain (REAIM) summit in Seoul on September 10, 2024. AFP / JUNG YEON-JE
New Zealand has not joined in the latest international call for responsible use of AI by the military, but has been taking part in the UN talks about autonomous weapons.
AI has been used in unprecedented ways in the war in Gaza, for instance in drawing up hit lists and targeting missiles, according to overseas media reports.
Forbes has called it “the first AI war”.
Australia, Canada and the UK were among this country’s Five Eyes group partners that endorsed the non-binding call issued by the third summit on “responsible artificial intelligence in the military domain”.
The Ministry of Foreign Affairs said no one was sent to the summit in Spain in February, unlike the second summit in 2024 when the NZDF had someone there.
“Although we observe when resourcing allows, New Zealand is not a member of REAIM,” MFAT said.
The US endorsed an earlier call from the 2024 summit of REAIM, a European government initiative.
The summits have been trying to nut out a blueprint for armies using AI but there remains no international law or legally-binding treaty that bans the use of lethal autonomous weapons.
Their calls to action have been described as “modest”.
The latest call said military AI “can and should” contribute to peace and security, for instance, by reducing exposure of military personnel and civilians to danger, and helping decisions to be faster and better.
But its risks had to be corralled within frameworks of international humanitarian and human rights law, it said.
In March, NZ permanent mission staff in Geneva took part in the UN talks on lethal autonomous weapons, MFAT said.
These revolved around work by a group of government experts on the conditions under which autonomous weapons could be developed and used legally.
The March talks referred to a new report by a leading Swedish thinktank that said militaries must change their AI weapons buying practices so that political commitments to responsible use are built in.
The Stockholm International Peace Research Institute said the Pentagon had previously stressed that its flagship Replicator initiative – to build fleets of thousands of drones focused on the Indo-Pacific – was based on policies for ethical use of AI.
But it added, “the tension between acquisition speed and thorough legal, safety and ethical review remains unresolved in public documentation.”
More recently, US Secretary of War Pete Hegseth has hit the accelerator on emerging tech development, while at the same time deriding “stupid rules of engagement” aimed at reducing mistakes and civilian casualties.
The Stockholm study said militaries seeking speed were turning to commercial AI solutions rather than the traditional approach of ordering what they need, custom-made. This was leading to the fielding of “minimum viable capabilities”, often with little pre-deployment testing.
“States may even knowingly accept governance trade-offs under acute security or operational pressures,” it said.
The commercial, minimum viable approach has been gathering pace at the New Zealand Defence Force in the last year.
The study said governments should invest in evaluation mechanisms for military AI, strengthened by clear thinking within militaries about what they want the AI they buy to do, and backed by solid ways of assuring that commercial suppliers’ technology meets political obligations.
– Published by EveningReport.nz and AsiaPacificReport.nz, see: MIL OSI in partnership with Radio New Zealand