Source: Radio New Zealand
By Chris McGavin*, Dr Andrew Lensen* and Dr Cassandra Mudgway*
Opinion: Nearly a year has passed since the Government released their so-called AI Strategy.
Nine months since we, along with other New Zealand AI experts, penned an open letter to the Government calling for AI regulation and the establishment of some sort of responsible AI entity.
In that time, both a lot and nothing has changed.
The world is still being bludgeoned by a maelstrom of polycrises. Youth unemployment, serious concerns about large-scale job displacement, a global economy on a knife's edge, and schools being mistakenly bombed are all top of mind.
These issues are increasingly linked to artificial intelligence (AI) and, whether we like it or not, it is here to stay.
For us to prosper in this ‘AI age’ we will have to, at some point, disregard the hype-cycle and engage with AI’s many unpleasantries.
In the last year alone, we've seen instances of teenagers being encouraged to commit suicide, umpteen examples of chatbot-related delusion and psychosis, chatbots assisting researchers to plan mass killings, and an almost incomprehensible increase in the amount of Child Sexual Abuse Material (CSAM) and non-consensual sexualised images.
In fact, there is so much AI harm that several resources are devoted to attempting to track it.
These are human rights issues, engaging rights to privacy, freedom from discrimination and sexual exploitation, equality, and dignity. Without confronting them, we risk sleepwalking into a crisis of our own making.
Despite the thundering of the outside world, New Zealand’s response to these issues is largely non-existent.
The Government’s leadership on AI is lacklustre, and ignores anything that doesn’t include the words ‘productivity’ or ‘efficiency’.
The latest missed opportunity was the decision to not send an observer to this year’s Responsible AI in the Military Domain Summit.
We’ve had a presence there previously, and doing so again would have helped to maintain our reputation as a strong moral leader internationally.
It is not just the Coalition who seem flummoxed.
The majority of political parties do not appear to have any appetite for leadership when it comes to AI. In fact, of all the political parties we contacted, only the Green Party signed the letter and signalled their willingness to take part in cross-party work on responsible AI and AI regulation.
Yet many of the parties are using AI, adorning their campaigns with a variety of AI-generated slop to varying degrees of controversy.
Or using it, in their individual capacities, for research, which seems an odd task for a tool that hallucinates a significant amount of the time and reduces your inclination to critically evaluate its outputs.
The lack of engagement on AI harm is surprising, especially given it is an election year. The public is very clearly anxious about AI, and there is good data to back this up.
For instance, only 44 percent of New Zealanders believe the benefits of AI outweigh the risks. Only 34 percent are willing to trust AI. 52 percent are very concerned about AI's impact on society, and a whopping 81 percent of New Zealanders believe AI regulation is required.
As a nation, we are already failing to address the near-term impacts of AI.
Worse still, we have yet to even consider how we might tackle its long term impacts, such as worker displacement (both entry level and later-career) and other AI safety risks.
It goes without saying that in the absence of any political will or impetus none of this will change.
The unfortunate truth is that large technology companies do not care about New Zealand. They will not, of their own volition, do anything to ensure that New Zealanders remain safe from AI harms.
It is not their concern; their sole goal is wealth extraction. They will bend over backwards to distance themselves from any potential wrongdoing, as they have always done.
In place of accountability, they will promise us the world: for example, a $102 billion boon for our economy if we use their tools. This promise, of course, ignores the fact that their AI tools have failed to live up to the hype.
The vast majority of organisations are not seeing any return on investment from AI. Our public sector reports much the same: they are not getting a return from AI, and most of their proofs of concept are not working.
We cannot expect technology companies to provide safer AI that upholds human rights.
We have seen other countries like Australia succeed in pushing back. We can and should expect our leaders to do the same.
This election year, we sincerely hope they do. It is vital that you, the voters, consider their policies on AI when casting your ballot. They will listen if you demand it.
*Chris McGavin is director at LensenMcGavin AI
*Dr Andrew Lensen, Victoria University of Wellington, LensenMcGavin AI
*Dr Cassandra Mudgway, University of Canterbury
– Published by EveningReport.nz and AsiaPacificReport.nz, see: MIL OSI in partnership with Radio New Zealand
