Source: Amnesty International Aotearoa New Zealand
The Education and Workforce Select Committee has reported back on its inquiry into the harm young New Zealanders encounter online, saying the current law is inadequate. It makes a range of recommendations, including:
- Strengthen liability for online harm, e.g. for platform design, such as use of algorithms and infinite scroll features
- Establish an independent national regulator for online safety – the report states that effective regulatory change cannot be accomplished without an empowered regulator
- Regulate algorithmic recommendation systems
- Mandate algorithm transparency
“The Committee’s report strongly affirms that online harm is an urgent issue, that legal safeguards targeting platform accountability and transparency are needed, and that an independent regulator is required.
“The rise of the internet has opened up incredible possibilities. However, without proper regulation, we’ve witnessed the growth of digital platforms that can create harmful online environments impacting all of society, not just young people. From death threats, revenge porn, and live-streamed terrorism to complex financial scams, the harm is profound. But it doesn’t have to be this way.
“The Committee’s report is clear: we can better protect all New Zealanders through measures such as transparency and accountability, overseen by an independent regulator.
“Search engines and social media platforms have been designed to promote content that drives engagement, regardless of its harmful effects. Therefore we would also like to see a duty of care introduced, under which companies must actively assess and mitigate risks, with the aim of making online platforms safer by design. This is an approach that countries like Australia and the UK, and the European Union, are already taking,” says Anjum Rahman from the Tāhono Trust.
“We know the Government is considering the issue of online harm, but it shouldn’t focus solely on a social media ban for young people. While this was one of the Committee’s recommendations, the report was clear that more is needed. Banning social media for young people doesn’t address the root causes of harm; it places the burden of safety on young people and parents while allowing platforms to continue operating predatory business models. In addition, we’re very concerned that such a policy would require people to hand over identity data, including biometrics. This in turn raises serious privacy questions about what happens with that data.
“Any plan that puts the burden solely on parents and young people while leaving the toxic architecture of these platforms untouched will have failed so many New Zealanders,” says Lisa Woods from Amnesty International Aotearoa New Zealand.
Notably, the InternetNZ Insights Report explored people’s thoughts about AI – a feature of many online platforms. It reported that 68% of people are concerned AI is being used to produce harmful content, 65% are concerned it’s being used for malicious purposes, and 64% think there is insufficient regulation and law governing the development of AI.
“We need to create proper safeguards – pragmatic and effective law that upholds human rights, including free speech. Importantly, in doing so the Government must keep its obligations under Te Tiriti o Waitangi at the forefront and work with Māori to develop appropriate regulation,” says Woods.