
Source: Privacy Commissioner

This page sets out the expectations of the Office of the Privacy Commissioner with respect to the use of generative Artificial Intelligence (AI) by agencies. The evolving nature of generative AI means that the material on this page is subject to change as we continue to review the situation. It was updated on 25 May 2023.

Generative AI tools and apps have recently become prominent in the media. These tools use large amounts of information (including personal information) to generate a variety of content, including human-like conversations, essays, images, videos, and computer code.

Prominent generative AI tools include:

  • OpenAI’s ‘ChatGPT’, an app leveraging its GPT-3 / GPT-4 large language models
  • Microsoft’s Bing search and Copilot products, which leverage GPT-4, and
  • Google’s Bard.

Generative AI tools, capabilities, and their impact are rapidly evolving. Regulators across the world are actively reviewing the situation, and the Privacy Commissioner has called for New Zealand regulators to come together to determine how best to protect the rights of New Zealanders.

Potential privacy risks associated with generative AI tools

Agencies considering whether to use a generative AI tool need to be aware of potential privacy risks that have been associated with these tools. These risks include:

  • The training data used by the generative AI
    Generative AI models are trained on vast amounts of information, some of which is personal information. This presents various privacy risks, including how the personal information was collected, whether there is sufficient transparency, and whether the information is accurate or contains bias.

  • Confidentiality of information your agency will enter into a generative AI tool
    Generative AI tools require a prompt in order to generate output. This prompt could be a few words or a large amount of data, and could include personal information and confidential business information. There is a risk that personal information entered into the generative AI tool is retained or disclosed by the provider and used to continue training the model.
  • Accuracy of information created by the generative AI
    Generative AI tools often produce confident-sounding errors of fact or logic and can perpetuate bias and discrimination. Do not rely on the output of generative AI tools without first taking appropriate steps to fact-check and ensure the accuracy of the output.
  • Access to and correction of personal information
    The Privacy Act provides individuals with a right to access and correct personal information held by an agency. Generative AI tools may not always be compatible with such rights. 

OPC position on generative AI tools

The responsibility for complying with the requirements of the Privacy Act lies with agencies (whether in the public, private, or not-for-profit sectors). The Privacy Act is technology-neutral and takes a principles-based approach, meaning the same privacy rights and protections apply to generative AI tools as apply to other activities that use personal information (such as collecting and using personal information via paper or computer).

The Office of the Privacy Commissioner expects that agencies considering implementing a generative AI tool will:

  1. Have senior leadership approval
    Ensure that senior leadership has given full consideration to the risks and mitigations of adopting a generative AI tool and has explicitly approved its use.
  2. Review whether a generative AI tool is necessary and proportionate
    Given the potential privacy implications, review whether it is necessary and proportionate to use a generative AI tool or whether an alternative approach could be taken.
  3. Conduct a Privacy Impact Assessment
    Only use a generative AI tool after conducting a Privacy Impact Assessment (and/or Algorithmic Impact Assessment) to help identify and mitigate privacy and wider risks. This should include seeking feedback from impacted communities and groups, including Māori. Confirm if the provider of the generative AI tool has published any information about how privacy has been designed into the tool and undertake due diligence on the potential risks of using the generative AI tool.
  4. Be transparent
    If the generative AI tool will be used in a way likely to impact customers and clients and their personal information, those customers and clients must be told how, when, and why the generative AI tool is being used and how potential privacy risks are being addressed. This must be explained in plain language so that people understand the potential impacts for them before any information is collected from them. Particular care must be taken with children.
  5. Develop procedures about accuracy and access by individuals
    If the generative AI tool will involve the collection of personal information of customers or clients, develop procedures for how your agency will take reasonable steps to ensure that the information is accurate before use or disclosure, and for responding to requests from individuals to access and correct their personal information.
  6. Ensure human review prior to acting
    Having a human review the outputs of a generative AI tool prior to your agency taking any action because of that output can help mitigate the risk of acting on the basis of inaccurate or biased information. Review of output data should also assess the risk of re-identification of inputted information.
  7. Ensure that personal or confidential information is not retained or disclosed by the generative AI tool
    Do not input personal or confidential information into a generative AI tool unless the provider has explicitly confirmed that inputted information is not retained or disclosed. An alternative is to strip input data of any information that would enable re-identification (a minimal sketch follows this list). We would strongly caution against using sensitive or confidential data for training purposes.
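
As an illustration of the redaction approach mentioned in point 7, the following is a minimal Python sketch of stripping obvious identifiers from a prompt before it leaves an agency. The patterns, the redact_pii() helper, and the send_to_generative_ai() placeholder are illustrative assumptions only and are not part of any particular tool; a real deployment would need far broader coverage (for example names, addresses, and client numbers) and confirmation from the provider about retention.

import re

# Very rough patterns for common identifiers; simple patterns like these will not
# catch names, addresses, or other free-text identifiers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    # Replace every match of each pattern with a labelled placeholder.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def send_to_generative_ai(prompt: str) -> str:
    # Placeholder for a call to whichever tool an agency has approved;
    # no external service is contacted in this sketch.
    return f"(model response to: {prompt!r})"

if __name__ == "__main__":
    raw = "Customer (jane.doe@example.com, +64 21 123 4567) asked about her invoice."
    safe_prompt = redact_pii(raw)
    print(safe_prompt)                       # identifiers replaced before the prompt is sent
    print(send_to_generative_ai(safe_prompt))

Pattern matching of this kind will miss many identifiers, so it supplements, rather than replaces, explicit confirmation from the provider that inputs are not retained or used for training.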

We recognise that generative AI is an evolving technology and will continue to monitor developments in this area. Further updates to this guidance will be provided soon.

Making a privacy complaint about generative AI tools

Generative AI is covered by the Privacy Act 2020, which means that New Zealanders can complain to us if they think their privacy has been breached.

Organisations reporting a privacy breach can use the NotifyUs tool.

Individuals wanting to make a complaint can use the complaint form.

