AI, privacy and confidentiality
Many organisations have raised concerns about the relationship between AI (particularly public AI) and personal and/or confidential data.
Anyone using public AI, such as ChatGPT, could potentially breach privacy and/or confidentiality obligations if the information provided to the software is not appropriately vetted and managed.
How AI uses content
AI algorithms are typically trained using large datasets, which often include user-generated content and inputs (e.g. reviews, comments, and conversations).
Once this information is submitted to the AI application, it can then be used as training material for updated versions of the software. It can also be used to communicate with users and conduct research.
For example, ChatGPT's Privacy Policy states:
"When you use our Services, we may collect Personal Information that is included in the input, file uploads, or feedback that you provide to our Services ("Content")……..we may use Content you provide us to improve our Services, for example to train the models that power ChatGPT…."
If you're using ChatGPT, be aware that any information you enter may be used for purposes other than simply responding to your specific query or request.
What are the privacy and confidentiality considerations when using AI?
Privacy Act
Any organisation in New Zealand that collects or holds personal information is required to comply with the Privacy Act. Personal information is information about an identifiable individual. The definition is very broad and includes information such as names, addresses, phone numbers, dates of birth, passport and driver's licence numbers, and bank account details, but it is not limited to sensitive or private information. Essentially, it is any information that tells us something about a specific individual, and it can be held in any form, including voice recordings and pictures.
The Privacy Act contains 13 Information Privacy Principles (IPPs) that govern the way personal information is to be handled.
The input of personal information into public AI can raise issues under some of the IPPs. For example:
- IPP 10 – Use: Personal information obtained in connection with one purpose cannot be used for another purpose except in certain circumstances (such as where this is authorised by the individual concerned). Many organisations are currently considering how they can harness the power of AI within their business. In some cases this could involve using existing data for new purposes which were not contemplated at the time the data was collected. Any new use of personal information in conjunction with AI could breach the Act if this is not covered by one of the IPP 10 exceptions.
- IPPs 11 and 12 – Disclosure: Personal information cannot be disclosed to third parties except in certain circumstances. Additional requirements apply if the disclosure is to an offshore recipient. If the AI provider can use personal information entered into the AI for its own purposes (for example, to train the AI), then the input will be treated as a disclosure under the Act, and the organisation that provided the information will need valid grounds for making the disclosure – for instance, the necessary level of authorisation from the individual concerned.
Confidentiality
Organisations also need to take care to ensure that their input of information into public AI software does not breach any confidentiality obligations owed to third parties (for example, confidentiality obligations under commercial contracts). Whether a breach occurs may depend on the scope of the confidentiality obligations and the extent to which the AI provider will use the information. These matters will differ depending on the source of the particular confidential information and the AI provider involved.
How to avoid privacy and confidentiality breaches
In June, the Office of the New Zealand Privacy Commissioner outlined some expectations for agencies using generative AI:
- Senior leadership approval should be obtained before generative AI is used.
- A Privacy Impact Assessment should be conducted before using generative AI to assist in identifying and mitigating privacy and wider risks.
- If generative AI will be used in a manner where an individual's personal information may be affected, the individual should be told, in plain language, how and why it will be used and the potential risks of that use.
- Personal information and confidential information should not be entered into generative AI, even for training purposes, unless the generative AI provider expressly states that entered information is not retained or disclosed. As outlined earlier, common generative AI platforms such as ChatGPT currently do retain and use information.
- Other expectations include engaging with Māori regarding the impacts of AI on their communities, developing procedures around accuracy and access, and ensuring that human review of AI-generated output is undertaken.
As a general rule and at this stage, we would recommend that no personal or confidential information be uploaded to any public AI, at least not without first undertaking a detailed review of the AI and the information to confirm that no privacy or confidentiality obligations would be breached. Any information should be carefully checked and altered, so that it ceases to be personal or confidential, before entry; a simple illustration of this kind of pre-screening is sketched below.
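By way of illustration only, the following minimal Python sketch shows one way the "check and alter" step might be partly automated before text is entered into a public AI. The redact_personal_information function and the pattern list are hypothetical examples created for this article; simple pattern matching is not a complete or legally sufficient de-identification tool.

```python
import re

# Hypothetical, non-exhaustive patterns for a few common categories of
# personal information. Names, addresses and contextual identifiers will
# not be caught, so human review is still required.
# Order matters: more specific patterns (bank accounts) run before more
# general ones (phone numbers) so one does not partially consume the other.
REDACTION_PATTERNS = [
    ("EMAIL", re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")),
    ("NZ_BANK_ACCOUNT", re.compile(r"\b\d{2}-\d{4}-\d{7}-\d{2,3}\b")),
    ("DATE", re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b")),
    ("PHONE", re.compile(r"\+?\d[\d\s()-]{7,}\d")),
]

def redact_personal_information(text: str) -> str:
    """Replace each pattern match with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS:
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

if __name__ == "__main__":
    prompt = (
        "Please summarise this complaint from Jane Doe "
        "(jane.doe@example.com, 021 555 0123, account 12-3456-7890123-00)."
    )
    # The name "Jane Doe" survives redaction: a reminder that the scrubbed
    # text must still be reviewed manually before entry into any public AI.
    print(redact_personal_information(prompt))
```

Because free-text identifiers such as names and addresses slip past simple patterns, automated scrubbing of this kind should supplement, never replace, human review of what is actually being sent.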
What lies ahead for AI providers and privacy?
AI providers have recognised that privacy concerns are a fundamental issue for professional service providers and a significant barrier to organisations fully embracing the use of AI in the workplace.
In late April, OpenAI introduced a new privacy feature for ChatGPT. Users now have the ability to 'opt out' of having their content used to train the AI. Conversations will be retained for 30 days and monitored only for abuse, before being permanently deleted. OpenAI is also developing a new subscription service for businesses, 'ChatGPT Business', where data inputs will not be used to train the models by default. The offering is described as being “for professionals who need more control over their data as well as enterprises seeking to manage their end users”.
Although at this stage it is prudent to draw a hard line against the input of personal and confidential information into public AI, as the technology and security assurances from AI providers continue to develop, this may be something that can be revisited in the future.
In the meantime, organisations will likely want to focus on:
- How they can use public AI safely without sharing any personal or confidential information.
- Opportunities to adopt or implement private AI, where the use of the information is restricted to the organisation itself and the AI provider simply processes the information on the organisation's behalf.
If you have concerns about privacy and confidentiality risks for you or your organisation when using AI for business, get in touch with one of our experts for advice.