Source: Privacy Commissioner
Q: What is a non-OneDrive example of where content stores are risky?
A: Shared file servers, Dropbox, and email inboxes are all non-OneDrive examples. From a governance standpoint, personal OneDrive should be treated as temporary storage for drafts, not for long-term collaboration.
Q: Wouldn’t AI assess value (also) based on date and on words like ‘draft’? Can it be told to, e.g., disregard a document with ‘confidential’ in the title or filename?
A: Yes, AI can be trained to factor in metadata like document age or certain keywords. But this approach is limited and unreliable on its own. A much safer and more robust method is to apply sensitivity labels and metadata rules that formally control how content is handled. For example, Microsoft 365 tools allow you to restrict AI access based on classification, file type, or protection labels – making it much easier to enforce privacy at scale.
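To illustrate the difference, here is a minimal sketch in plain Python (the file names, labels, and rules are entirely hypothetical, not a Microsoft 365 API) comparing a fragile filename-keyword rule with a label-based rule:

```python
# Toy comparison: keyword rules depend on naming habits; label rules depend
# on an applied classification (in real life, e.g. via Microsoft Purview).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Doc:
    filename: str
    label: Optional[str]  # hypothetical sensitivity label

BLOCKED_WORDS = ("confidential", "draft")
BLOCKED_LABELS = ("Confidential", "Highly Confidential")

def keyword_rule_allows(doc: Doc) -> bool:
    """Brittle: only works if authors name files consistently."""
    return not any(w in doc.filename.lower() for w in BLOCKED_WORDS)

def label_rule_allows(doc: Doc) -> bool:
    """More robust: relies on classification, not naming habits."""
    return doc.label not in BLOCKED_LABELS

docs = [
    Doc("Board pack FINAL.docx", "Confidential"),       # keyword rule misses this
    Doc("confidential-salaries.xlsx", "Confidential"),  # both rules catch this
    Doc("Team newsletter.docx", "General"),
]
for d in docs:
    print(f"{d.filename}: keyword={keyword_rule_allows(d)}, label={label_rule_allows(d)}")
```

Note how the first document is sensitive but carries no telltale keyword, which is exactly why keyword filtering alone is unreliable.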
Q: For your recruitment example, what about the situation where we ‘keep a CV on file for future opportunities’? Is that not a realistic thing to do?
A: It’s a common practice, but it needs to be done with care. You should define a retention period (e.g. 12 months), communicate this to applicants, and allow them to request deletion after the recruitment process. Also consider legal hold requirements, in case the process is challenged. Ideally, this is built into your recruitment case file template with the default settings pre-applied but flexible for roles like a Chief Executive.
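As a sketch of how such a default could work, assuming a hypothetical 12-month retention helper (the function names and period are illustrative, not a prescribed policy):

```python
# Illustrative retention rule for a kept-on-file CV. A legal hold
# (e.g. a challenged recruitment process) suspends disposal.
from datetime import date, timedelta
from typing import Optional

RETENTION = timedelta(days=365)  # the 12-month default suggested above

def cv_disposal_due(received: date, legal_hold: bool = False) -> Optional[date]:
    """Return the date the CV falls due for deletion or review."""
    if legal_hold:
        return None  # retain until the hold is lifted
    return received + RETENTION

print(cv_disposal_due(date(2024, 3, 1)))        # 2025-03-01
print(cv_disposal_due(date(2024, 3, 1), True))  # None (on hold)
```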
Q: How does one get buy-in from leadership to prioritise these strategies?
A: Focus on risk. Identify the highest-risk content (e.g. HR, contracts, or customer data), quantify the potential fallout of a breach, and show how practical steps can reduce exposure. You could use this session’s video or invite an external review to present findings. Often, a short, high-level assessment is enough to spark action, especially when linked to regulatory or reputational risk.
Q: Is Teams not safe? Is SharePoint safer for collaborating internally with staff?
A: They work together. Teams stores files in SharePoint and OneDrive behind the scenes. Both can be made safe with the right setup: applying retention rules, sensitivity labels, metadata, and access controls. What matters is structure. For example, a recruitment team site can be tightly scoped with the right protections, so that only authorised people can access specific content and only for as long as it’s needed.
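A toy model of that scoping, with invented names throughout: access requires both membership of the recruitment group and an unexpired retention window.

```python
# Hypothetical workspace record: who may access it, and until when.
from datetime import date
from typing import Optional

workspace = {
    "name": "Recruitment - Analyst 2025",
    "members": {"hr.lead@example.org", "hiring.manager@example.org"},
    "expires": date(2026, 6, 30),  # retention window on the case file
}

def can_access(user: str, today: Optional[date] = None) -> bool:
    """Both conditions must hold: right person, and still within retention."""
    today = today or date.today()
    return user in workspace["members"] and today <= workspace["expires"]

print(can_access("hr.lead@example.org", date(2025, 7, 1)))       # True
print(can_access("random.staff@example.org", date(2025, 7, 1)))  # False: not in group
print(can_access("hr.lead@example.org", date(2027, 1, 1)))       # False: window expired
```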
Q: Love the approach of starting with high-risk areas for labelling etc. HR, Legal – where else should we start?
A: Start with areas that handle high-stakes personal or sensitive data. This often includes customer service (names, addresses, complaints), regulatory consultations, and internal incident management. The key is to understand what information is created and used as part of your core business processes and to apply structured governance there first.
Q: So AI can really access anything on OneDrive or Teams? Is this just within the organisation, or external as well? Otherwise, why would anyone even use these platforms if they are so insecure?
A: AI like Microsoft Copilot can only access what the individual user has permission to see – it doesn’t open up content to the outside world. But not all AI tools are created equal. If you’re using a third-party tool (like ChatGPT, Gemini, or Claude) and it’s trained on your inputs, there’s a much higher risk. Always confirm the scope, access, and data-use policies of any AI platform you’re considering.
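A minimal sketch of that permission model, sometimes called permission trimming: the assistant only retrieves what the asking user could already open. The corpus and group names here are invented; tools like Copilot enforce the equivalent inside Microsoft 365.

```python
# Hypothetical document store with per-document access groups.
corpus = [
    {"title": "Leave policy", "allowed": {"all-staff"}},
    {"title": "Exec salary review", "allowed": {"exec-team"}},
]

def retrieve_for(user_groups: set) -> list:
    """Return only titles visible to at least one of the user's groups."""
    return [d["title"] for d in corpus if d["allowed"] & user_groups]

print(retrieve_for({"all-staff"}))               # ['Leave policy']
print(retrieve_for({"all-staff", "exec-team"}))  # both documents
```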
Q: Do you think the large number of apps and programs teams use (sometimes multiple, just to communicate across the organisation) is exposing organisations to greater risk?
A: Absolutely. Every new app increases your attack surface. But this isn’t just a Microsoft problem; the pre-Teams world was full of risky, unstructured tools too. The strength of Microsoft 365 lies in its potential to consolidate and govern information. The challenge is to use it well: with structured Teams templates, sensible defaults, and good training. Done right, it can significantly reduce risk.
Q: Thanks Sarah. Do you do any other lectures or information sessions? It’s great to get this wide view and ideas about where to start and how to progress.
A: Yes! We have recorded sessions available on our website, and we’re running upcoming workshops (June–August) on managing “high-stakes content” – covering privacy, confidentiality, and governance in practice. Let us know if you’d like an invitation.
Q: Thanks for the presentation, Sarah. What is an IPC Workspace?
A: It depends. Privacy Officers bring the compliance lens. IT provides the tools. HR, Finance, or Operations may own the business processes. Often, the best results come from collaboration across roles – sometimes led by a CISO, or through a digital transformation project. We’re often asked to create a scoping report first – identifying key risks and recommending a practical, cross-functional way forward.
Q: What do you think about using AI to help you manage your content, e.g. highlighting risk, old information, differing information, etc.?
A: There’s real promise here, especially in auto-classifying content or flagging risk patterns. But you need to ensure the AI only sees your data and doesn’t feed it back into public training sets. We’re working with AI to assist classification and retention. That said, good design still matters: when workspaces are built with clear rules and defaults, risk is reduced without relying solely on AI.
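As a rough illustration of rule-assisted triage ahead of any AI pass (the threshold and keyword list are assumptions for the sketch, not recommendations), items that look stale or high-risk can be flagged for human review:

```python
# Flag documents for human review based on age and risky terms in the title.
from datetime import date, timedelta

STALE_AFTER = timedelta(days=3 * 365)  # assume ~3 years counts as "old info"
RISK_TERMS = ("payroll", "complaint", "medical")

def triage(title: str, last_modified: date, today: date) -> list:
    flags = []
    if today - last_modified > STALE_AFTER:
        flags.append("stale: review or dispose")
    if any(t in title.lower() for t in RISK_TERMS):
        flags.append("high-risk term: check labelling and access")
    return flags

print(triage("Payroll issues 2019.xlsx", date(2019, 5, 1), date(2025, 6, 1)))
# ['stale: review or dispose', 'high-risk term: check labelling and access']
```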
Q: I also wonder why we don’t explicitly reference commercial sensitivity in privacy conversations. Do these have different considerations?
A: It’s a great point. While commercial sensitivity isn’t covered under the Privacy Act, the governance techniques are the same: structured storage, restricted access, retention rules, and labelling. These protect business secrets just as effectively as personal information.
Q: Would one of the risks of using AI be misinformation and manipulation?
A: Definitely. Especially when AI pulls from poor-quality or untrusted sources – or if it mixes draft and final content. That’s why it’s critical to structure what AI can access and ensure human review remains part of the workflow. For now, AI should be helpful, not authoritative.
Q: Thanks Sarah, I was at the 7th Data conference; IM only got mentioned once when it came to AI… just the once. It would be good to get this message in front of that crowd if you can.
A: Agreed!