AI adoption in Australia under the privacy spotlight
Australia | Publication | October 2024
AI adoption must now be on every boardroom and executive team’s agenda as an emerging area of risk and regulatory focus. This issue has been given further impetus by a number of key developments across Australia’s multi-jurisdictional landscape. At the federal level, the Office of the Australian Information Commissioner (OAIC) has released AI-specific guidance, and the first of two tranches of sweeping and substantive privacy reform, amending the Privacy Act 1988 (Cth) (Privacy Act) and related legislation, has been tabled. At the state level, and perhaps most significantly of all for horizon-watchers, there are the reasoning and findings of a ground-breaking investigation by the Office of the Victorian Information Commissioner (OVIC) into a government employee’s use of generative AI (GenAI) tools to produce a sensitive report. OVIC’s report represents the first findings by an Australian regulator on what constitutes ‘reasonable steps’ at the intersection of AI tools and personal information.
Turning a blind eye is no longer an option
Many organisations – in both the public and private sectors – are not aware of the rising use of ‘bring your own’ AI (BYO AI) by their employees. Those aware of its probable use, or actively embracing it, are likely to have introduced AI policies or updated acceptable use of technology policies in an attempt to ensure appropriate AI use. Even for those that have considered these issues, upcoming changes to privacy laws, and the evolving regulator view of existing laws, require current approaches to be reconsidered. For those not even at the ‘aware and consider’ level, the potential exposure is growing rapidly by the day.
New technologies such as GenAI can be difficult to categorise under existing laws. Further, their risks, both legal and commercial, evolve as use cases are explored and sector-specific nuances are encountered. In the case of AI, the specific terms and conditions of use of a tool can profoundly impact the compliance risks associated with its usage. So too can whether the tool is a ‘public’ version, where the AI model is trained on a broad set of publicly available data, or a ‘private’ AI trained on data specific to an organisation. Risk profiles can also alter depending on the cloud or other data storage arrangements in place for deployment of AI tools (‘open’ or ‘closed’), the security of those arrangements, and the location of the data centres where the data is ingested and the algorithms applied. Taken in combination, the result is invariably ‘untested’ areas of law, so there is very good reason to look closely at what comes from the relevant regulators.
In the context of GenAI, the ready availability of BYO AI on personal devices – or worse, through personal accounts on corporate systems – is clearly a high-priority risk. Unsurprisingly, some training and some policies are not going to satisfy regulators in the face of disclosure or misuse of personal information by employees. Blocking, banning and monitoring might have their place, but if the use of an AI tool actually results in a breach, it is already clear where the regulators will look first. And when the tort for serious invasions of privacy is enlivened under the first tranche of reforms to the privacy laws, it will be the people affected who will be looking for redress.
Crucially, use of AI tools in the workplace is not completely untested.
First regulatory guidance on AI ‘disclosures’ and ‘reasonable steps’
In Victoria, guidance has emerged from an OVIC investigation representing one of the first sets of findings on employee use of GenAI within a government agency. The investigation centred on the use of GenAI tools to draft a report in a particularly sensitive area. The report in question referred to various matters and people, including what is described as “delicate information” concerning those people, and was to be submitted to a court. Despite having been subject to internal review, the report was only detected as potentially involving the use of GenAI tools quite late in the piece.
Triggers for this detection included the use of alarming and inappropriate language, incomplete sentences, and the inclusion of inaccurate information: the dreaded ‘hallucinations’, where AI simply makes things up without regard to truth or accuracy.
Following an internal investigation, the agency made a voluntary disclosure to OVIC, which then had to consider the circumstances and the potential breach of Victoria’s Information Privacy Principles (IPPs) under Victorian privacy law. Specifically, OVIC considered whether there was a breach of the obligations to take reasonable steps to:
- Ensure personal information collected, used or disclosed is accurate, complete and up to date (IPP 3.1).
- Protect the personal information held from misuse and loss, and unauthorised access, modification or disclosure (IPP 4.1).
A disclosure to the court and to the technology vendor
OVIC found that the employee’s use of GenAI tools to generate the report constituted a ‘collection’ of personal information, and that the use of the report within the government agency, and its provision to the court, was a disclosure of personal information and so caught by IPP 3.1. OVIC found that, notwithstanding the government agency’s internal acceptable use of technology policy and staff training intended to help comply with IPP 3.1, these were not sufficient to satisfy the ‘reasonable steps’ requirement to keep personal information accurate and up to date. It concluded that significantly more guidance was needed in this area.
Inputting someone’s personal information is not taking reasonable steps
OVIC also found that the employee’s inputting of personal information into GenAI tools was a failure by the government agency to take reasonable steps to protect personal information from unauthorised disclosure, in breach of IPP 4.1. This was again due to a lack of guidance, specifically direct guidance on when it would be acceptable to use GenAI tools, what information would be appropriate to input, and what the acute privacy risks were. Relevantly, OVIC also noted the absence of any technical means to verify whether employees were using GenAI tools, which made it impossible to detect actual instances of unauthorised disclosure to the owner of the GenAI tool, and found that this lack of technical identification increased the likelihood of unauthorised disclosure.
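OVIC’s report does not prescribe any particular control, but one basic technical means of the kind it found lacking is to scan outbound proxy or DNS logs for connections to known GenAI endpoints. The sketch below is illustrative only: the domain list, log format and column names are assumptions, not any vendor’s actual log schema.

```python
import csv

# Hypothetical list of public GenAI endpoints to flag; a real deployment
# would maintain and regularly update its own list.
GENAI_DOMAINS = {
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def flag_genai_requests(proxy_log_path: str) -> list[dict]:
    """Return proxy log rows whose destination matches a known GenAI domain.

    Assumes a CSV log with 'timestamp', 'user' and 'host' columns; real
    proxy logs differ and would need their own parser.
    """
    flagged = []
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    for hit in flag_genai_requests("proxy_log.csv"):
        print(f"{hit['timestamp']}: {hit['user']} -> {hit['host']}")
```

Even a control this simple speaks to OVIC’s observation that, without any technical means of identification, actual instances of unauthorised disclosure cannot be detected.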
In light of these findings, and the similarity of privacy legislation across state, territory and federal levels, the time has come for boards and executive teams to delve more deeply into the AI use in their organisations and how to manage its adoption against a backdrop of growing privacy scrutiny from regulators, the current lack of settled jurisprudence, the likely rise of enforcement action, and the reputational and financial harm that ignoring these issues can produce. Even risk and compliance programs that were gold standard 12 months ago are likely to be out of date already, given the volume and speed of recent developments. Delaying action will only increase the gap and the pain.
OAIC guidance
The OAIC released two (non-binding) guidance documents on AI on 21 October 2024:
- Guidance on privacy and the use of commercially available AI products
- Guidance on privacy and developing and training generative AI models
Both guidance documents include case studies and additional resources throughout, including “Key Takeaway” documents and various “Checklists”.
Some high-level observations include the following.
Privacy and the use of commercially available AI products – guidance highlights
- Organisations need to do due diligence when adopting GenAI products, and embed a governance approach to privacy (privacy by design). This includes:
- Updating internal policies.
- Updating privacy policies and privacy collection notices – for example, chatbots were called out as something that needs to be clearly identified to external users.
- Undertaking privacy impact assessments (PIAs).
- Regular review and monitoring of generative AI products (i.e. it is not a “one and done” exercise).
- Specifically, compliance with the following APPs needs to be considered in that due diligence process:
- APP 3 around the collection of personal information by AI.
- APP 6 in relation to the use and disclosure of personal information collected by AI.
- APP 10 around the accuracy of information. The OAIC provides practical tips, such as using disclaimers or watermarks for information produced with AI (a minimal sketch follows this list).
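The OAIC guidance names the disclaimer technique but not an implementation, so the following is a minimal sketch only: appending a plain-text provenance disclaimer to AI-assisted output. The disclaimer wording and the function name are assumptions.

```python
# A minimal sketch of the 'disclaimer' tip for AI-produced content;
# the wording and placement are illustrative, not prescribed by the OAIC.
AI_DISCLAIMER = (
    "Note: this content was produced with the assistance of a generative AI "
    "tool and should be verified before being relied upon."
)

def label_ai_output(text: str) -> str:
    """Append a provenance disclaimer to AI-assisted text."""
    return f"{text}\n\n{AI_DISCLAIMER}"

print(label_ai_output("Draft summary of the customer complaint..."))
```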
The OAIC also made a clear recommendation that organisations do not enter personal information (and particularly sensitive information) into public GenAI tools.
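As an illustration of how that recommendation might be operationalised, a simple guardrail could redact obvious personal information before a prompt reaches a public GenAI tool. The patterns below are assumptions for the sketch, and pattern-based redaction alone is far from a complete control; a real deployment would use a dedicated PII-detection service.

```python
import re

# Illustrative patterns for obvious personal information; regexes alone
# will miss names, addresses and much other personal information.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),  # AU-style numbers
}

def redact_personal_information(prompt: str) -> str:
    """Replace matches of the patterns above with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

# Scrub a prompt before it is sent to any public GenAI tool.
raw = "Summarise the complaint from jane.doe@example.com, ph 0412 345 678."
print(redact_personal_information(raw))
```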
The guidance focuses on privacy; however, it also mentions the Voluntary AI Safety Standard produced by the National AI Centre/Department of Industry, Science and Resources.
Privacy and developing and training generative AI models – guidance highlights
- This guidance focuses on “developers”, being those that design, build, train, adapt or combine AI models and applications.
- The guidance reiterates that the fact that information is publicly available does not mean it is not personal information; it therefore should not be used to train or fine-tune GenAI models without first ensuring privacy compliance.
- In developing AI models, a cautious approach is needed, using governance and privacy by design and taking extra care around collection, accuracy and sensitive information. Examples of relevant considerations and guidance include:
- ‘Practical tips’ on minimising personal information when training a GenAI model, such as excluding certain sources like public social media profiles (an illustrative filter follows this list).
- APP 3 and data scraping.
- Third party data sets.
- Notice and transparency obligations under APP 1 and APP 5.
- The risk of data re-identification.
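To illustrate the ‘excluding certain sources’ tip in code: a minimal sketch, assuming a corpus of records tagged with a source field (the field name and source labels are invented for the example), that drops records from sources likely to carry personal information before training.

```python
# Source labels to exclude are illustrative; a real pipeline would pair
# source filtering with proper personal-information detection.
EXCLUDED_SOURCES = {"public_social_media_profile", "scraped_forum_post"}

def filter_training_corpus(records: list[dict]) -> list[dict]:
    """Drop records from sources likely to carry personal information."""
    return [r for r in records if r.get("source") not in EXCLUDED_SOURCES]

corpus = [
    {"text": "Product manual excerpt...", "source": "internal_docs"},
    {"text": "Holiday photo caption naming a person", "source": "public_social_media_profile"},
]
print(filter_training_corpus(corpus))  # keeps only the internal_docs record
```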
Fundamentally, there is not a lot in the materials released by the OAIC that is not a commonsense layer on top of the regulatory framework. However, as the OVIC example shows, getting to a position where you can demonstrably and confidently discharge those obligations is going to be very challenging. In our view, the likely reality is that every organisation, whether knowingly or not (because of its employees’ use), is using GenAI in some form or another. This means that these new dimensions of risk already exist, and they call for active and conscious consideration and management.