Australia | Publication | October 2024
AI adoption must now be on every boardroom and executive team's agenda as an emerging area of risk and regulatory focus. This issue has been given further impetus by a number of key developments across Australia's multi-jurisdictional landscape.
Many organisations, in both the public and private sectors, are unaware of the rising use of 'bring your own' (BYO) AI in their businesses by their employees. Those aware of its probable use, or actively embracing it, are likely to have introduced AI policies or updated acceptable-use-of-technology policies in an attempt to ensure appropriate AI use. Yet even for organisations that have considered these issues, upcoming changes to privacy laws, and regulators' evolving view of existing laws, require current approaches to be reconsidered. For those not yet even at the 'aware and considering' stage, the potential exposure is growing daily and rapidly.
New technologies such as GenAI can be difficult to categorise under existing laws, and their risks, both legal and commercial, evolve as 'use cases' are explored and sector-specific nuances are encountered. In the case of AI, the specific terms and conditions of a tool can profoundly affect the compliance risks associated with its use, as can whether it is a 'public' version, where the AI algorithm is trained on a wide set of publicly available data, or 'private' AI trained on data specific to an organisation. Risk profiles can also alter depending on the cloud or other data storage arrangements in place for deployment of AI tools ('open' or 'closed'), the security of those arrangements, and the location of the data centres where the data is ingested and the algorithms applied. Taken in combination, the result is invariably 'untested' areas of law, so there is very good reason to look closely at what emerges from the relevant regulators.
In the context of GenAI, the ready availability of BYO AI on personal devices, or worse, through personal accounts on corporate systems, is clearly a high-priority risk. Unsurprisingly, some training and some policies are not going to satisfy the regulators in the face of disclosure or misuse of personal information by employees. Blocking, banning and monitoring might have their place, but if the use of an AI tool actually results in a breach, it is already clear where the regulators will look first. And when the tort for serious invasions of privacy is enlivened under the first tranche of reforms to the privacy laws, it will be the people affected who come looking for satisfaction.
Crucially, use of AI tools in the workplace is not completely untested.
In Victoria, guidance has emerged from an investigation by OVIC, representing one of the first findings into employee use of GenAI within a government agency. The investigation centred on the use of GenAI tools to draft a report in a particularly sensitive area. The report in question referred to various matters and people, including what is described as “delicate information” concerning those people, and was to be submitted to a court. Despite having been subject to internal review, the relevant report was only detected as potentially having involved the use of GenAI tools quite late in the piece.
Triggers for this detection included alarming and inappropriate language, incomplete sentences and inaccurate information: the dreaded 'hallucinations', where AI simply makes things up without regard to truth or accuracy.
Voluntary disclosure to OVIC following an internal investigation required OVIC to consider the circumstances and the potential breach of Victoria's Information Privacy Principles (IPPs) under Victorian privacy law. Specifically, it considered whether there was a breach of the obligations to take reasonable steps to: keep personal information accurate and up to date (IPP 3.1); and protect personal information from unauthorised disclosure (IPP 4.1).
OVIC found that the employee's use of GenAI tools to generate the report constituted a 'collection' of personal information, and that the use of the report within the government agency and its provision to the court was a disclosure of personal information, and so was caught by IPP 3.1. OVIC found that, notwithstanding the government agency's internal policy on acceptable use of technology and its staff training, these measures were not sufficient to satisfy the 'reasonable steps' requirement to keep personal information accurate and up to date. It concluded that significantly more guidance was needed in this area.
OVIC also found that the employee's inputting of personal information into GenAI tools amounted to a failure by the government agency to take reasonable steps to protect personal information from unauthorised disclosure, in breach of IPP 4.1. This was again due to a lack of guidance, and specifically direct guidance on when it would be acceptable to use GenAI tools, what information would be appropriate to input, and what the acute privacy risks were. Relevantly, OVIC also noted the absence of any technical means of verifying whether employees were using GenAI tools, which made it impossible to detect actual instances of unauthorised disclosure to the owner of the GenAI tool. It found that this lack of technical detection increased the likelihood of unauthorised disclosure.
In light of these findings, and the similarity of privacy legislation across state, territory and federal levels, the time has come for boards and executive teams to delve more deeply into AI use in their organisations and how to manage its adoption, against a backdrop of growing privacy scrutiny from regulators, the need for more jurisprudence, the likely rise in enforcement action, and the reputational and financial harm that can follow if these issues are ignored. Even risk and compliance programs that were gold standard 12 months ago are likely already out of date given the quantum and speed of recent developments. Delaying action will only widen the gap and compound the pain.
The OAIC released two (non-binding) guidance documents on AI on 21 October 2024.
Both guidance documents include case studies and additional resources throughout, including "Key Takeaway" documents and various "Checklists".
The OAIC also made a clear recommendation that organisations do not enter personal information (and particularly sensitive information) into public GenAI tools.
The guidance focuses on privacy, but also references the Voluntary AI Safety Standard produced by the National AI Centre and the Department of Industry, Science and Resources.
Fundamentally, there is not a lot in the materials released by the OAIC that is not a commonsense layer on top of the regulatory framework. However, as the OVIC example shows, getting to a position where those obligations can be demonstrably and confidently discharged will be very challenging. In our view, the likely reality is that every organisation, whether consciously or unwittingly (through its employees' use), is already using GenAI in some form or another. This means that these new dimensions of risk already exist, and they call for active and conscious consideration and management.
© Norton Rose Fulbright LLP 2023