Generative AI: A global guide to key IP considerations
Artificial intelligence (AI) raises many intellectual property (IP) issues.
Australia | Publication | November 2024
There has been a firestorm of activity in the move to address the growing use of data and our dependence on it in the digital economy. The ability to use and move data is fundamental to the way we use technology and cloud services, to our exposure to privacy invasions and to our vulnerability to cyber attack. Large quantities of data for training algorithms are essential to generative AI1 technology in order to unlock new innovations and markets, and potentially improved and healthier lives. Generative AI has brought with it the need to re-think existing laws and regulations.
While many jurisdictions have outpaced Australia in the race to govern the rise of generative AI in business and daily life, views about how tightly AI should be regulated remain mixed. Most, if not all, generative AI products are still in their infancy, with risks and benefits still emerging. Over-regulation may stifle their development and deter investment in the Australian market. Conversely, generative AI products may have already pervaded everyday life, warranting a more proactive and prescriptive approach to adequately mitigate and prevent untold harms.
In this paper we will consider the following:
Existing laws can and do apply to generative AI. AI use, development and deployment – and specifically the data used in that process – is governed by the Privacy Act 1988 (Cth) (Privacy Act) where personal information is involved. The Privacy Act is not an AI-specific regulation but governs how personal and sensitive information is collected, stored, used and protected by organisations. Any organisation which handles personal information and satisfies the criteria set out in the Act is an Australian Privacy Principle entity and must comply with the Privacy Act (APP entities)2.
When AI developers and deployers that are APP entities use personal information, they must comply with their obligations under the Privacy Act and the 13 Australian Privacy Principles contained therein (APPs). The Privacy Act does not stipulate how an organisation is to comply with the APPs; it instead provides a principles-based regulatory regime. Although the Privacy Act has not yet been amended to include any AI-specific clauses, the Office of the Australian Information Commissioner (OAIC) has recently produced AI guidance documents in relation to the existing Privacy Act, and, at the state level, New South Wales has introduced a range of AI guidelines which aim to provide a framework for government agencies in the building, testing and introduction of AI tools. We will look at the OAIC guidance later in this paper.
Comparatively, the European Union (EU) introduced the world’s first AI-specific legislation this year, which operates on a tiered system defining four levels of risk for AI systems, ranging from minimal risk to unacceptable risk. This approach is intended to avoid blanket regulation of all AI by recognising the varying levels of harm associated with different products. Australia’s proposed guardrails for AI in high-risk settings appear to follow the EU’s lead in adopting a risk-based approach to regulation, in an effort to balance protection with the development of safe and beneficial AI.
Not all jurisdictions have taken a regulatory approach to managing the risks of generative AI. Singapore has no AI-specific rules or legislation; instead, it has opted to balance AI innovation with protecting the community from possible harms through a series of frameworks and toolkits that guide organisations to align their AI practices with model frameworks.
In terms of AI-specific initiatives, Australia has a voluntary ethical framework, the Australian AI Ethics Principles, which organisations may implement as best practice in the development and implementation of AI. However, compliance is not mandatory. There have been a number of other significant recent developments.
Reflecting a strong appetite for AI-specific regulation to better manage unique and emerging AI-associated risks, on 5 September 2024 the Australian federal government released two key papers: the Voluntary AI Safety Standard; and a proposals paper for introducing mandatory guardrails for AI in high-risk settings.
The standard provides practical guidance to all Australian organisations on how to use, develop and innovate safely and responsibly with AI. It is intended to set a universal benchmark and comprises 10 voluntary guardrails that apply to all organisations throughout the AI supply chain. The standard adopts a ‘human-centred approach’ to AI development and deployment and includes a definition of ‘bias’, which is identified as a specific risk to be managed.3 While the guardrails are currently voluntary, aligning processes with best practice is one way to minimise other legal risks that may arise from the use of AI.
The Australian government is considering introducing mandatory guardrails, to complement the existing voluntary guardrails, to be followed in high-risk settings and to apply to both AI developers and deployers. The government has proposed three options for introducing the mandatory guardrails: adapting existing regulatory frameworks to incorporate the guardrails; introducing new framework legislation to adapt existing regulatory frameworks across the economy; or enacting a new cross-economy, AI-specific Act.
There are pros and cons both to utilising a tight legislative framework and to pursuing a voluntary form of responsible AI governance. There is increasing momentum to introduce legislation developed specifically to manage concerns of AI ethics; however, as the government acknowledged in its January 2024 interim report, there is a need to balance allowing AI development to flourish with ensuring there are necessary protections in high-risk settings.
The various approaches to regulating AI extend to how businesses are currently deploying AI internally. Given the rapid pace at which AI is being incorporated into daily work life, businesses risk over-relying on a single provider of certain AI products. Businesses have been vocal at both ends of the spectrum. AI sceptics question whether the enormous investment AI requires is matched by the return, and whether AI can substitute for the unique skill sets and assurances provided by many professions (e.g. lawyers bound by client legal privilege or doctors by patient confidentiality); it is uncertain whether integrating AI into some aspects of these markets may have unforeseen consequences. Conversely, other businesses are championing the ‘AI boom’ and embracing the potential productivity benefits AI can offer as a method to complement and enhance current skills, as well as welcoming the new skill sets of employees who can utilise AI to its full potential.
Ultimately, AI-incorporated product offerings are still being tested against market realities and are vulnerable to being withdrawn from circulation if they are unsuccessful. Should this happen, it is crucial that businesses have a contingency plan to continue their operations without major disruption. While AI-incorporated services have unlocked many benefits and will continue to do so, it is vital that the removal of an AI product or supplier is not detrimental to your business. If you have deployed, or are planning to deploy, novel AI products in your business, it is important to understand the true utility you anticipate AI will have and to be prepared to pivot if the efficiency gains are not as expected.
Likewise, should the guardrails in high-risk settings become mandatory, businesses should be prepared to adapt their policies to align with the guardrails and to consider the guardrails in any procurement or introduction of AI products. The consultation paper for the proposed mandatory guardrails is clear in its intention to introduce a preventative regulatory structure by vetting AI at the beginning of the AI lifecycle and beyond. This may have downstream consequences for organisations that have positioned themselves to adopt developing AI, should the proposed guardrails prevent or delay the release of AI products to market.
At a time of wide-sweeping digital and cyber reform, on 12 September 2024 two highly anticipated Bills were introduced to Parliament: the Privacy and Other Legislation Amendment Bill 2024 (Privacy Bill); and the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 (Misinformation Bill).
The Privacy Bill was presented as the ‘first tranche’ of reforms, indicating that other proposed changes that the Government agreed, or agreed in principle, to in its response to the Attorney-General’s Department’s four-year review of the Privacy Act may follow in future Bills. The Attorney-General has recently indicated that the anticipated ‘second tranche’ of privacy reforms will be released as an exposure draft Bill before Christmas. It is predicted that the second tranche may contain crucial reforms, including removing the small business exemption from the Privacy Act and introducing a ‘fair and reasonable’ test to be applied in the handling of personal information.
The key changes proposed by the Privacy Bill and Misinformation Bill are discussed in our article here. One of the most significant changes in the Privacy Bill is the inclusion of a new tort for serious invasions of privacy which will afford individuals a right to sue in certain circumstances.
On 9 October 2024, the Government introduced a package of legislation in support of its 2023-2030 Cyber Security Strategy: the Cyber Security Bill 2024; the Intelligence Services and Other Legislation Amendment (Cyber Security) Bill 2024; and the Security of Critical Infrastructure and Other Legislation Amendment (Enhanced Response and Prevention) Bill 2024.
The Cyber Security Bill 2024, if passed, would result in Australia’s first standalone Cyber Security Act. The underlying intention of the Bill is to introduce a framework to manage and respond to cyber security risks, enhance protections and increase the Government’s visibility of cyber threats and the landscape for which it must be prepared.
The introduction of the Cyber Security Bill signals a response to calls for appropriate legislation to manage increasing cyber threats and attacks. With the recent onslaught of high-profile cyber breaches, and the civil penalty actions pursued by the OAIC as a result, the Cyber Incident Review Board will provide guidance on what protections are needed to prevent breaches. However, the Bill does not consider the repercussions of complex contracting obligations that may intersect with the limited use obligation, potentially depriving certain organisations of its protection.
The OAIC made the surprising decision to release new guidelines on privacy and the use and development of AI, indicating its clear intention to set minimum standards for how personal information should be handled in AI. The OAIC separated the guidelines into two documents: guidance on privacy and the use of commercially available AI products; and guidance on privacy and developing and training generative AI models.
The guidance on privacy and the use of commercially available AI products emphasises the importance of businesses completing detailed due diligence when adopting generative AI products and taking a ‘privacy by design’ approach. In practice, implementing a privacy by design approach includes updating internal policies, such as privacy policies, to ensure privacy collection notices are accurate (e.g. that they cover chatbots if used) and continuously monitoring AI products as they are updated or evolve.
The guidelines are intended to assist organisations to implement and use their AI systems in accordance with existing AI frameworks, such as the Voluntary AI Safety Standard, rather than to create additional legal obligations. They set out questions to consider before adopting commercially available AI products, such as whether you understand how the product works, whether it is appropriate for your intended use, what security risks it potentially introduces, and who will access data within the AI product. The guidance reminds businesses to consider their compliance with APP 3 (the collection of personal information by AI), APP 6 (the use and disclosure of personal information collected by AI) and APP 10 (the accuracy of information provided by AI). The OAIC makes clear that it does not recommend inputting personal information into public generative AI tools.
These guidelines are directed towards ‘developers’ of AI models and applications. A key message is that publicly available information may still constitute personal information, and should not be used to ‘train’ generative AI models without appropriate privacy notices being issued. This is the key thread of the guidelines: developers must keep their privacy obligations front of mind during the development process, particularly given the difficulty of ‘erasing’ learnt information from certain AI products. Like the guidance on commercially available AI, the guidelines remind developers to implement ‘privacy by design’ through practical steps such as excluding social media profiles from training material and considering APP 3 obligations regarding data scraping, as well as use and disclosure obligations.
The clear message to organisations is to act now and ensure compliance with voluntary guidelines or standards, to place yourself in the best position when new regulations come in, as it does not seem to be a question of if, but rather when.
There are several ways to define generative AI. The OECD definition of an Artificial Intelligence (AI) system is commonly adopted: “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments”. OECD, Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449.
An APP entity is defined as any agency or organisation under s 6(1) of the Privacy Act. This currently excludes small business operators with an annual turnover of less than $3 million, registered political parties and state or territory authorities. A small business operator will nonetheless be deemed an APP entity in certain circumstances, such as where it provides a health service or otherwise holds health information.
Department of Industry, Science and Resources, Voluntary AI Safety Standard. ‘Bias’ is defined as the systematic difference in treatment of certain objects, people or groups in comparison to others.