Publication
Canada | Publication | October 24, 2023
On September 27, the Minister of Innovation, Science and Industry released a voluntary code of conduct specific to generative AI. This GenAI code follows the proposed Artificial Intelligence and Data Act (AIDA), which was introduced as part of Bill C-27 in June 2022 but is not likely to come into force until 2025.
Beyond risk mitigation, the Code of Conduct encourages its signatories to promote and build a robust and responsible AI ecosystem in Canada. The code sets out identified measures that anticipate the coming regulation under AIDA. These measures address two roles: developing a generative AI system (including methodology selection, collection and processing of datasets, model building, and testing) and managing its operations (including putting a system into operation, controlling the parameters of its operation, controlling access, and monitoring its operation).
Organizations in either role undertake to implement responsible generative AI practices to mitigate the adverse impacts associated with advanced generative AI systems.
The Code of Conduct establishes important benchmarks as undertakings for organizations. These undertakings fall into two tiers based on their application:
(1) all advanced generative systems, and
(2) advanced generative systems that are made available for public use.
Both tiers apply six core principles to all organizations: accountability, safety risk assessments, fairness and equity considerations, transparency, human oversight and monitoring, and maintaining system validity and robustness. Tier two extends these measures for organizations making such systems broadly available, acknowledging the heightened risks associated with public use. The tiers thus distinguish what developers and managers must do depending on whether the generative AI system is available for public use.
Regardless of your organization’s role, and whether or not the advanced generative system is for public use, certain steps must be taken to align with the Code of Conduct. These include implementing risk management policies, procedures and training proportionate to the nature and risk profile of the activities; sharing best practices with organizations playing complementary roles in the ecosystem; and performing comprehensive assessments of reasonably foreseeable risks along with mitigation measures.
For a further breakdown of the requirements, see the table below ("All" = all advanced generative systems; "Public" = advanced generative systems available for public use):

| Principle | Measure | All: Developers | All: Managers | Public: Developers | Public: Managers |
| --- | --- | --- | --- | --- | --- |
| Accountability | Establish a risk management framework that aligns with the nature and risk level associated with the activities in question. | ✔ | ✔ | ✔ | ✔ |
| Accountability | Share best practices with other organizations in the ecosystem. | ✔ | ✔ | ✔ | ✔ |
| Accountability | Assess the system through multiple sources, including third-party audits, before releasing it. | ✖ | ✖ | ✔ | ✖ |
| Safety | Conduct an assessment to identify reasonably foreseeable potential adverse impacts, including risks associated with inappropriate or malicious use of the system. | ✔ | ✔ | ✔ | ✔ |
| Safety | Mitigate risk by implementing proportionate measures to safeguard against misuse. | ✔ | ✖ | ✔ | ✖ |
| Safety | Provide downstream developers and managers with guidance on proper system usage, including details on the measures taken to mitigate risks. | ✔ | ✖ | ✔ | ✖ |
| Fairness and Equity | Evaluate datasets used for training to ensure data quality and minimize potential biases. | ✔ | ✖ | ✔ | ✖ |
| Fairness and Equity | Before release, use a variety of testing methods and measures to evaluate and reduce the risk of biased output. | ✔ | ✖ | ✔ | ✖ |
| Transparency | Publish information on the capabilities and limitations of the system. | ✖ | ✖ | ✔ | ✖ |
| Transparency | Develop and implement a reliable, publicly accessible method to identify media content produced by the system, with an emphasis on audio-visual content (e.g., watermarking). | ✖ | ✖ | ✔ | ✖ |
| Transparency | Provide a detailed account of the categories of training data used in developing the system, along with the steps taken to identify and mitigate risks. | ✖ | ✖ | ✔ | ✖ |
| Transparency | Ensure that systems that could be mistaken for humans are clearly identified as AI systems. | ✖ | ✔ | ✖ | ✔ |
| Human Oversight and Monitoring | Monitor the operation of the system for harmful uses or impacts, including through third-party feedback channels; inform the developer of any harm detected and implement usage controls as needed to mitigate it. | ✖ | ✔ | ✖ | ✔ |
| Human Oversight and Monitoring | Maintain a record of incidents reported after deployment, and provide timely updates to ensure effective mitigation measures remain in place. | ✔ | ✖ | ✔ | ✖ |
| Validity and Robustness | Before deployment, employ a diverse range of testing methods across various tasks and contexts to measure performance and ensure robustness. | ✔ | ✖ | ✔ | ✖ |
| Validity and Robustness | Employ adversarial testing (i.e., red-teaming) to identify vulnerabilities. | ✔ | ✖ | ✔ | ✖ |
| Validity and Robustness | Conduct a cybersecurity risk assessment and apply appropriate measures to reduce risks, including those related to data poisoning. | ✔ | ✖ | ✔ | ✔ |
| Validity and Robustness | Conduct benchmarking to measure the model's performance against recognized standards. | ✔ | ✖ | ✔ | ✖ |
As part of their commitment to a robust AI ecosystem, signatories also pledge to prioritize human rights, accessibility, environmental sustainability, and global challenges when developing and deploying AI systems. Notably, the code contains no specific requirements for meaningful explainability – a prominent feature of both the proposed Consumer Privacy Protection Act and the General Data Protection Regulation. Nor does the code give users the ability to “opt out,” even though transparency is one of its core principles.
In essence, the Code of Conduct represents a proactive approach to AI governance. It provides a framework for organizations to act responsibly and collaboratively, serving as a crucial interim measure while awaiting the formalization of AIDA.
© Norton Rose Fulbright LLP 2023