On October 5, the Minister of Innovation, Science and Industry (ISED) wrote a letter to the Standing Committee on Industry and Technology proposing amendments to the Artificial Intelligence and Data Act (AIDA), which was introduced as part of Bill C-27 in June 2022. Further information on AIDA can be found in our previous update.
The letter suggests amendments in the following areas:
- specifying roles and obligations for different actors in the AI value chain;
- clarifying obligations for generative general-purpose AI systems, like ChatGPT;
- defining classes of systems that would be considered high impact;
- strengthening and clarifying the role of the proposed AI and Data Commissioner; and
- aligning AIDA with the EU AI Act and the frameworks of other advanced economies.
The voluntary Code of Conduct published by ISED covers the first two areas, as set out in our previous update. Accordingly, this update focuses on the last three areas.
High-impact systems
Proposed section 7 of AIDA requires assessing an AI system to determine whether it is a “high-impact system.” This assessment is important because it is the threshold that triggers most of AIDA’s obligations (i.e., assessing, mitigating and monitoring risk, keeping records, publishing descriptions of the AI systems in use, and providing notice of harm). However, AIDA does not define “high-impact system,” creating uncertainty as to which AI systems would actually be subject to these obligations.
In the letter, ISED does not suggest a definition of “high-impact system” but instead proposes seven classes of uses that would make a system high impact. These classes are:
- Employment – when an artificial intelligence system is used to make determinations in respect of employment, including recruitment, referral, hiring, remuneration, promotion, training, apprenticeship, transfer or termination.
- Provision of Service – when an artificial intelligence system is used to determine whether services should be provided, the type and cost of services, and how these will be prioritized.
- Biometric Information – when an artificial intelligence system is used to process biometric information in order to identify an individual (except for authentication with consent) or assess an individual’s behaviour or state of mind.
- Online Content – when an artificial intelligence system is used to moderate content on online communication platforms, such as search engines and social media services, or to prioritize the presentation of such content.
- Healthcare – when an artificial intelligence system is used in matters relating to healthcare or emergency services. This excludes a use referred to in any of paragraphs (a) to (e) of the definition of “device” in section 2 of the Food and Drugs Act that is in relation to humans.
- Courts – when a court or administrative body uses an artificial intelligence system to make a decision regarding an individual involved in proceedings before the court or administrative body.
- Law enforcement – when an artificial intelligence system is used to support a peace officer, as defined in section 2 of the Criminal Code, in carrying out law enforcement powers, duties and functions.
The letter also states that this list can evolve and be modified by the Governor in Council as technology changes. If these amendments are adopted, they would replace the proposed requirement in section 7 of AIDA to assess whether a system is high impact.
Alignment with the EU AI Act and the Organisation for Economic Co-operation and Development (OECD)
ISED has recommended that AIDA align with international frameworks such as the EU AI Act and the OECD AI Principles. ISED encourages adopting the OECD’s definition of “artificial intelligence,” which is:
“a technological system that, using a model, makes inferences in order to generate output, including predictions, recommendations or decisions.”
ISED has also recommended that the sections of AIDA imposing risk mitigation measures (e.g., section 8, which requires measures to identify, assess and mitigate the risks of harm or biased output, and section 9, which requires monitoring compliance with those mitigation measures) be replaced with new sections that better clarify the responsibilities of organizations developing, managing and putting into service high-impact systems.
These responsibilities have also been defined in the voluntary Code of Conduct. However, the letter mentions a new category of entity: “persons placing on the market or putting into service a high-impact system.” It remains to be seen what responsibilities will be placed on such players.
The letter also states that any organization that “substantially modifies” a high-impact system will be responsible for ensuring that pre-deployment requirements are met. The letter does not define “substantially modifies.”
The letter further sets out that organizations that conduct “regulated activities” (a defined term in AIDA) must prepare an accountability framework, which the AI and Data Commissioner can request at any time. The framework must include:
- the duties and obligations, as well as the reporting hierarchy, for all staff who assist in making the system available for use or who support its operations;
- policies and procedures regarding risk management of the system;
- policies and procedures on responding to individuals’ complaints about the system;
- policies and procedures about the data used by the system;
- the training provided to staff related to the system and the corresponding training materials;
- anything else prescribed by regulation.
Enhancing and clarifying the role of the AI and Data Commissioner (AIDC)
The letter also states that ISED will support amendments that further enhance and clarify the AIDC’s role and powers.
Key takeaways
ISED’s proposed amendments aim to bring clarity and enhancements to AIDA. Organizations should prepare for the upcoming legislation by setting up a process to capture the different ways in which they may be designing, developing or making available for use an AI system. Organizations should leverage their existing vendor risk management and security postures. In parallel, organizations can also create an accountability framework to guide the development of an AI compliance program.