Artificial intelligence (AI) systems process and analyze vast amounts of personal information. Left unchecked, these systems and their related development and deployment processes can give rise to significant privacy risks. While the public clamours for regulation to address these risks, organizations developing and deploying AI systems generally resist AI-focused regulation, fearing it will curtail the algorithmic power of these systems.
The Office of the Privacy Commissioner of Canada (OPC) has waded into the fray with its recently released set of 11 proposals, titled “Proposals for ensuring appropriate regulation of artificial intelligence”1 (Proposals), which contemplate significant changes to the federal Personal Information Protection and Electronic Documents Act (PIPEDA).2
Beyond privacy risks in AI systems, the OPC recognizes AI systems’ potential to perpetuate unlawful bias and discrimination when generating insights or making decisions affecting individuals. To address this, the OPC proposes formulating rules under PIPEDA targeted at AI system development and deployment, similar to those in the European Union’s General Data Protection Regulation (GDPR).3
The prescriptive nature of the Proposals, coupled with rules directed at a specific technology, marks a shift from the OPC’s traditional positioning of PIPEDA as “principles based” and “technology neutral.” PIPEDA’s current principles-based approach is often lauded by privacy practitioners as affording organizations the flexibility to address privacy and security compliance through a combination of technological and organizational measures. The risk of a rules-based approach, if adopted legislatively, is the loss of that flexibility and an accompanying risk of overly circumscribing AI system development and deployment.
It is imperative for experts and organizations developing and deploying AI systems to weigh in on the Proposals. The OPC’s public consultation on the Proposals ends on March 13, 2020.
Shifting from a technology-neutral stance
A complete overview of the Proposals is beyond the scope of this update. However, it is worth noting how the OPC has dealt with AI-specific concepts such as explainability and transparency:
- Right to an Explanation and Increased Transparency (Proposal #4). The OPC proposes that the openness principle of PIPEDA should include a right to explanation that would provide individuals interacting with AI systems with the reasoning underlying any automated processing of their data, and the consequences of that reasoning for their rights and interests.
Notably, the OPC invokes the right to explanation, or explainability, without defining that term. Experts often debate how to satisfy explainability (i.e., the extent to which the internal workings of an AI system can be explained). Furthermore, “explainability” is often used interchangeably with interpretability (i.e., the extent to which cause and effect can be observed in an AI system), leading to confusion as to how explainability is to be achieved.
It is also important to note that more complex AI algorithms do not easily support explainability, so there will have to be a trade-off between the right to an explanation and AI system performance (e.g., a more complex algorithm may produce better predictions that are less “explainable” than those of a simpler, readily “explainable” algorithm), as the sketch following this discussion illustrates.
As well, the OPC’s support for transparency is particularly troubling insofar as it would require an organization to lay bare its algorithms or training methods, which would likely interfere with the organization’s intellectual property rights. AI systems are protected by a combination of intellectual property rights, including trade secrets, which lose all value once made public. Given the complexity of AI systems, it is also not clear what benefit transparency would provide an individual in understanding how his or her personal information will be processed. Rather than transparency, a more nuanced approach would be to require an organization to provide reasons why a more complex, less “explainable” model was chosen over one that lends itself to explainability.
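To make the explainability trade-off concrete, the short Python sketch below contrasts a simple model whose learned weights can be read directly against a more complex ensemble whose internal reasoning is much harder to surface. This is a hypothetical illustration only: the synthetic dataset, the choice of scikit-learn models, and the comparison itself are our assumptions for demonstration, not anything prescribed by the Proposals.

```python
# Hypothetical sketch of the explainability/performance trade-off discussed
# above. Assumes scikit-learn is installed; the data and models are
# illustrative stand-ins, not drawn from the OPC Proposals.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for records about individuals: 1,000 rows, 10 features.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simpler model: each coefficient ties one feature to the decision, so a
# per-individual explanation is straightforward to produce.
simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Logistic regression accuracy:", simple.score(X_test, y_test))
print("Readable feature weights:", simple.coef_.round(2))

# More complex model: often more accurate, but its many underlying trees
# offer no comparably direct, human-readable reasoning.
ensemble = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("Gradient boosting accuracy:", ensemble.score(X_test, y_test))
```

On tasks like this, the ensemble will often edge out the linear model on accuracy while offering no per-individual rationale comparable to the regression’s coefficients, which is precisely the tension a legislated right to explanation would need to resolve.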
Changing the enforcement landscape
The OPC also included technology-neutral proposals that are in line with a global trend toward enhanced enforcement, one example being:
- Binding Orders and Financial Penalties (Proposal #11). The OPC notes that the significant risks posed to privacy and human rights by AI systems demand a proportionally strong regulatory regime. Accordingly, the OPC proposes amendments to PIPEDA that would empower it to make binding orders and impose consequential penalties for non-compliance with the law. While other jurisdictions have moved to stronger enforcement regimes, PIPEDA currently affords the OPC only limited enforcement powers. Any future amendment to PIPEDA will therefore likely include stronger enforcement powers for the OPC.
Call for written submissions
If you have comments on the Proposals, suggestions for changes to PIPEDA, or recommendations for sections you would like to see reviewed, please contact us. We would be happy to make written submissions to the OPC for its consideration.
We will continue to follow and report on developments on these proposed changes to PIPEDA.