Global | Publication | April 2021
On April 21, 2021, the EU Commission adopted a proposal for a regulation (the AI Regulation) on “artificial intelligence systems” (AI systems), which it describes as “the first ever legal framework on AI.” The AI Regulation will impose significant obligations impacting businesses across many, if not all, sectors of the economy. The AI Regulation will prove controversial, touching off a legislative battle lasting at least until 2022.
The proposed AI Regulation will join other ambitious EU initiatives in the digital sector, such as the Data Governance Act, Digital Services Act and Digital Markets Act, currently working their way through the EU legislative process, as well as the forthcoming Data Act and the ongoing reform of EU antitrust policy. Some of the AI Regulation provisions read across to related provisions in other measures - for example, the practices prohibited for all AI systems (see below) are related to the Digital Services Act measures to combat harmful content on the Internet.
The AI Regulation defines “AI systems” broadly and imposes tailored obligations on actors at different parts of the value chain, from “providers” of AI systems to manufacturers, importers, distributors and users. The AI Regulation imposes especially strict obligations in relation to “high-risk AI systems.”
On the other hand, the AI Regulation includes a number of provisions intended to promote the development and uptake of AI systems in the European Union (EU). The AI Regulation also creates a new regulatory framework, with a European Artificial Intelligence Board overseeing and co-ordinating enforcement. The AI Regulation envisages a two-year period for application following adoption and publication of the final regulation, meaning that the new requirements could apply as early as 2024.
On Thursday, July 15, we will be hosting a webinar where members of our multi-disciplinary AI practice (including a computer scientist specializing in AI) will discuss the proposed AI Regulation. Click here for more information and to register.
The AI Regulation defines AI systems as “software that is developed with one or more of [certain] approaches and techniques . . . and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” The definition is likely to be subject to close scrutiny and possible amendment, but the Commission clearly intends to cast a wide net, capturing not only AI systems offered as stand-alone software products, but also products and services relying on AI systems directly or indirectly.
The techniques and approaches leading to software being identified as an AI system (listed in Annex I) include:
- machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
- logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; and
- statistical approaches, Bayesian estimation, and search and optimisation methods.
Although this definition may be questioned from a technical perspective, again the intention seems to be to cast a wide net. Interestingly, the definition of AI systems presented in Annex I is limited to existing methods, and does not allow for future innovations in how AI systems operate (for example, by the use of methods analogous to non-human forms of intelligence).
What are “high-risk AI systems”? AI systems identified as “high-risk AI systems” are those intended to be used as “safety components” of products, or which are themselves products, covered in EU legislation listed in Annex II (e.g., on machinery, toys, lifts/elevators, radio equipment, pressure equipment, marine equipment, cableways, gas-burning appliances, and medical devices), and AI systems listed in Annex III (those related to biometric identification and categorization of natural persons; management and operation of critical infrastructure; education and vocational training; employment, workers management and access to self-employment; access to and enjoyment of essential private services and public services and benefits; law enforcement; migration, asylum and border control management; and administration of justice and democratic processes). “Safety components” are defined as components of a product or system that fulfil a safety function and whose failure and/or malfunctioning endangers the health and safety of persons and property.
The AI Regulation will prohibit certain practices for all AI systems as violating EU values and fundamental rights. These include placing on the market or into service, or using, AI systems that:
- deploy subliminal techniques beyond a person’s awareness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause physical or psychological harm;
- exploit the vulnerabilities of a specific group of persons due to their age or physical or mental disability, in order to materially distort the behaviour of a member of that group in a manner that causes or is likely to cause harm;
- provide social scoring by or on behalf of public authorities leading to detrimental or unfavourable treatment; or
- use “real-time” remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, subject to limited exceptions.
The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement is permitted only insofar as strictly necessary for specified objectives, taking into account certain elements. Such uses must also comply with safeguards and conditions, in particular as regards temporal, geographic and personal limitations.
As mentioned, the practices prohibited in relation to all AI systems should be viewed in connection with proposed measures to address harmful content on the Internet, notably in the Digital Services Act proposal.
Requirements for high-risk AI systems include implementing a risk management system for the entire life cycle of a high-risk AI system to (among other things) eliminate or reduce risks through adequate design and development, implementing mitigation and control measures, providing information and training, and conducting testing.
The AI Regulation further imposes extensive obligations in relation to:
The obligations imposed on businesses distinguish between providers of high-risk AI systems, product manufacturers, authorised EU representatives appointed by non-EU providers, importers, distributors, users, and other third parties involved in the AI value chain:
The AI Regulation also imposes obligations on users of high-risk AI systems. Users:
Title III also includes extensive procedural requirements relating to the bodies responsible for performing conformity assessments and possible challenges to their decisions. Products including AI systems as safety components, or AI system safety components that are themselves products, must undergo third-party testing known as a “conformity assessment” before being placed on the market or put into service. The use of harmonised technical standards is encouraged to facilitate conformity assessments.
What information must AI system providers provide on registration? High-risk AI systems will be registered, and the EU will maintain a database including information inputted by AI system providers. This information, as set out in an annex, includes:
Data protection considerations
While the AI Regulation highlights the importance of preserving data protection and privacy, the proposal has received serious criticism from privacy activists who argue that the Commission has not gone far enough. Some privacy commentators contend that the AI Regulation does not provide sufficient legal limits on the uses of AI. They say that it relies too much on developers self-assessing, and self-regulating, their own compliance and mitigation measures. It is therefore likely that privacy concerns will be a hotly discussed and contested issue as the AI Regulation makes its way through the legislative process.
Certain AI systems are subject to additional transparency obligations. These include AI systems intended to interact with natural persons, systems for emotion recognition or biometric categorisation, and systems that generate “deep fakes.”
In addition to the extensive obligations imposed on the development, distribution and use of AI systems, the AI Regulation contains a number of measures intended to support innovation in this area. These include support for regulatory sandbox schemes, reduction of regulatory burdens for small and medium-sized enterprises and startups, and the creation of digital hubs and testing facilities.
The AI Regulation creates a fully-fledged regulatory and enforcement regime overseen by a European Artificial Intelligence Board working with national supervisory authorities entrusted with ensuring the application and implementation of the regulation.
The AI Regulation also provides for an EU database for stand-alone high-risk AI systems. Information will be entered by high-risk AI system providers and accessible to the public. High-risk AI system providers will also be required to establish post-market monitoring systems to collect, document and analyse data on the performance of high-risk AI systems and their compliance with the regulation.
Providers of high-risk AI systems will be required to report serious incidents and malfunctioning of those systems immediately after establishing a link between the system and the incident. National supervisory authorities will report to the Commission on their market surveillance activities and co-ordinate those activities. Member States objecting to a measure taken by another Member State, or the Commission, can trigger a “Union safeguard procedure.” National authorities can also require operators to take appropriate measures where an AI system presents a risk to health or safety or fundamental rights even though it complies with the regulation.
The Commission and Member States will encourage the creation of codes of conduct to foster the application of requirements applicable to high-risk AI systems to other AI systems based on appropriate specifications and solutions.
An elaborate system of penalties will be available for infringements of the AI Regulation. The highest fines, of up to 6% of total worldwide annual revenue, apply to non-compliance with the prohibited practices applicable to all AI systems and with the data and data governance requirements.
The AI Regulation represents a major legislative initiative that may serve as a template for similar measures around the globe. It imposes broad obligations in relation to all AI systems from providers to users; prohibits certain AI practices entirely; imposes special obligations in relation to high-risk AI systems; and creates a new framework of regulators and testing, monitoring and compliance processes. The broad definition of AI systems ensures that the AI Regulation will have a significant impact in all sectors of the economy, not only digital.
Before it can be adopted, however, the AI Regulation will join an already crowded digital docket and must pass through a complex and contentious legislative process. The AI Regulation is consistent with the broad outlines of EU policy set out in the Commission’s February 2020 AI strategy paper, so there are few, if any, complete surprises. However, the broad and potentially vague definitions highlight the difficulty of translating general principles into enforceable legislation.
Similarly, the extensive obligations imposed on providers, manufacturers, importers, distributors and users of AI systems will be daunting for all but the largest companies, and the new governance and enforcement regime will add to an increasingly dense regulatory forest in Europe. These obligations will likely shape regulatory expectations in relation to non-high-risk AI systems. As the AI Regulation and related measures move through the EU legislative process, hopefully the EU institutions will streamline and rationalise the new regulatory frameworks to minimise duplication and clarify areas of responsibility.
© Norton Rose Fulbright LLP 2023