The AI Act (AIA) is the EU’s first comprehensive legislation setting harmonised rules for Artificial Intelligence, and will impose new and significant obligations on those developing and using AI both inside and outside the EU. The UK is also exploring the future regulation of AI.
Is the EU text finalised yet? When will it apply?
Yes. The AIA was published in the EU’s Official Journal on 12 July 2024 and entered into force on 1 August 2024.
Prohibitions (outlined below), as well as provisions requiring organisations to promote AI literacy, will apply from 2 February 2025. Provisions on general-purpose AI models, along with those on governance and penalties, will apply from 2 August 2025. Most other provisions will apply from 2 August 2026.
Who does it apply to?
The AIA applies to all those who develop, distribute, and use AI systems that will affect people in the EU.
Most of the obligations under the AIA fall on providers, the vendors who develop the AI systems. Deployers who use AI systems must also comply with certain obligations around their use, and some obligations fall on importers and distributors who place AI systems on the market or put them into service in the EU.
Providers supplying the EU market must comply regardless of whether they are located in the EU. Providers and deployers outside the EU must also comply where the output of their AI systems is used in the EU.
What are the key provisions?
The AIA provides tiered regulation of AI systems:
Prohibited AI practices. The AIA prohibits certain AI practices, such as subliminal, manipulative or deceptive techniques; exploitation of the vulnerable; biometric categorisation to infer sensitive or protected characteristics; social scoring that leads to detrimental treatment in an unrelated social context or that is disproportionate to the social behaviour; real-time remote biometric identification in publicly accessible spaces for law enforcement; the creation of facial recognition databases through untargeted image scraping; and emotion recognition in the workplace or educational contexts.
High-risk AI systems. These are systems that create health and safety risks (e.g. AI systems intended to be used as safety components of products such as medical devices or protective equipment) or risks to fundamental rights (e.g. AI used in employment, to evaluate creditworthiness, or for risk assessment and pricing in life and health insurance). Systems in these categories are exempted where there is no “significant risk” of harm to the health, safety or fundamental rights of natural persons; however, an AI system is always considered high-risk if it performs profiling of natural persons.
Providers of high-risk systems will be subject to numerous obligations, including implementing risk management and quality management systems to identify and analyse known and foreseeable risks, undergoing the required conformity assessment procedure and complying with serious incident notification obligations.
Providers and (to the extent they control an AI system) deployers of high-risk AI systems will be required to ensure human oversight of the system. Both must keep event logs, and the deployer must notify the provider of the system if there is a serious incident.
Deployers of such systems that are bodies governed by public law or private operators providing public services, or that deploy an AI system to assess creditworthiness or to carry out pricing or risk assessments for life and health insurance, will be required to undertake a fundamental rights impact assessment.
General purpose AI models (GPAIs). These are AI models trained on broad data at scale which display significant generality and can competently perform a wide range of distinct tasks. They typically perform functions such as image and speech recognition, audio and video generation, pattern detection, question answering and translation, and they may be integrated into high-risk AI systems.
Providers of GPAIs must draw up and maintain technical documentation of the model, including its training and testing process, the results of its evaluation and its energy consumption. They must provide such technical documentation to providers of AI systems who intend to integrate the GPAI into their own systems.
Providers must also put in place a policy to comply with EU copyright law.
A GPAI will be classified as having systemic risk if it (a) has “high-impact capabilities” or (b) is designated as such by a Commission decision. A GPAI is presumed to have high-impact capabilities if the cumulative amount of compute used for its training, measured in floating point operations (FLOPs), is greater than 10^25.
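For illustration only, the sketch below shows how a provider might sanity-check a model against the 10^25 FLOPs threshold. The AIA does not prescribe any estimation formula; the sketch assumes the common back-of-the-envelope heuristic of roughly 6 FLOPs per parameter per training token, and the model figures used are hypothetical.

```python
# Illustrative sketch only: estimates training compute using the common
# heuristic FLOPs ~= 6 * parameters * training tokens (an assumption,
# not a formula defined in the AI Act) and compares the result to the
# 10^25 FLOPs presumption threshold for "high-impact capabilities".

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * parameters * training_tokens

def presumed_high_impact(parameters: float, training_tokens: float) -> bool:
    """True if estimated cumulative training compute exceeds 10^25 FLOPs."""
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70-billion-parameter model trained on 15 trillion
# tokens uses roughly 6 * 70e9 * 15e12 = 6.3e24 FLOPs, below the threshold.
if __name__ == "__main__":
    flops = estimated_training_flops(70e9, 15e12)
    print(f"Estimated compute: {flops:.2e} FLOPs; "
          f"presumed high-impact: {presumed_high_impact(70e9, 15e12)}")
```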
In addition to the general obligations on GPAI providers, providers of GPAIs with systemic risk must provide evaluation strategies and results, undertake adversarial testing, mitigate systemic risks, document and report serious incidents and ensure adequate levels of cyber security.
Transparency obligations. Transparency obligations apply under the AI Act in certain situations where lawmakers considered it important to flag that content is AI-generated or that a user is interacting with an AI system (for example, chatbots and deepfakes). These obligations are sometimes described as applying to AI systems that present ‘limited risk’, though in practice they cut across the other categories and can apply to high-risk AI systems, general purpose AI models, or AI systems that would otherwise be considered ‘minimal risk’.
Minimal risk AI systems. These systems include applications like AI-enabled video games or spam filters. The AIA encourages providers of these systems to apply voluntary codes.
Does the UK have anything similar?
On 6 February 2024, the UK government published its response to the consultation on the AI White Paper published by the Department for Science, Innovation and Technology in March 2023. The government proposed to use existing laws to regulate AI, with existing regulators taking the lead in implementing a “pro-innovation”, “outcomes-based” approach to AI regulation.
What are some of the commercial impacts of the EU AI laws?
Businesses will need to consider establishing an inventory of all AI systems used and their risk classifications. They will also need to consider putting in place a comprehensive AI risk-assessment and management programme, alongside guidelines and instructions for the responsible use and, if relevant, development of AI systems; these should address accuracy and safety, fairness and ethical use, explainability, oversight and security. Triggering an AIA category will mean enhancing the rigour and documentation of these processes and, wherever possible, tracking to standards approved by the Commission, but the AIA requirements reflect the principles that most regulators would expect to be applied (perhaps with less rigour) to any substantial AI application. Robust AI governance is also necessary to comply with the many existing regulatory obligations that apply to AI, including data protection, competition, employment, and financial services regulation.
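As a purely illustrative aid, the sketch below shows one way an entry in such an AI-system inventory might be recorded. The field names and structure are assumptions for illustration; the AIA does not mandate any particular schema.

```python
# Illustrative sketch of an AI-system inventory entry; the fields and
# category names are assumptions, not an AIA-mandated schema.
from dataclasses import dataclass, field
from enum import Enum

class AIARiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    TRANSPARENCY = "transparency obligations"
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    name: str
    role: str                      # e.g. "provider", "deployer", "importer"
    intended_purpose: str
    risk_category: AIARiskCategory
    mitigations: list[str] = field(default_factory=list)

# Hypothetical example entry: an employment use case, which the AIA
# treats as high-risk.
cv_screener = AISystemRecord(
    name="CV screening tool",
    role="deployer",
    intended_purpose="Shortlisting job applicants",
    risk_category=AIARiskCategory.HIGH_RISK,
    mitigations=["human oversight", "event logging", "provider due diligence"],
)
```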
Where can I learn more?
Click here to download our full report on the EU/UK Digital and Cyber Strategies.