On 12 July 2024, the Artificial Intelligence (AI) Regulation, referred to as the AI Act, was published in the Official Journal of the European Union. Its general application date is 2 August 2026, but the AI Act also provides for staggered application dates:
- Earlier application as of 2 February 2025 with respect to the prohibition of certain AI practices, and as of 2 August 2025 for, inter alia, general-purpose AI models (including those posing systemic risk) and the penalty provisions;
- Delayed application to 2 August 2027 for some provisions regarding high-risk AI systems.
What is the purpose of the AI Act?
This new regulation aims to establish harmonised rules to ensure that AI systems in the EU respect fundamental rights and provide a high level of protection of health and safety, while also fostering investment and innovation in the field of AI.
Who does it apply to?
The AI Act directly affects businesses operating within the EU, whether they are providers (i.e. those developing the systems), users (referred to as “deployers”), importers, distributors, or manufacturers of AI systems. The legislation provides clear definitions for the various actors involved in AI systems or practices and holds them accountable for compliance with the new rules. This means that all stakeholders must ensure that their AI practices comply with the requirements outlined in the AI Act.
The AI Act also applies extraterritorially to companies not established in the EU. Providers must comply when placing AI systems or general-purpose AI models on the market or putting them into service in the EU, regardless of where they are established. Similarly, importers, distributors, and manufacturers serving the EU market are also caught. Providers and deployers are likewise caught where the output of their AI systems is used in the EU, regardless of where they are located.
What are the requirements?
The regulatory framework defines four levels of risk for AI systems:
- The AI Act prohibits some types of AI considered to present an unacceptable risk (applicable as of 2 February 2025), such as emotion recognition systems in the workplace and in education or inappropriate use of social scoring.
- For AI systems deemed to be “high-risk”, the AI Act imposes a wide range of obligations on providers, including risk assessments, governance, maintaining documentation, public registration, conformity assessments and declarations, and ensuring resilience against unauthorised alterations. Deployers must comply with much more limited obligations, such as implementing technical and organisational measures to ensure the provider’s restrictions on use are followed and providing appropriate and competent human oversight. High-risk AI systems must undergo their conformity assessment before being placed on the market or put into service, and providers must establish a post-market monitoring system and report serious incidents to market surveillance authorities.
- Additionally, there are various transparency obligations that require individuals to be informed where they are interacting with AI systems or AI generated content. These were initially framed as obligations for “limited risk” AI systems, though in practice these transparency obligations cut across the categories.
- Many types of AI systems will be considered “minimal risk” and will not be subject to significant obligations under the AI Act itself, though existing regulatory frameworks (such as data protection, employment, financial services, and competition) continue to apply.
Meanwhile, providers of “general-purpose AI models”, such as large language models, will need to meet requirements designed to allow providers and deployers incorporating them into AI systems to better understand their capabilities and limitations, and to address other inherent issues such as potential infringements caused by their training (the latter addressed by putting in place a policy to respect EU copyright law and publishing a summary of the content used to train the model). General-purpose AI models posing “systemic risk” must comply with additional obligations, such as documenting and disclosing significant incidents and mitigating such systemic risks.
What are the sanctions?
The penalties for non-compliance with the AI Act are significant. They range from €7,500,000 to €35,000,000, or from 1% to 7% of the company's global annual turnover, depending on the severity of the infringement. Therefore, it is crucial for companies to ensure that they fully understand the provisions of the AI Act and comply with its requirements to avoid such sanctions.
What should you do to prepare?
Companies must establish appropriate governance and monitoring measures to ensure that their AI practices comply with the AI Act. To initiate the compliance process, companies should begin by compiling an inventory of their current AI systems and models. Organisations that do not yet have such an inventory should assess their current status to understand their potential exposure; even if they are not currently using AI, it is highly likely that this will change in the coming years. Initial identification can begin from an existing software/applications catalogue or, in its absence, through surveys conducted among various departments, in particular the IT and risk departments.
Once the inventory is established, organisations should:
- classify AI systems and models according to risk levels and the organisation’s role;
- raise awareness;
- establish where key stakeholders will fit in and what information they need;
- assign responsibilities as required;
- create a library of legal risks with playbooks to allow some evaluation by non-experts;
- stay up-to-date on developments; and
- establish an ongoing governance process.
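As a purely illustrative sketch of the first two steps above, an inventory could be kept as structured records tagging each system with the organisation's role and a provisional risk category. The systems, department names, and review dates below are hypothetical, and the risk labels are simplified placeholders, not a legal classification:

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "unacceptable risk"
    HIGH = "high-risk"
    TRANSPARENCY = "transparency obligations"
    MINIMAL = "minimal risk"

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AISystemRecord:
    name: str
    description: str
    role: Role
    risk_level: RiskLevel
    owner_department: str
    review_due: str  # next internal compliance review (ISO date)

# Hypothetical example entries for illustration only
inventory = [
    AISystemRecord(
        name="CV screening assistant",
        description="Ranks incoming job applications",
        role=Role.DEPLOYER,
        risk_level=RiskLevel.HIGH,  # provisional tag pending legal review
        owner_department="HR",
        review_due="2025-06-01",
    ),
    AISystemRecord(
        name="Website chatbot",
        description="Customer support chat",
        role=Role.DEPLOYER,
        risk_level=RiskLevel.TRANSPARENCY,  # users must know they interact with AI
        owner_department="Customer Service",
        review_due="2025-09-01",
    ),
]

# Surface the systems needing the heaviest compliance work first
high_risk = [r for r in inventory if r.risk_level is RiskLevel.HIGH]
for record in high_risk:
    print(f"{record.name} ({record.owner_department}): review due {record.review_due}")
```

Even a simple structure like this gives the governance process something concrete to assign responsibilities against and keep up to date.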