Global | Publication | November 2024
In the face of the fast-growing development of AI across the globe, particularly Generative AI (GenAI), the G7 competition authorities and policymakers (Canada, France, Germany, Italy, Japan, the UK and the US) and the European Commission met in Italy on 3-4 October 2024 to discuss the main competition challenges raised by these new technologies in digital markets. The Joint Statement released following the Summit (alongside the AI working group's Discussion Paper) serves both as a risk map for businesses and as a road map of the authorities' and regulators' plans to preserve competition, protect innovation and promote responsible AI practices.
While large digital players remain at the centre of the authorities' attention, other digital companies should also remain watchful, as the authorities are seeking innovative ways to foster competition alongside protecting other fundamental societal values (e.g. consumer welfare, accurate information for consumers, copyright, privacy and data protection).
On the competition front, the main anti-competitive behaviours targeted include foreclosure of and exclusionary strategies around key AI inputs, the leveraging of existing market power into AI-related markets, and anti-competitive partnerships and acquisitions, each discussed below.
To tackle these concerns, the Summit participants have called for international and multidisciplinary cooperation. This has already translated into a complex legal environment for digital companies, with new layers being added on top of the traditional competition law enforcement tools, including new regulations and agencies specifically dedicated to digital markets, data, and AI issues.
Although this new Joint Statement is broadly consistent with the findings outlined in previous statements (e.g. the June 2024 Joint Statement on Competition in Generative AI Foundation Models and AI Products by US and UK enforcers and the European Commission, and the French Competition Authority's (FCA) GenAI Opinion), it goes further and paves the way for global action by the whole G7 community and their peers.
We focus below on the key messages arising from the Joint Statement in terms of guiding principles for effective competition and related enforcement tools.
The AI value chain forms a complex ecosystem including many layers of businesses, from upstream IT component suppliers and cloud service providers, to foundation model developers and downstream digital companies marketing AI-related services and applications to end customers (including consumers, businesses and public authorities).
Arguably more than in any other economic sector, all these layers are interrelated, and maintaining fair competition at each stage is critical to allow innovation to develop fully for the benefit of consumers, economies and societies.
AI is quickly expanding across sectors and for a variety of uses, including by businesses and public authorities for process automation, data analysis and decision-making, or by consumers for recreational use, and this is only the beginning.
Competition authorities will strive to ensure fair access to key inputs and fight against any foreclosure or exclusionary strategy by incumbent and large digital companies, especially those that enjoy a competitive advantage through being vertically integrated and active across multiple layers of the ecosystem.
Key AI inputs essentially include computing hardware (such as AI chips and graphics cards), cloud computing capacity, data, and skilled technical talent.
A number of relevant sectors have already been subject to scrutiny by competition authorities (e.g. the US Department of Justice’s (DOJ) investigation into NVIDIA related to AI chips, the US Federal Trade Commission’s (FTC) investigation into OpenAI related to ChatGPT, the FCA’s investigation into the graphics cards sector in September 2023 and its Opinion on cloud computing in June 2023).
Digital companies should therefore be careful about any behaviours that could amount to input foreclosure, including price fixing, supply or licensing restrictions, refusals of access, unfair or discriminatory contractual terms, self-preferencing, exclusive partnerships and tying practices.
In addition, with regard to human resources, the G7 Joint Statement innovatively warns against companies potentially reinforcing a dominant position through “acquiring critical talent”, which the European Commission’s Executive Vice President (Margrethe Vestager) referred to as “predatory hiring of talent” in a speech on 30 September 2024. This echoes the concerns recently voiced by several competition authorities (e.g. the FCA’s GenAI Opinion, p.8) about deals involving the recruitment of large digital teams (e.g. Microsoft/Inflection) or strategic employees of model developers (e.g. Microsoft/OpenAI).
To maximise innovation opportunities, the G7 Joint Statement puts forward several guiding principles, including transparency around AI technologies, which raise substantial questions in terms of data privacy and security and would have implications for the value of a company's investment.
Transparency regarding AI technologies would also be a way to manage access to data better, by informing authors and content editors that their data will be used and how, and by giving them the right to refuse. See, for example, the FCA's decision n°24-D-03 in the Google "related rights" case, in which Google was found to have used content from press agencies and publishers to train its AI foundation model Gemini (formerly Bard) without notifying them or giving them an effective possibility to opt out.
To tackle the challenges described above, the Summit participants stated that it is key to intervene in a timely manner to maintain open, fair and contestable markets as the development of AI is still in its infancy. They also called for effective and efficient enforcement of existing regimes, including competition law and other regulatory provisions.
Competition authorities will scrutinize any partnerships and cooperation agreements in the digital sector, particularly between the major digital platforms and AI companies, which can be for a variety of purposes including research and development, technology transfer, licensing, commercial, supply, or even capital investments.
Large digital platforms have already been able to achieve significant market power in digital markets, including by leveraging network effects coupled with the vast quantities of data acquired on various neighbouring markets in which they operate. As AI relies heavily on the volume and quality of datasets available, there is a risk that large digital platforms may use their competitive advantage to expand their market power to AI-related markets and capture its development.
Competition authorities may use both the rules prohibiting anti-competitive agreements and abuses of dominance to ensure that platforms do not create unjustified restrictions of competition in their agreements with AI companies or include abusive terms and conditions in supplying their products and services to the market.
In addition to the traditional rules on abuse of dominance, certain jurisdictions have developed specific provisions for the digital sector that enable competition authorities to exercise closer control over the behaviour of large digital groups. Germany, for instance, has adopted new rules (Section 19a ARC) allowing the German Federal Competition Authority (BKartA) to place under its special supervision any company designated as being of paramount cross-market significance for competition in the digital sector (to date, Microsoft, Alphabet, Amazon, Apple and Meta). In this context, AI is often used by the BKartA to demonstrate a company's market power and justify its supervision.
Competition authorities will also continue to monitor investments by digital companies in AI companies (such as Google/Anthropic, or Microsoft/OpenAI), including acquisitions of shares, assets or even the mere hiring of talent insofar as human resources generating sales can be deemed to be an “undertaking” or “enterprise” under merger control rules (e.g. Microsoft/Inflection, cleared by the UK Competition and Markets Authority in the absence of competition concerns). However, capturing such transactions through merger control can be relatively challenging, especially if the target company does not yet generate significant sales and therefore may not be caught by merger control thresholds requiring a certain level of turnover or market share/activity (e.g. Microsoft/OpenAI for which the BKartA declined jurisdiction considering that OpenAI lacked a substantial domestic activity). In January 2024, the FTC took the proactive step of issuing orders under Section 6(b) of the FTC Act to five companies involved in significant AI investments in order to build a better understanding of AI providers’ corporate partnerships and investments.
To address concerns around such transactions, often referred to as "killer acquisitions", the European Commission had encouraged national competition authorities (NCAs) in EU Member States to use the referral mechanism provided by Article 22 of the EU Merger Regulation (EUMR), to allow the Commission to review deals that met neither the EUMR turnover thresholds nor national thresholds. This mechanism was used, for instance, in Microsoft/Inflection by several Member States that could not review the deal under their national rules. However, the EU Court of Justice (ECJ) rejected this interpretation of Article 22 in Illumina/Grail, ruling that the referral mechanism cannot be used by NCAs for deals not qualifying for a national review, which led to the withdrawal of the referral requests in Microsoft/Inflection.
Competition authorities in Europe are therefore considering new ways to catch these transactions, including:
- introducing call-in powers or revised merger control thresholds to capture below-threshold deals;
- applying the rules on abuse of dominance to such deals ex post (a route the ECJ confirmed in Towercast); and
- applying the rules on anti-competitive agreements to such deals.
The latter two options may be challenging for competition authorities in the absence of clear evidence of anti-competitive objectives or effects. However, the European Commission and several NCAs (e.g. the FCA and the Belgian Competition Authority) have clearly stated that they will scrutinise killer acquisitions very closely and use all the tools at their disposal.
Regardless of the legal basis used (anti-competitive agreements, abuse of dominance or merger control), competition authorities already have all the necessary powers to impose or accept behavioural or structural remedies to restore competition, for instance by obliging digital platforms to grant access to key inputs or provide data on fair, reasonable and non-discriminatory (FRAND) terms (e.g. the FCA's Google Ads case, regarding Google's conditions for advertisers on its key online search advertising website, or the FCA's Meta Ads case, regarding the conditions imposed by Meta on competing advertisers to access its key application programming interfaces (APIs)). In some cases, relatively limited action by a competition authority, such as making known that it is investigating and requesting information, even without reaching the stage of a formal infringement decision, may be sufficient to persuade digital platforms to volunteer access to competitors.
However, given the rapid developments driven by AI-powered systems, monitoring mechanisms and associated costs will be carefully considered by competition authorities and digital companies when designing any remedy.
To supplement their existing competition law rules, the Summit participants have stressed the importance of complementary, adaptive and forward-looking policies, including dedicated regulations for digital markets and AI. At the European level, this includes the Digital Markets Act (DMA) and the AI Act, which have been adopted to fill potential enforcement gaps.
The DMA (which entered into force on 1 November 2022 and has applied since 2 May 2023) provides for a series of "do's and don'ts" obligations that certain large digital platforms (e.g. online search engines, app stores, messenger services) designated as "gatekeepers" must comply with, such as allowing third parties to interoperate with their services and access the data they generate, refraining from any self-preferencing practices, and not preventing users from linking up to businesses outside their platforms or uninstalling pre-installed software or apps. Although the DMA does not refer to AI-powered systems, it could also be used to regulate certain AI providers' behaviours by designating them as gatekeepers, or could be adapted to do so if needed (see, e.g., the EU Public Statement on Artificial Intelligence of the High-Level Group for the DMA and the FCA's GenAI Opinion, p.11, which encourage the European Commission to assess the possibility of designating companies providing access to generative AI models in the cloud (MaaS) as gatekeepers specifically for those services under the DMA).
The AI Act (which came into force on 1 August 2024) also regulates AI systems and models according to the risks they pose to the health, safety and fundamental rights of European citizens: the greater the risks, the more obligations the provider is subject to. To increase transparency, the AI Act requires operators and providers of AI systems to explain their impact on fundamental rights, and providers of general-purpose AI models to provide precise information on their models to the AI authorities (the EU AI Office and national AI authorities), AI system providers and/or the public, e.g. on their training and testing processes, capabilities and limitations, and the content used to train the model. In addition to its very broad definition of an AI system, the AI Act also has a very broad geographical scope, applying as soon as an AI system or its output is used in the EU. The aim of this extraterritorial effect is to subject foreign providers (such as OpenAI) to the European rules (for further information on the AI Act, see our publication on Artificial Intelligence Regulation).
In addition, AI systems will generally need to comply with other European data legislation, including the GDPR, the Data Governance Act and the Data Act, which are technology neutral. The aim of promoting the sharing and use of data, as in the Data Governance Act and the Data Act, can help smaller companies to grow and can facilitate switching. However, given the strength of the large digital platforms, whose data processing is far more extensive than that of smaller companies, it remains to be seen whether these regulations will have an immediate positive effect on competition.
A similar regulatory approach has been adopted in the US, where the Biden administration issued an executive order on AI regulation on 30 October 2023, and, on 27 April 2024, the US government established a Federal AI Council comprising the heads of large digital companies (e.g. OpenAI, Microsoft, Google, Nvidia), government representatives and researchers, to provide recommendations to ensure the safe adoption of AI in the US. To this end, US agencies have used the Defense Production Act to compel AI providers to report vital information related to AI, completed risk assessments related to the use of AI in every critical infrastructure sector, established task forces to develop policies related to AI innovation in specific industry sectors, and launched a pilot of the National Artificial Intelligence Research Resource to create national infrastructure for providing access to open and proprietary AI models and AI training resources.
Competition authorities are conscious that properly addressing AI competition concerns requires highly skilled personnel. The European Commission, for example, has set up a high-level expert group on AI and explicitly recruits AI specialists to increase its internal expertise and thereby improve its monitoring of compliance with existing regulations. In addition, an AI Office has been established to monitor compliance with the AI Act and to build expertise in the field of AI.
Competition authorities will also themselves use AI-powered tools to process cases more efficiently. In Germany, for example, a separate IT "Digital Services" department was set up in August 2024 to further develop AI tools. The European Commission is developing customised AI e-discovery tools for handling competition law cases, responding to complaints and analysing stakeholder feedback. However, the FTC has cautioned the US Congress that AI tools are not a replacement for the human labour needed to enforce regulations and that the government should "exercise great caution in either mandating the use of, or over-relying on" AI tools to combat online harms such as fraud, fake reviews, and cyberstalking because AI tools "are still rudimentary and can result in bias and discrimination."
Given the global reach of AI and digital markets, the Summit participants have reaffirmed their commitment to combine their efforts to address and adapt to AI-related challenges, not only within the G7 but also in other international fora, such as the International Competition Network (ICN), the OECD and UN Trade and Development (UNCTAD).
The Summit participants consider a multidisciplinary approach to be essential due to the importance of AI in many areas of life. In view of the importance of data, cooperation with data protection authorities will be critical, as underlined by the ECJ in Meta/BKartA (confirming that NCAs can analyse a dominant firm’s GDPR compliance when assessing an alleged abuse of dominance, but cannot depart from a decision by the competent national data protection authority when doing so). In certain jurisdictions, NCAs have already entered into formal cooperation agreements with data protection authorities (e.g. in France, Joint Statement of 12 December 2023 by the FCA and the CNIL).
In addition, interdisciplinary departments are being set up to monitor compliance with the relevant regulations and to strengthen targeted exchanges of information between different departments. For example, the DMA is enforced by a joint team from the European Commission's DG COMP and DG CONNECT. A High-Level Group has also been established under the DMA, made up of various bodies and networks, such as the Body of European Regulators for Electronic Communications, the European Data Protection Supervisor, the European Data Protection Board, the European Competition Network and the Consumer Protection Cooperation Network.
Digital companies will increasingly be confronted by a complex regulatory environment and a variety of enforcement tools that competition authorities and other regulatory agencies will not hesitate to use widely, with the risk of significant sanctions and other adverse consequences for infringements.
Relevant companies should therefore carefully identify any potential competition concerns in AI-related projects and seek to manage and mitigate those risks as early as possible.