Global | Publication | July 2017
On June 21-23, 2017, the OECD held a roundtable on the theme of “Algorithms and Collusion,” as part of a wider work stream on competition in the digital economy. The OECD roundtable reflects a shift in the debate over the antitrust implications of big data from concerns about the potential for companies to hoard big data, creating barriers to entry and market power, to concerns about companies with access to the same or comparable big data using algorithms to collude.
The OECD’s background paper (the OECD note) and the other papers prepared for the roundtable constitute the most authoritative discussion of algorithms and collusion to date. These materials discuss how algorithms may change market structures and behavior in ways not contemplated by traditional antitrust thinking, how antitrust authorities can address these issues, and a range of potential regulatory responses. The OECD’s proposals are wide-ranging and potentially very significant: they include, for instance, expanding the concept of “agreement” subject to antitrust rules, searching out anti-competitive conduct in new markets, expanding merger review (especially in relation to coordinated or conglomerate effects) and considering new regulatory initiatives.
So far, the individual authorities participating in the roundtable seem unpersuaded of the need for dramatic reforms of the type proposed by the OECD. The EU background paper comments that “pricing algorithms can be analysed by reference to the traditional reasoning and categories used in EU competition law” (para. 35). On the other hand, these submissions point to a number of specific practices not discussed in detail in the OECD note, and all participating authorities stressed the importance of algorithm issues and the need for further study.
This briefing summarizes the issues and proposals discussed at the roundtable, following the structure of the OECD note for convenience. Different perspectives or comments by the individual jurisdictions participating in the roundtable are also discussed below.
As noted, the OECD roundtable reflects a shift in the debate over the antitrust implications of big data. Much of the thinking about big data issues, as summarized in the May 2016 Franco-German study on competition law and data (the Franco-German Study), related to the risk that the accumulation of big data would lead to barriers to entry and market power, with the potential for exclusionary abuses and implications for merger review involving data-rich companies. The risk that big data could facilitate express or tacit collusion was discussed briefly in the Franco-German Study and mentioned only in passing in the European Commission’s sector inquiry on e-commerce. For a detailed discussion of the issues raised in the Franco-German Study, see here, and for a discussion of the Commission’s e-commerce sector inquiry, see here.
Big data-related collusion risks have since become a key focus of debate, driven to a large extent by Professors Ariel Ezrachi and Maurice Stucke’s November 2016 book, “Virtual Competition,” and other writings. The OECD roundtable, attended by Prof. Ezrachi, is a testament to the increasing importance antitrust authorities attach to these issues. The nine official papers discussed at the roundtable - a background note by the OECD Secretariat, separate submissions by the EU, the UK, the US, Italy, Russia, Singapore, Ukraine and the Business and Industry Advisory Committee (BIAC) - together with three academic submissions, including one by Professors Ezrachi and Stucke, represent the most authoritative analysis of these issues to date.
The OECD note summarizes the characteristics, uses and benefits of algorithmic pricing, pointing out that “recent developments in artificial intelligence and machine learning have brought algorithms to a new level, allowing computers to solve complex problems, make predictions and take decisions more efficiently than humans.” Algorithms use predictive analytics to estimate demand, forecast price changes, predict customer behavior and preferences, assess risks and forecast shocks such as the entry of new firms, variations in exchange rates or even natural disasters. They can thus improve companies’ decision-making and enable them to plan more efficiently and to develop more innovative and customized services. Algorithms can also help optimize business processes, reducing production and transaction costs, segmenting consumers or setting optimal prices.
Algorithms’ use for predictive analysis and optimization of business processes has multiple practical applications that are observed across many industries, such as fraud prevention, supply-chain optimization, targeted advertising, product recommendation, corporate security and dynamic pricing. In addition, industry-specific applications based on machine and deep learning principles can bring breakthrough innovations and revolutionize markets. The widespread adoption of algorithms by businesses is not only transforming the way in which companies operate and interact with each other, but it is also significantly affecting the evolution of markets towards global digitalization, triggering a domino effect that promotes a wider use of algorithms.
The growing use of algorithms puts strain on the traditional antitrust concepts of anti-competitive agreements, concerted practices and tacit collusion. While algorithms can be used in the implementation of an agreement that would be illegal under traditional antitrust principles, the use of algorithms may change competitive behavior, potentially enabling firms to replace explicit collusion with tacit co-ordination without the need for an agreement or concerted practice.
A finding of an illegal agreement traditionally requires some proof of direct or indirect contact showing that firms have not acted independently. Tacit collusion normally falls outside antitrust law, even though certain market conditions (i.e., transparent markets with few sellers and homogeneous products) may enable firms to engage in supra-competitive price strategies without any such contact. Indeed, one of the main risks of algorithms is that they expand the grey area between unlawful explicit collusion and lawful tacit collusion, allowing firms to sustain profits above the competitive level more easily without necessarily having to enter into an agreement.
The OECD note discusses ways in which computer algorithms may change the structural and supply characteristics of digital and some traditional markets to make them more prone to collusion.
Two of the most important structural characteristics that affect the risk of collusion are the number of firms and barriers to entry. A large number of firms not only makes it harder to identify a “focal point” for co-ordination, but it also reduces the incentives for collusion, as each player would receive a smaller share of any supra-competitive gains that a collusive arrangement could extract. Similarly, in the absence of entry barriers, collusion cannot be sustained over time, as any increase in profits would increase the incentives to deviate from the collusive equilibrium and attract new entrants.
It is unclear how algorithms may affect these two structural characteristics. Some of the industries where algorithms are used to set dynamic prices, segment consumers or improve product quality – such as search engines, online marketplaces, discount stores, booking agencies, airlines, road transport and social networks – have a small number of large players. However, many of these industries are also characterized by natural barriers to entry, such as economies of scale, economies of scope and network effects, which allow companies to grow, collect large amounts of data and develop more accurate algorithms. On the other hand, algorithms may also lower barriers to entry, for example by allowing potential entrants to analyze market opportunities and reduce uncertainty.
In addition, algorithms may reduce the importance of the number of competitors in the market as a relevant factor for collusion. Collusion is more easily sustainable in traditional markets if there are few competitors, because a small number of firms can more easily agree on the terms of co-ordination and monitor and punish deviations. Algorithms could allow coordination, monitoring and punishment to take place in less concentrated markets.
Two other important structural characteristics making industries more prone to collusion are market transparency and frequency of interaction. In transparent markets, companies can more easily monitor each other’s actions, and frequent interactions enable them to punish deviations. The OECD note indicates that algorithms likely enhance both factors for collusion. In regard to transparency, the use of algorithms requires the collection of detailed real-time data, so companies have an incentive to develop automated methods to collect and store data, ready to be processed, without the need for human action, for instance through online cookies, smart cards, bar codes, voice recognition, radio frequency identification and other technologies. As a result, market participants can observe rivals’ and consumers’ actions in real-time. In regard to frequency of interaction, algorithms allow firms to make business decisions such as price adjustments very quickly, allowing for an immediate retaliation to deviations from collusion.
In addition, the Italian Competition Authority noted that pricing algorithms may make collusion more likely in smaller, more fragmented markets, such as many markets in Italy. The Italian Competition Authority also noted that an analysis of algorithms’ effects on the susceptibility of markets to collusion needs to take account of the relationship between intra-platform and inter-platform online competition.
Apart from their structural implications, the OECD note recognizes algorithms’ possible effects on demand and supply factors. The OECD note acknowledges that the use of algorithms can enable consumers to improve their decision-making processes and to buy products more cheaply, but comments that firms’ use of algorithms is more likely to affect supply factors than demand factors.
As regards supply factors, the OECD note points out that innovation tends to reduce the value of collusive agreements, as well as the ability of less innovative firms to retaliate. Since algorithms are an important source of innovation, companies may face competitive pressure to develop the best-performing algorithm. Similarly, algorithms may help companies differentiate their services or the production process, leading to cost asymmetry. These factors may make collusion harder to sustain, counterbalancing the enhanced risk of collusion resulting from more transparent markets where companies react fast.
In summary, it is unclear whether algorithms increase or reduce the likelihood of collusion. The OECD suggests that in transparent markets where firms can adjust their decisions very fast, collusion may be sustainable regardless of factors such as the number of firms or the risk that innovations will disrupt the market in the future. The OECD cautions that, even if collusion is theoretically an equilibrium strategy, that does not mean that collusion will occur in practice. Still, according to the OECD, there is a clear risk that current changes in market conditions may facilitate anti-competitive strategies, such as collusion and other market manipulations.
The OECD note proceeds to discuss the risks associated with different ways firms can use algorithms, distinguishing between monitoring, parallel, signaling and self-learning algorithms.
The most obvious role of algorithms as facilitators of collusion is in monitoring competitors’ actions in order to enforce a collusive agreement through the collection of data on competitors, screening for potential deviations and programming retaliations. For this purpose, data needs to be aggregated in an easy-to-use format that can be regularly updated, for instance by price comparison websites that receive data directly from online companies or use web scraping to extract data from websites. As new automatic data collection processes become available, these technologies may be extended from electronic commerce to traditional markets.
Data collected automatically can be monitored and combined with pricing algorithms that automatically retaliate to deviations from an agreed price. For instance, companies may program pricing algorithms to execute trigger strategies, which consist of setting the agreed price as long as all rivals do the same, but reverting to a price war as soon as any firm deviates. Since algorithms are very fast at detecting and punishing deviations, however, firms would not have any incentive to actually deviate, so price wars between algorithms would likely not be observed in practice unless they are triggered by algorithmic errors.
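To make the mechanism concrete, below is a minimal sketch, in Python, of the kind of “grim trigger” pricing rule described above. The prices, tolerance and function names are illustrative assumptions made for this briefing, not drawn from the OECD note or from any real pricing system.

```python
# Minimal, illustrative sketch of a "trigger strategy" pricing rule.
# AGREED_PRICE, PUNISHMENT_PRICE and TOLERANCE are invented values.

AGREED_PRICE = 100.0      # hypothetical supra-competitive focal price
PUNISHMENT_PRICE = 70.0   # hypothetical competitive price used as "punishment"
TOLERANCE = 0.01          # small tolerance for rounding differences

def next_price(rival_prices: list, punishing: bool) -> tuple:
    """Return (my_price, punishing) for the next period.

    Stay at the agreed price while every rival does the same; once any rival
    is observed pricing below it, switch permanently to the punishment price
    (a "grim trigger").
    """
    deviation = any(p < AGREED_PRICE - TOLERANCE for p in rival_prices)
    if punishing or deviation:
        return PUNISHMENT_PRICE, True
    return AGREED_PRICE, False

# Example: rivals hold the line in period 1; one deviates in period 2.
price, punishing = next_price([100.0, 100.0], punishing=False)      # -> 100.0
price, punishing = next_price([100.0, 95.0], punishing=punishing)   # -> 70.0
```

Because retaliation is immediate and permanent, no rational rival gains from deviating, which is precisely why, as the OECD observes, algorithmic price wars should rarely be visible in practice.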
In summary, monitoring algorithms may facilitate illegal agreements and make collusion more efficient by avoiding unnecessary price wars, but they do not eliminate the need for explicit communication during the establishment and implementation of the cartel. Thus, this behaviour can be addressed using traditional antitrust tools.
Collusion is difficult in highly dynamic markets because continuous changes in supply and demand require frequent adjustments of prices, output and other trading conditions. Firms could avoid this barrier to collusion by using algorithms to automate their decision-making processes, so that prices react simultaneously to any changes in market conditions. Competition concerns might arise if companies start sharing the same pricing algorithm, which may be programmed to set anti-competitive prices.
While sharing pricing algorithms with rivals could violate competition rules, parallel behavior could be coordinated more subtly without explicit communication, for instance if firms outsourced the creation of algorithms to the same IT companies and programmers, creating a sort of “hub and spoke” scenario. A collusive outcome could also be achieved if companies use pricing algorithms to follow in real time a market leader whose algorithm fixes prices above the competitive level.
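A hedged sketch of the follow-the-leader dynamic is set out below. The helper names (fetch_leader_price, publish_price) are hypothetical placeholders for whatever price feed, scraper or repricing interface a firm might actually use; the point is simply that the follower’s logic reduces to copying the leader’s latest price.

```python
import time

def follow_leader(fetch_leader_price, publish_price,
                  update_interval_seconds: float = 60.0,
                  periods: int = 10) -> None:
    """Reprice to the market leader's latest observed price each period.

    fetch_leader_price and publish_price are caller-supplied placeholders
    standing in for a real price feed/scraper and repricing interface.
    """
    for _ in range(periods):
        leader_price = fetch_leader_price()
        publish_price(leader_price)   # prices move in lockstep with the leader
        time.sleep(update_interval_seconds)

# Toy usage with stubbed-in placeholders:
follow_leader(fetch_leader_price=lambda: 99.50,
              publish_price=lambda p: print(f"repricing to {p:.2f}"),
              update_interval_seconds=0.0, periods=3)
```

If the leader’s own algorithm sets prices above the competitive level, every follower running logic of this kind will sustain that level without any communication between the firms.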
Tacit collusion is more difficult to achieve in certain markets, such as highly dynamic markets in which competitors have different sizes, sell differentiated products and use heterogeneous business strategies. In such markets, companies may attempt to reveal an intention to collude and coordinate more complex cooperative strategies by signaling, for instance through unilateral price announcements in the expectation that competitors will follow suit. Signaling can be risky, because if a firm increases prices to indicate an intention to collude, the signaling firm will lose sales and profits if most competitors do not receive the signal or intentionally decide not to react. This risk is a deterrent to an attempt to collude through signaling in the first place.
Algorithms could reduce or even eliminate the risk of signaling by enabling companies to set very fast iterative actions that could not be exploited by consumers but could be read by rivals’ algorithms. The OECD note provides several examples of how this process could work. For instance, firms could program snapshot price changes during the middle of the night, which would not affect transactions but could be identified as signals by rivals’ algorithms. Alternatively, companies could use algorithms to publicly disclose detailed data that is used as a code to propose and negotiate price increases. These processes were observed in the US Airline Tariff Publishing Company case, but technologically advanced algorithms could make such informal negotiation processes faster and more efficient.
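As a purely illustrative sketch of the overnight signaling pattern, the code below shows how a rival’s monitoring algorithm might treat a short-lived price spike observed in a low-traffic window as a signal to follow an increase. The time window, markup threshold and prices are invented for the example.

```python
from datetime import datetime

SIGNAL_WINDOW_HOURS = range(2, 5)   # 02:00-04:59, assumed low-traffic hours
SIGNAL_MARKUP = 1.10                # 10% spike above baseline read as a "signal"

def is_signal(observed_price: float, baseline_price: float,
              observed_at: datetime) -> bool:
    """Interpret a transient overnight spike above baseline as a signal."""
    in_window = observed_at.hour in SIGNAL_WINDOW_HOURS
    spiked = observed_price >= baseline_price * SIGNAL_MARKUP
    return in_window and spiked

# A rival's monitoring algorithm might then respond to the spike:
if is_signal(112.0, 100.0, datetime(2017, 6, 22, 3, 15)):
    new_price = 112.0               # follow the signaled increase
```

Because the spike is posted and withdrawn while few customers are shopping, it costs the signaling firm almost no sales, removing the deterrent that normally makes signaling risky.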
Finally, algorithms could achieve collusive outcomes using machine and deep learning technologies, which could potentially enable a monopoly outcome without competitors explicitly programming algorithms to do so. The OECD suggests that where market conditions are prone to collusion, algorithms learning faster than humans would be able to reach a cooperative equilibrium through high-speed trial and error. Such collusive outcomes may be difficult to detect, since algorithms could potentially produce the effect and substance of tacit collusion without its form, what the OECD calls “virtual collusion.”
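The following is a deliberately simplified sketch of how two independent Q-learning pricers, each conditioning only on last period’s prices, could in principle settle on a supra-competitive price pair without being instructed to collude. The game, payoffs and learning parameters are invented for illustration; real self-learning pricers are far more complex, and convergence to the cooperative outcome depends on the parameters rather than being guaranteed.

```python
import random
from collections import defaultdict

LOW, HIGH = 0, 1                                    # two candidate prices
# Per-period profit from (my_price, rival_price); undercutting pays once,
# but mutual HIGH pricing pays more than mutual LOW pricing.
PROFIT = {(HIGH, HIGH): 10, (LOW, LOW): 5, (LOW, HIGH): 12, (HIGH, LOW): 2}

ALPHA, GAMMA, EPSILON, ROUNDS = 0.1, 0.95, 0.1, 200_000

def choose(q, state):
    """Epsilon-greedy action selection over the two candidate prices."""
    if random.random() < EPSILON:
        return random.choice([LOW, HIGH])
    return max([LOW, HIGH], key=lambda a: q[(state, a)])

q1 = defaultdict(float)       # Q-tables keyed by (state, action);
q2 = defaultdict(float)       # the state is last period's pair of prices
state = (LOW, LOW)

for _ in range(ROUNDS):
    a1, a2 = choose(q1, state), choose(q2, state)
    r1, r2 = PROFIT[(a1, a2)], PROFIT[(a2, a1)]
    next_state = (a1, a2)
    for q, a, r in ((q1, a1, r1), (q2, a2, r2)):    # standard Q-learning update
        best_next = max(q[(next_state, LOW)], q[(next_state, HIGH)])
        q[(state, a)] += ALPHA * (r + GAMMA * best_next - q[(state, a)])
    state = next_state

def greedy(q, state):
    return max([LOW, HIGH], key=lambda a: q[(state, a)])

# Inspect the learned policy after both firms priced HIGH last period; with
# some parameterizations the learners sustain the (HIGH, HIGH) outcome.
print(greedy(q1, (HIGH, HIGH)), greedy(q2, (HIGH, HIGH)))
```

Nothing in the reward structure tells either learner to coordinate; any cooperative outcome that emerges is a by-product of trial and error, which is what makes this category of conduct so difficult to fit within existing liability concepts.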
If companies use deep learning algorithms to automatically set prices and other decision variables, collusion would become even harder to prevent using traditional antitrust tools. Deep learning algorithms could reach a collusive outcome without firms even being aware of it, raising complex questions as to whether antitrust liability could ever be imposed.
The OECD concludes that algorithms may create incentives and mechanisms to collude that would not exist otherwise, not only in oligopolistic markets, where algorithms can contribute to making collusive outcomes more likely and stable over time, but also in non-oligopolistic markets where collusion is not considered a significant risk. As more processes become automated and more transactions digitalized, algorithms can be expected to raise increasing challenges for agencies.
The OECD distinguishes between situations in which algorithms amplify conduct already covered under the current legal framework and those in which algorithms create new risks related to behaviours not covered by the current antitrust rules. In the first case, the challenges for agencies are mainly understanding how the technology works and how the algorithm can facilitate or support the main antitrust infringement. In the second scenario, the challenges for agencies are more complex. The OECD focuses on these more complex challenges.
The OECD discusses two specific legal issues raised or exacerbated by the rise of algorithmic pricing - whether the notion of agreement needs to be broadened and whether antitrust liability can be imposed where pricing decisions are made by algorithms – and then discusses some tools available to antitrust authorities.
Although the term “agreement” is generally broadly defined, the notion may still provide little guidance on more subtle forms of communication, such as signaling. It is questionable under the law of many jurisdictions whether, and under which circumstances, signaling can amount to an agreement. In the absence of communication and explicit coordination, mere parallel conduct, such as simultaneous price increases by competitors, is insufficient to indicate co-ordination, since it can result from independent and rational behavior.
In light of the role of algorithms in reaching and enforcing common policies, some have proposed revising the concept of agreement to incorporate “meetings of minds” that are reached with the assistance of algorithms. For instance, could very fast iterative price changes converging on a common value through the use of signaling algorithms be treated as tantamount to an agreement? Similarly, where parallel algorithms are used, a firm could make an “offer” to collude by implementing an algorithm that imitates the market leader’s price in real time, while the leader could “accept” the offer by increasing the price in reaction to the competitor’s algorithm. Alternatively, a firm may make an “offer” to collude by publicly releasing a pricing algorithm, while competitors would “accept” the offer by using the same algorithm.
According to the OECD, a clearer definition of agreement could not only reduce uncertainty by helping businesses understand which practices are illegal, but also potentially address some of the concerns related to algorithmic collusion.
The OECD’s proposal to revisit and potentially broaden the notion of “agreement” subject to antitrust laws did not receive broad support from the individual authorities that submitted papers. The EU Commission and others noted that the EU law concept of “concerted practice” already allows some authorities to capture conduct falling short of a formal agreement. The Italian Competition Authority discussed the possible treatment of communications via algorithms as information exchanges evidencing an illegal concerted practice or as an infringement in its own right. The Italian Competition Authority also noted that its dual jurisdiction over antitrust and unfair competition cases gives it flexibility to address possible consumer harms without expanding the definition of agreement.
The other theoretical reform mooted by the OECD is a revision to the criteria for imposing antitrust liability. While it is clear that “companies can’t escape responsibility for collusion by hiding behind a computer program” (Vestager (2017)), as AI develops further, the links between the agent (the algorithm) and its principal (the human being) become weaker. The ability of algorithms to act autonomously calls into question the liability of the individuals or firms who benefit from the algorithm’s autonomous decisions. Defining a benchmark for illegality requires assessing whether any illegal action could have been anticipated or predetermined by the individuals who benefit from the algorithm, in light of the programmed instructions of the algorithm, available safeguards, the reward structure and the scope of the algorithm’s activities. Could liability be imposed jointly and severally on the person who designed the algorithm, on the individual who used it and on the person (or entity) who benefitted from the decision made by the algorithm?
The OECD notes that antitrust authorities are addressing the issues raised by algorithms mainly within three existing frameworks: market studies and investigations, merger control, and remedies in antitrust investigations. Market studies can lead to recommendations for regulatory interventions, the opening of investigations, and advocacy efforts and recommendations to the business community, for instance to adopt self-regulation in the form of codes of conduct, which companies would agree to comply with when designing and using pricing algorithms. In some jurisdictions, competition authorities can use market investigations to issue non-binding recommendations or even impose structural or behavioral remedies.
The individual submissions to the roundtable indicate that antitrust authorities are actively pursuing issues relating to algorithms. For example, the Italian note describes an ongoing study on big data launched jointly by the Italian Competition Authority, the Italian Data Protection Authority and the Italian Communications Authority. In 2016, the Russian FAS requested information from resellers of electronics and household appliances on software products that optimize price-setting, as well as from a developer of such software. Singapore’s note describes a 2015 study on e-commerce, including the use of algorithms and robo-sellers; Singapore is currently conducting another study on data and data analytics. The UK note refers to a number of market studies conducted by UK authorities, starting with a 2013 study on personalized pricing.
Algorithmic collusion issues may also lead authorities to revise their approach to merger control in relevant markets, for example by lowering their threshold of intervention and investigating risks of coordinated effects not only in 3-to-2 mergers but potentially also in 4-to-3 or even 5-to-4 mergers. Such an approach would allow agencies to assess the risk of future co-ordination, going beyond the traditional duopolies where tacit collusion is more easily sustainable, to include cases where the use of algorithms may facilitate collusion even in less concentrated industries. Agencies may also reconsider their approach to conglomerate mergers when tacit collusion can be facilitated by multimarket contacts.
Lastly, authorities could address tacit collusion risks through remedies, such as commitments limiting algorithmic practices that facilitate collusion, or introducing special compliance or monitoring programs. According to the OECD, a procedure like the “notice-and-take-down” process, under which online hosts post a notice and remove content in response to court orders, could be used to allow rapid action if anti-competitive behavior is detected as a result of the use of algorithms. Another possible remedy could be the introduction of auditing mechanisms for algorithms, which could guarantee that algorithms are programmed in a way that steers clear of competition concerns.
Finally, the OECD discusses possible regulatory responses to collusion and other concerns raised by algorithms. In this regard, a central question is whether some form of specific regulatory intervention is necessary in addition to existing laws on antitrust, privacy, intellectual property and fundamental human rights. The OECD identifies three possible market failures that could require regulatory intervention.
Commentators have proposed a variety of regulatory models to address these concerns. These range from self-regulation and co-regulation to direct state intervention. Along this range of options, many alternatives have been proposed, including information measures, principles of “search neutrality”, cybercrime regulations, data protection certification schemes, etc. Possible regulatory approaches include interventions to make algorithms more transparent and accountable for their effects and regulations to prevent machine learning algorithms from autonomously reaching tacit co-ordination, which could include price regulation, policies to make tacit collusion unstable, and rules on algorithm design.
The authorities submitting individual comments did not generally support the use of regulatory interventions to address algorithm issues. The Italian Competition Authority noted that such interventions would be “premature . . . without an in-depth understanding of this phenomenon” (para 26).
The OECD note does not discuss in detail how the concerns raised relate to specific anticompetitive practices, though the OECD focuses on collusion in pricing among competitors. While the individual notes submitted to the roundtable also focus on algorithms’ implications for price fixing, a number of other concerns arise.
Several authorities, including the EU Commission, the Russian FAS, and the UK CMA, focused on the potential use of algorithms to monitor and enforce resale price maintenance. The EU Commission noted the potential ripple effect of such resale price maintenance, if competitors’ algorithms set their prices taking into account prices that are already inflated due to resale price maintenance. The EU Commission also noted that algorithm-enabled price monitoring could be considered “rigorous” enforcement of resale price maintenance, leading to an increase of the “gravity” element of its fine calculation.
The UK CMA also noted that algorithms could facilitate abusive practices by dominant, vertically integrated platforms. It further noted the potential use of algorithms for “personalized pricing,” i.e. price discrimination, and the possibility that consumers could be misled.
Several authorities, including the EU Commission, the Italian Competition Authority, the Competition Commission of Singapore and the UK CMA, noted the potential use of algorithms for personalized pricing, involving the use of personal data, but none took a clear position that such practices would violate current antitrust rules. The UK CMA cautioned that concerns are more likely to arise in markets where there is no competition between firms or price discrimination is complex or opaque, leading consumers to lose trust in the market; these concerns, however, appear to fall more in the realm of consumer protection than antitrust.
As noted, the focus of antitrust debate over the implications of big data has recently shifted to the potential role of algorithms using big data in facilitating collusion. The OECD note provides the most authoritative and comprehensive analysis of these issues to date. The OECD has proposed a number of significant changes in antitrust enforcement, including revisiting and potentially expanding the concept of agreement subject to antitrust laws and expanding current enforcement and merger review practices to address potential anti-competitive conduct or effects in markets not currently considered to be at risk. The OECD also moots a number of regulatory initiatives that could potentially address concerns where enhanced enforcement may be insufficient.
The eight antitrust authorities that submitted papers to the roundtable share many of the concerns raised by the OECD. In general, however, these authorities did not endorse the OECD’s more far-reaching suggestions. Rather, they stressed the need for further study and for increasing their internal expertise. The most common concern about algorithms, from an enforcement perspective, related to resale price maintenance and, to a lesser extent, the role of algorithms in discrimination by vertically integrated platforms.