Concerns over competitors using AI pricing tools to fix prices have dominated antitrust discussions in the U.S. and EU. Recent cases show how algorithmic pricing might enable unlawful coordination.
But AI antitrust risks extend far beyond this current focus and are likely to expand into other concerted practices and into unilateral conduct.
In the U.S., no AI pricing case has made more headlines than U.S. v. RealPage, in which the Department of Justice (DOJ) alleged that multifamily landlords shared competitively sensitive, nonpublic data with RealPage’s platform, which then generated rent recommendations. To resolve the case, the DOJ recently submitted a consent decree under which RealPage commits to extraordinary conduct remedies, including prohibitions on using nonpublic information, on training its model with data less than a year old, on generating recommendations for a geographic market narrower than a state, and on limiting users’ price decreases. In a parallel class action, landlords have already paid over $141.8 million in settlements, with the litigation continuing against many defendants.
In Gibson v. Cendyn Group, plaintiffs challenged the use of Cendyn’s hotel revenue management software, but the court dismissed the case because the pricing tool largely relied on public data and did not obligate hotels to adopt its recommendations. The court found no actionable “agreement” because competitors neither shared confidential information nor uniformly followed the tool’s outputs.
The European Commission’s Horizontal Guidelines confirm that if competitors use the same pricing algorithm — for example, one that automatically sets prices to always match a rival’s price minus 5% — this will probably breach Article 101 of the Treaty on the Functioning of the European Union (TFEU), even if the companies never directly agree with each other. However, the Commission has not issued an enforcement decision finding that self-learning AI systems colluding on prices constitute a prohibited concerted practice.
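The dynamic the Guidelines describe can be made concrete with a minimal sketch. This is a purely illustrative toy model, not any actual pricing tool: two hypothetical firms each apply the same "match the rival's price minus 5%" rule, subject to an assumed cost floor.

```python
# Illustrative sketch only: two firms running the same undercutting rule.
# All names, prices, and the cost floor are hypothetical assumptions.

def undercut_rule(rival_price: float, floor: float) -> float:
    """Price at 95% of the rival's last price, but never below the cost floor."""
    return max(rival_price * 0.95, floor)

price_a, price_b = 100.0, 100.0
cost_floor = 60.0

for _ in range(50):  # each firm repeatedly reacts to the other's last price
    price_a = undercut_rule(price_b, cost_floor)
    price_b = undercut_rule(price_a, cost_floor)

print(price_a, price_b)
```

Both firms end up at an identical price without ever communicating: the alignment is produced entirely by the shared rule, which is why the Guidelines treat the common algorithm itself as the potential locus of the infringement.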
While the price-fixing issues in RealPage and Gibson have dominated the landscape, the rapid integration of AI into business strategies across industries is creating new and evolving antitrust risks.
AI Market or Customer Allocation
When competitors use AI to target select customers or geographic markets, the outcome may be tantamount to market or customer allocation, even without direct communications among competitors. AI programs could read each other’s signals through market movements and eventually maximize profitability under an implied agreement to allocate markets or customers. Reliance on the same third-party AI program that uses each company’s confidential information could increase this risk.
AI tools that predict demand or market density using sensitive competitive data pose a risk. If multiple competitors use the same system, they may each choose to avoid regions with high rival concentration. This behavior could resemble geographic market allocation.
Customer allocation is another risk. AI tools designed to optimize marketing campaigns may learn that head-to-head solicitation of the same customer segment reduces profits. Using confidential data on marketing success, a model could steer different firms toward distinct pools of customers after detecting that non-overlapping efforts are most effective. If broadly adopted, the result could be functionally identical to a non-solicitation agreement, reached through reliance on a shared algorithm.
About the Authors

Robert Klotz, a partner at Steptoe in Brussels, advises a wide range of clients in the field of EU and German antitrust and competition law. He previously served as an official in DG Competition of the European Commission.
AI Price Discrimination
AI tools may allow companies to set personalized prices for different consumer groups based on data about individual characteristics or behaviors. While such pricing can improve efficiency by tailoring offers to consumers, it raises legal risks. Under EU law, it could breach Article 102 of the TFEU or the Digital Markets Act if it lacks objective justification or transparency. In the U.S., similar practices may be viewed as an unfair method of competition.
AI-driven personalized pricing could offer higher prices to less price-sensitive users, with little or no opportunity for consumers to seek alternative offers. This may shift value from consumers to companies, redistributing benefits in the market. Furthermore, AI’s ability to segment consumers and capture their maximum willingness to pay could renew scrutiny of exploitative pricing practices in digital markets.
In FTC v. Amazon, the FTC alleged that Amazon deployed an algorithm designed to raise prices both on and off its platform by incrementally raising prices until competitors stopped following, and that this conduct violates Section 5 of the FTC Act as an unfair method of competition. The case may define how U.S. law treats algorithmic conduct that manipulates market outcomes without collusion.
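The "raise until rivals stop following" logic the complaint describes can be sketched in a few lines. This is a hypothetical reconstruction of the general technique, not the actual algorithm at issue; the function names, step size, and the rival's behavior are all illustrative assumptions.

```python
# Hypothetical sketch of a raise-and-probe pricing loop: raise the price
# in small steps, keep each increase only if the rival matches it, and
# stop once the rival holds back. Purely illustrative, not Amazon's code.

def probe_price(current: float, rival_follows, step: float = 1.0,
                max_steps: int = 10) -> float:
    price = current
    for _ in range(max_steps):
        candidate = price + step
        if rival_follows(candidate):
            price = candidate  # rival matched, so the higher price sticks
        else:
            break              # rival did not follow, abandon this step
    return price

# Example: a rival that matches any price up to 105
final = probe_price(100.0, lambda p: p <= 105.0)
print(final)
```

The probing loop settles exactly at the highest price the rival will still match, which is the market-wide price elevation the complaint alleges.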
AI Predatory Pricing
AI can quickly analyze market data, predict competitor reactions, and identify and attract the marginal customers most likely to switch providers, all while maintaining profitability on inframarginal customers who are less likely to switch. This use of AI may result in a new type of predatory pricing.
For instance, a dominant firm’s AI tool, instructed to maximize long-term profit by finding each customer’s optimal price, may start charging below-cost prices to the customers competitors would most likely target, driving competitors from the market. Firms using this strategy could recoup their losses once competitors have exited the market. Such predatory pricing may be unlawful in the U.S. and EU.
Under traditional U.S. antitrust doctrine, predatory pricing requires a firm to set prices below an appropriate measure of its own cost and to have a dangerous probability of recouping its losses once it drives competitors out of the market. Here, the AI tool could accomplish the same result by only targeting certain customers for predatory prices, even if the average price across all customers is above-cost. While this would be a novel application of the predatory pricing laws, the concepts may fit into traditional rubrics.
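The novelty can be shown with simple arithmetic. In this hypothetical (all segment sizes, prices, and the unit cost are invented for illustration), a firm prices one contested segment below cost while its average price across all customers remains comfortably above cost, which is why a firm-wide average-price test could miss the conduct.

```python
# Hypothetical figures: below-cost pricing confined to the contested
# segment, with the firm-wide average price still above unit cost.

unit_cost = 50.0

# (segment, number of customers, price) -- illustrative numbers only
segments = [
    ("loyal / inframarginal", 900, 70.0),  # priced above cost
    ("contested / marginal", 100, 40.0),   # priced below cost
]

revenue = sum(n * p for _, n, p in segments)
units = sum(n for _, n, _ in segments)
average_price = revenue / units

print(f"average price: {average_price:.2f} vs. unit cost: {unit_cost:.2f}")
```

Here the contested segment is sold at 40 against a cost of 50, yet the average price works out to 67, so only a segment-level analysis reveals the below-cost pricing.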
AI-enabled price discrimination may also raise issues under the Robinson-Patman Act, which in certain circumstances prohibits a seller from charging competing resellers different prices for the same product. Antitrust review is important before employing any AI tool that would set discriminatory prices or other terms among customers.
AI Data Sets Leading to Monopolization
AI requires large data sets: the larger and more diverse an AI’s training data set, the more powerful the model. Companies with abundant data sets can create structural barriers for competitors who lack comparable resources, eventually leading to dominance or monopolization of certain AI markets.
In digital markets, network effects and data control can significantly reinforce a firm’s dominance. Network effects refer to a phenomenon in which the value of a product or service grows as more people or entities use it. This creates a self-reinforcing cycle where a large user base attracts even more users, raising barriers to entry for competitors.
Network effects played a crucial role in reinforcing Google’s alleged dominance in cases in the U.S. and the EU. In Google Search (Shopping), the European Commission found that the more users relied on Google’s search engine, the more data it collected, allowing it to improve the quality and relevance of results, attracting even more users in a self-reinforcing cycle. In Google Android, network effects operated across multiple sides of the mobile ecosystem: more users encouraged more app developers, which in turn attracted more users and device manufacturers.
Another example is the Amazon Marketplace case, in which the European Commission found that Amazon used non-public data from independent sellers to inform its retail decisions and self-preference its offers and logistics services. The Commission concluded that this conduct strengthened Amazon’s data advantage and distorted competition by disadvantaging rival sellers. This case illustrates how control over extensive, proprietary datasets can entrench dominance and create exclusionary conditions in digital markets.
Similarly, control over extensive data allows dominant firms to refine algorithms and personalize services, making it difficult for rivals to compete and potentially leading to monopoly power, even before consumer harm is observable.
As more companies adopt AI tools, compliance teams should stay abreast of how the technology evolves and how its expanding role in operations creates new antitrust risks, which in turn may require adapting legal tools to address novel forms of market abuse. Compliance teams should evaluate each new AI tool for antitrust risk.


