The year 2024 marked a pivotal moment as antitrust agencies accelerated their efforts in generative AI (see our database). AI partnerships have drawn significant attention in this space, with the CMA, DOJ, European Commission, and other agencies launching investigations and issuing their first decisions. These agencies have been addressing AI partnerships on a case-by-case basis—a pragmatic approach for now. But legal certainty is critical in such a rapidly evolving ecosystem. This is why Sandy Pentland and I developed a framework for evaluating these partnerships. We present it in our now-published paper Competition between AI foundation models: dynamics and policy recommendations (Industrial and Corporate Change, 2024).
Our central argument hinges on the concept of “increasing returns,” which we argue is key to assessing the competitiveness of AI partnerships. As we show, the survival of AI foundation models depends on their ability to harness increasing returns—where “a marginal investment generates output above the average” (see this paper and also this one on the subject). Without leveraging these returns, a foundation model is highly likely to be overtaken by competitors in the medium term. But importantly, not all increasing returns are alike. Some drive the quality of foundation models over time, while others do not. Here are four examples to illustrate this distinction:
- Ecosystem effects: Foundation models thrive on what we call “ecosystem effects.” More users attract more applications, which, in turn, attract more developers who want to benefit from compatibilities and access the user base. This dynamic continually enhances user experience and utility.
- Access to unique data: A larger user base typically generates higher revenues and, in any case, strengthens the bargaining power needed to access proprietary databases and thus improve the model’s quality (e.g., The New York Times has a greater incentive to grant access to OpenAI, so as to be linked in ChatGPT, than to a newly created startup with no users). This dynamic also continues to increase the quality of the foundation model.
- Reputation effects: Widespread adoption makes it easier to secure partnerships (e.g., GPT-4’s reputation facilitated collaborations with Apple). However, reputation-based returns typically do not improve the quality of the foundation model. GPT-4 is indeed not (significantly) better simply because Apple has integrated it.
- Deployment capacity: Large technology companies can deploy models across vast ecosystems, reaching billions of users in an instant. Yet this capacity adds little to the model’s inherent quality. Here again, Llama-3 is not directly improved because Meta deployed it on Facebook. The learning effects that follow such integration are too indirect and limited (see the many papers discussing the limited scaling effects of data volume, such as this one). They typically yield decreasing returns and thus do little to improve the quality of foundation models.
From these examples, it follows that deals enhancing quality-driven returns (e.g., the target can benefit from a strong “ecosystem effect” or access unique data thanks to the newly formed partnership with a large tech company) are likely to yield pro-competitive effects. Conversely, agreements providing only marginal or non-quality-enhancing returns (e.g., boosting reputation or expanding distribution channels without meaningful improvements) are more likely to have anti-competitive effects. This distinction highlights the importance of increasing returns in antitrust analysis. It also underscores why increasing returns should not all be treated equally. For further guidance on the use of increasing returns to design competition policy and evaluate the competitive or anti-competitive effects of practices, refer to the full article.
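To make the operative test concrete, here is a minimal numeric sketch of the definition used above: increasing returns hold when each marginal investment generates output above the average of what came before. The function name and figures are hypothetical, chosen purely for illustration, not drawn from the paper:

```python
# Illustrative sketch (hypothetical numbers): increasing returns hold when
# each marginal output exceeds the average of the outputs that preceded it.

def has_increasing_returns(outputs):
    """Return True if every marginal output beats the running average."""
    total = 0.0
    for i, marginal in enumerate(outputs):
        if i > 0 and marginal <= total / i:
            return False
        total += marginal
    return True

# Quality-driven returns (e.g., ecosystem effects): each new cohort of users
# adds more to model quality than the average of those before it.
print(has_increasing_returns([1.0, 1.2, 1.5, 2.0]))   # True

# Non-quality returns (e.g., pure distribution): marginal gains shrink,
# falling below the average over time.
print(has_increasing_returns([2.0, 1.5, 1.0, 0.5]))   # False
```

On this reading, quality-driven returns (ecosystem effects, unique data) keep the marginal series above its average, while reputation and distribution effects tend to fall below it over time.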
Thibault Schrepel
Citation: Thibault Schrepel, A Framework for Assessing the Competitiveness of AI Partnerships, Network Law Review, Winter 2024.
References
- Thibault Schrepel & Alex ‘Sandy’ Pentland, Competition between AI foundation models: dynamics and policy recommendations, Industrial and Corporate Change, 2024; dtae042.
- Michael J. Mauboussin & Dan Callahan, Increasing Returns: Identifying Forms of Increasing Returns and What Drives Them (Morgan Stanley, 2024).
- Thibault Schrepel, The Evolution of Economies, Technologies, and Other Institutions: Exploring W. Brian Arthur’s Insights, Journal of Institutional Economics (Volume 20, 2024).
- Thibault Schrepel, Abdullah Yerebakan & Nikoletta Baladima, A Database of Antitrust Initiatives Targeting Generative AI, Network Law Review, Winter 2023.
- Sara Hooker, On the Limitations of Compute Thresholds as a Governance Strategy (2024).