This short article serves as an introduction to the working paper by Thibault Schrepel and Jason Potts entitled “Measuring the Openness of AI Foundation Models: Competition and Policy Implications.”
***
Antitrust agencies are showing a strong interest in AI foundation models and Generative AI (“GenAI”) applications. They want to ensure that the AI ecosystem remains competitive. Given that AI foundation models could become key infrastructure for tomorrow’s economy, their interest in this area can only be welcomed.
To date, research has focused on the competitive impact of economic variables such as talent and the cost of compute (i.e., of training and running AI foundation models). Partnerships between AI companies also raise antitrust concerns, and agencies are currently investigating these issues. But one key factor driving competition in GenAI is missing from the (antitrust) discussion: AI foundation model licenses.
AI foundation model licenses – consisting of terms of service and related documentation – form the constitutional layer of innovation (and thus competition) in generative AI. These licenses play such an important role for two main reasons. First, they dictate the flow of knowledge and information (code, weights, training data, etc.) that enters the innovation commons. In other words, AI licenses influence the development of new foundation models, i.e., horizontal competition. Second, AI licenses act as a bottleneck by regulating who can access foundation models and under what conditions (e.g., API access, the creation of derivative works). This means that AI licenses also affect vertical competition and ecosystem dynamics.
In general, the more open an AI foundation model’s license, the more it stimulates innovation outside the originating organization. Moreover, because open models can be audited and forked, they constrain opportunistic behavior (e.g., leveraging practices) and thereby address most antitrust concerns by design. Given the importance of the openness of AI foundation models to competitive dynamics, and the lack of research on the subject, Jason Potts and I decided to join forces and devote a full paper to it.
We find that current methodologies for measuring the openness of foundation models often fail to account for legal, economic, and social dynamics: they focus solely on technical aspects, overlooking important elements that make a license truly open. In response, we present a new, comprehensive methodology for measuring the openness of AI foundation models. We then apply it to the most common AI foundation models (including OpenAI’s GPT-4, Meta’s Llama 3, Google’s Gemini, Mistral’s Mixtral 8x7B, and Midjourney’s V6), which we rank on an openness spectrum. Finally, we derive concrete policy implications from our findings, some for policymakers and some for regulators (including antitrust agencies).