The Network Law Review is pleased to present a symposium entitled “Dynamics of Generative AI,” where lawyers, economists, computer scientists, and social scientists gather their knowledge around a central question: what will define the future of AI ecosystems? To bring all this expertise together, a conference co-hosted by the Weizenbaum Institute and the Amsterdam Law & Technology Institute will be held on March 22, 2024. Be sure to register in order to receive the recording.
This contribution is signed by Axel Voss, Member of the European Parliament and Rapporteur on the AI Act. The entire symposium is edited by Thibault Schrepel (Vrije Universiteit Amsterdam) and Volker Stocker (Weizenbaum Institute).
***
1. Generative AI: Regulatory challenges in a new technological era
The age of digitalization is currently reaching new heights with the enormous progress made in the field of generative AI. Since the release of the popular chatbot ChatGPT at the end of 2022, the topic has seemed omnipresent, sparking discussions in various areas of life. Particularly in light of the astonishing capabilities of generative AI, jurisdictions around the world are currently developing suitable legal frameworks for this set of technologies through various approaches.
As regards the legislative process in Europe, we have thankfully reached an important milestone with the agreement on the planned AI Act.1EU Commission, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM (2021) 206 final. However, despite the progress made in the legislative negotiations between the European Institutions, there is still no satisfying answer to the serious challenges, also raised by European business leaders, concerning the EU’s competitiveness in digital matters, particularly when it comes to regulating generative AI models such as ChatGPT. The legislation is expected to be adopted in April 2024, followed by a transitional period (6 to 36 months, depending on the articles concerned) before full enforcement.
In designing this law, on the one hand, we must ensure that we create a human-centred, trustworthy approach to AI based on our fundamental rights and European values, so that we can address all those risks that could jeopardise our freedom and security, such as deepfakes, copyright issues and other malicious uses of this technology. On the other hand, we must also guarantee that the digital transformation takes place in Europe. Technological change – especially as achievable through the increased use of generative AI – can bring huge opportunities for the benefit of society, with the potential to significantly improve the competitiveness of EU industry, increase productivity and accelerate innovation. Achieving productivity, innovation and competitiveness at the same time is a major challenge. Unfortunately, however, there is a tendency within the current social and political debate to focus predominantly on regulating the risks of AI instead of enabling its opportunities.
To avoid being left behind in the global competition, we must now urgently concentrate on developing future-proof global standards in line with our European ideas. Particularly in the digital context, legislators should see it as their responsibility to design the law in a way that enables innovation and to tailor their legal instruments to the specificities of digitalization.2Wolfgang Hoffmann-Riem, Die digitale Transformation als rechtliche Herausforderung, Juristische Schulung (JuS) 2023, 617, 619.
2. Measures to promote innovation and competition
To achieve the goal of promoting competition and innovation in the field of generative AI, we could (and should) take various effective measures, each forming an individual element of a coherent strategy. Some of the most important of these elements are presented below.
2.1. Regulatory sandboxes
Firstly, as part of a regulatory strategy promoting generative AI, legislators should introduce – aside from “real-world testing” – so-called “regulatory sandboxes”, as they could have a promising impact on innovation and competition.3This instrument was already included in Art. 53 of the EU Commission’s proposal for the AI Act as a measure to promote innovation, see COM (2021) 206 final, 69. At the time of writing this article, it is not yet clear whether and in what form this instrument will find its way into the final version of the regulation (especially with regard to the uncertainty as to whether generative AI will be covered by the regulation). The idea originates from the IT sector and has so far mainly been used in the FinTech sector. As a method of “Experimental Lawmaking”,4In more detail on this term, see Michiel A. Heldeweg, Experimental legislation concerning technological & governance innovation – an analytical approach (2015), The Theory and Practice of Legislation, 3:2, 169-193, DOI: 10.1080/20508840.2015.1083242. its main purpose is to test new products and services in a specially designed environment.5See Ranchordas, Sofia, Experimental lawmaking in the EU: Regulatory Sandboxes (October 22, 2021), EU Law Live [Weekend Edition, 22 October 2021], University of Groningen Faculty of Law Research Paper No. 12/2021, p. 3, available at SSRN https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3963810.
An exemplary provision could encourage national competent authorities to set up “regulatory sandboxes”, which establish a controlled environment in which innovative technologies are tested for a limited time on the basis of a testing plan under the supervision of the competent authorities.6COM (2021) 206 final, 15. The difficulty with such an instrument certainly lies in the fact that, on the one hand, the hoped-for positive effects on competition and innovation can only be achieved by effectively relaxing the law, while, on the other hand, certain minimum regulatory requirements must always be met. Against this background, the question of which liability rules should apply within a sandbox is particularly crucial: the whole sandbox principle would be compromised if companies were reluctant to disclose their algorithms and trade secrets without a legal waiver.7OECD (2023), “Regulatory sandboxes in artificial intelligence”, OECD Digital Economy Papers, No. 356, p. 23, OECD Publishing, Paris, https://doi.org/10.1787/8f80a0e6-en. Legislators would therefore certainly have to fine-tune the details of such an instrument and develop objective and clear rules. However, the introduction of “regulatory sandboxes” would be a welcome approach to create legal scope for ideas, experiments and economic exploitation opportunities, so that developers can examine their consequences and correct undesirable developments if necessary.8Wolfgang Hoffmann-Riem, Die digitale Transformation als rechtliche Herausforderung, Juristische Schulung (JuS) 2023, 617, 619.
2.2. Standardization measures
Furthermore, another valuable measure for creating good conditions for competition and innovation in the generative AI sector is certainly the standardization of regulatory requirements.
This ties in with the previous point: in connection with the use of sandboxes, an effective scenario arises if the results of AI applications tested in the sandboxes can be assessed according to uniform standards to determine whether or not they can be offered on the market.9OECD (2023), “Regulatory sandboxes in artificial intelligence”, OECD Digital Economy Papers, No. 356, p. 24, OECD Publishing, Paris, https://doi.org/10.1787/8f80a0e6-en.
Aside from that, standards can also help developers of AI models in general to fulfil legal requirements before they enter the market with their products. If legislators ensure a clear and standardized interpretation of their legislation – if possible even at a global level – the digital sector can benefit enormously. International standardization organisations can make an indispensable contribution by developing such standards, but the Commission should also play its part by providing timely guidance on the application of EU legislation.
In any case, the EU should, as a general approach, use international standards as a basis in order to avoid the fragmentation of technological standards.
2.3. Data access
Another very important element in fostering competition and innovation, particularly with regard to the development and application of generative AI, is improving access to data. Data is the basis of AI models, and the potential of these models depends crucially on how much data is available, especially for training. It is therefore essential to make as much data as possible available to these AI models.
Since this topic usually touches on questions of data protection, and data protection law therefore often restricts the flow of data, the use of synthetic data10See about this European Commission, Joint Research Centre, Hradec, J., Craglia, M., Di Leo, M. et al., Multipurpose synthetic population for policy applications, Publications Office of the European Union, 2022, https://data.europa.eu/doi/10.2760/50072 – meaning artificial data that is generated from original data – should be considered as part of the solution to this problem. In addition, the possibilities of pseudonymization and anonymization should be utilized in situations relevant to data protection law.
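To make the distinction behind these terms more concrete, the following minimal sketch illustrates, purely by way of example, how pseudonymization might work in practice: direct identifiers in a record are replaced with keyed hashes, so that records remain linkable while re-identification requires a separately stored secret, whereas anonymization would remove or generalize the identifiers altogether. The record, field names and key used here are hypothetical and are not drawn from the AI Act or any source cited in this article.

```python
# Purely illustrative sketch of pseudonymization (hypothetical data and key,
# not part of the AI Act or any cited proposal). Direct identifiers are
# replaced with keyed hashes; anonymization would instead drop or generalize them.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-stored-key"  # hypothetical key, kept separately

def pseudonymize(record: dict, identifier_fields: tuple = ("name", "email")) -> dict:
    """Return a copy of the record with direct identifiers replaced by keyed hashes."""
    out = dict(record)
    for field in identifier_fields:
        if field in out:
            digest = hmac.new(SECRET_KEY, str(out[field]).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]  # truncated pseudonym, stable across records
    return out

# Example: a hypothetical record before it enters a data-sharing pipeline.
sample = {"name": "Jane Doe", "email": "jane@example.org", "prompt": "Translate this text"}
print(pseudonymize(sample))
```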
Furthermore, the use of data trustees is an interesting option for expanding the flow of data in compliance with data protection regulations. The idea is to empower data intermediaries to act in the interests of the data holders and/or the data users as part of a fiduciary duty.11Heiko Richter, Looking at the Data Governance Act and Beyond: How to Better Integrate Data Intermediaries in the Market Order for Data Sharing, Gewerblicher Rechtsschutz und Urheberrecht, Internationaler Teil (GRUR Int.) 2023, 458, 460. Moreover, it is important to further improve the interoperability of data and to establish common standards that facilitate the flow of data and improve data sharing across countries and sectors.
2.4. Reduction of bureaucracy
Moreover, future regulations should no longer impose such a heavy bureaucratic burden, especially on SMEs and start-ups.12As already envisaged in the Commission’s proposal for the AI Act, see COM (2021) 206 final, 15. Due to rapid technological development, digital legislation should always be flexible, technology-neutral and, above all, one thing: easy and quick to implement.
To achieve this, new regulations should ideally not result in additional bureaucracy. Where bureaucratic effects are unavoidable, state administrations should offer companies support in dealing with the given requirements, for instance by setting up help desks. In addition, legislators should strive for reforms that reduce existing bureaucracy, for example with regard to the high bureaucratic requirements that the General Data Protection Regulation imposes on companies.
A related issue is the uneven distribution of compliance burdens. The Act’s use-case approach, which imposes different burdens depending on where AI is deployed, has been criticized for lacking neutrality. The suggestion made by scholars is to combine this approach with a more technical one. Specifically, only AI systems that are used in high-risk sectors and that are non-deterministic should face the highest compliance requirements, while AI systems in high-risk sectors but with highly predictable outputs should face lower compliance requirements due to their lower actual risk. This proposal aims to balance the regulation’s impact and ensure fairness in compliance obligations.
2.5. Harmonization of supervision
Finally, all countries should increasingly work together on the important issue of regulating generative AI. If, for example, a company could use an authorization granted by the authority of one country in other countries as well, this would reduce bureaucracy and speed up processes, thereby promoting innovation and fuelling competition.
Ideally, this would involve the EU setting up a single supervisory authority, staffed in particular with AI experts and responsible for overseeing compliance with regulatory requirements in all Member States.
Removing as many market barriers as possible and promoting international cooperation – at the European level and possibly even beyond – is another important key to technological progress.
In addition, a fully harmonised and therefore centralised interpretation of the existing rules can avoid legal uncertainty and divergences within the single market.
3. Conclusion
Promoting innovation and competition in the field of generative AI is currently one of the most important legislative tasks, as the development of this new technology represents an extremely future-oriented and profitable market. Europe should be a key player in this market to enable progress and prosperity for its people. Regarding innovation, immense resources for advancement exist, yet they remain largely untapped. We suggest that the AI Act could have been better calibrated to foster innovation. This is particularly crucial for our survival in the digital age and for further enhancing competitiveness. Unfortunately, bureaucracy still inhibits progress, and there is an undue focus on threats rather than opportunities in our European Union. Furthermore, the journey towards the AI Act has revealed divergent perspectives among EU Member States, with, for instance, France standing out as a more supportive advocate while Germany exhibits a more reluctant stance, grappling with the potential consequences of stringent regulations. These nuanced dynamics reveal a desire for a more assertive policy, reflecting a sentiment that the current framework may lack the necessary teeth to adequately address the challenges posed by advanced AI technologies. This divergence underscores the complexities inherent in crafting a unified approach within the EU. While a cautious approach is understandable, critics argue that a less aggressive policy may disadvantage the EU market in the global AI landscape. Some stakeholders contend that the intricacies of the AI framework, albeit comprehensive, might pose challenges for market participants, potentially hindering innovation and competitiveness within the region.
The type and manner of our chosen approach to the regulation of AI will ultimately have a significant impact on Europe’s attractiveness as a location from the perspective of developers of generative AI. At the same time, we urgently need rules to ensure the safety of systems and applications developed in the field of generative AI. It is certainly a difficult challenge to fulfil both requirements at the same time. Nevertheless, several instruments are available that could serve to realize both goals simultaneously. Besides that, any regulation should always take a risk-based approach that differentiates between a minority of “high-risk” and a majority of “low-risk” AI use cases. While the prevailing discourse predominantly focuses on regulating AI risks, it is crucial to strike a balance by acknowledging the diverse opportunities AI presents. This approach recognizes that not all AI applications pose an equal level of risk. Tailoring regulatory measures to address the specific risks associated with high-risk AI applications is key. Simultaneously, fostering an environment conducive to the responsible use of “low-risk” AI technologies is crucial for positive contributions to innovation and societal advancement. The main goal is to mitigate potential adversities while facilitating the development and deployment of AI applications with a positive impact.
To bring everything together, the future regulations that we create should above all offer room for innovation, increase the flow of data and aim for uniformity, clarity and the highest possible level of harmonisation. The outcome of the recent trialogue reflects progress in crafting a regulatory framework for AI in Europe. As demonstrated, effective measures often require a comprehensive approach, necessitating the integration of various regulatory instruments. The development of balanced regulations therefore depends on adopting a holistic perspective and creating an intelligent overall concept that interlinks the different instruments.
Citation: Axel Voss, What New Legal Rules Could Foster Competition and Innovation Dynamics In The Generative AI Ecosystem?, Dynamics of Generative AI (ed. Thibault Schrepel & Volker Stocker), Network Law Review, Winter 2023.