Dear readers, the Network Law Review is delighted to present you with this month’s guest article by Drew Fudenberg, Professor of Economics at MIT, and David K. Levine, Professor of Economics at the European University Institute.
***
While driving a car, you hear a loud bang, the car no longer accelerates properly, and the engine makes loud noises. You are not an automobile mechanic and do not know how engines work, but you are aware of some basic facts, such as that breaking parts of the engine with a hammer will probably make things worse. Should you get out of the car and try to fix it, or simply struggle through? This is a prototypical problem of intervention in a complex system.
Alternatively, these problems are sometimes posed as moral dilemmas: should someone not trained in medicine give aid to an injured person? Should an ignorant bystander flip a switch to cause a trolley to kill a single person rather than crash into a mass of people?
Similar issues arise in economic contexts. There was a huge debate about how much policymakers should intervene in response to the pandemic. Should rich countries intervene to help a developing country such as Sri Lanka, which faced an economic crisis in 2021? How should a business respond when faced with unexpected systems failures, as happened to Southwest Airlines between December 19 and December 28, 2022, when it canceled thousands of flights and left tens of thousands of people stranded for days? How should it respond to a competitor with a compelling new product, as Facebook faced with TikTok or Google with ChatGPT?
In the context of providing aid to others, it is common to argue that intervention should depend upon expertise. In a health emergency on an airplane, a doctor is expected to intervene, but an ordinary passenger is not. By contrast, if an elderly person drops a bag of groceries, even an ordinary passer-by might be expected to help. Our paper “Sins of Omission and Commission in Complex Systems” (D. Fudenberg and D. K. Levine, International Journal of Game Theory, forthcoming) examines this problem from a decision-theoretic perspective.
Our focus is on a decision maker with well-defined objectives: an airplane passenger who cares about the health of their fellow passengers, a policy-making body that cares about the welfare of developing nations, or a CEO who cares about the financial success of their firm. We argue that the best response to a shock when information is limited depends on how complex the system is and how bad the circumstances are: if the system is very complex and circumstances are not too dire, intervention is more likely to make things worse than better, and the optimal intervention is negligible. In dire circumstances, this logic is reversed, even in a complex system.
To understand this result and the mathematical setting, we imagine that the system is governed by a set of controls. A simple system has only a few controls, while a complex one has many. For example, the driver of a car controls the gas pedal, brake, and steering wheel. In a sharp right turn, the steering wheel must be turned to the right, but this must be coordinated with the gas pedal and brake: too little gas and we fall short of making the turn, but too much gas and we shoot off the road. The difference between Max Verstappen and us is that he knows exactly how to coordinate the gas pedal with the steering wheel. If there are many controls and we only have a general idea of how to adjust them, a large adjustment is likely to cause us to overshoot and make a mess, while a smaller adjustment is safer and may at least result in a mild improvement. Hence, in the face of ignorance, interventions in a complex and poorly understood system should be small.
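To make the logic concrete, here is a minimal illustrative sketch in the spirit of the model; the quadratic loss and the noise structure are simplifying assumptions made for this column, not the formal setup of our paper. Suppose the system has $n$ controls and a shock displaces them from their ideal settings by a vector $c \in \mathbb{R}^n$, so that the loss from a residual displacement $z$ is $\lVert z \rVert^2$ and the severity of the situation is $\lVert c \rVert^2$. The decision maker observes only a noisy estimate $\hat{c} = c + \varepsilon$, where the errors $\varepsilon_i$ are independent with mean zero and variance $\sigma^2$, one per control. If she corrects by a fraction $\alpha$ of her estimate, the expected loss is

\[
\mathbb{E}\,\lVert c - \alpha \hat{c} \rVert^2 = (1-\alpha)^2 \lVert c \rVert^2 + \alpha^2 n \sigma^2,
\]

which is minimized at

\[
\alpha^* = \frac{\lVert c \rVert^2}{\lVert c \rVert^2 + n \sigma^2}.
\]

More controls (larger $n$) or more ignorance (larger $\sigma^2$) push the optimal intervention toward zero, while a direr shock (larger $\lVert c \rVert^2$) pushes it toward a full correction. For instance, with $\lVert c \rVert^2 = 1$ and $\sigma^2 = 1$, a system with $n = 2$ controls warrants $\alpha^* = 1/3$, while one with $n = 20$ warrants only $\alpha^* \approx 0.05$; if the shock is ten times direr ($\lVert c \rVert^2 = 10$), even the complex system again warrants $\alpha^* = 1/3$.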
There is a lot of common sense in this mathematical result, and it highlights the crucial elements for making a decision. Is the system complex? Is the course of action obvious? Is the situation dire? Bad decisions in practice often arise from underestimating the complexity of the system. For example, one of us has been involved in the debate over pharmaceutical patents (Michele Boldrin and David K. Levine, Against Intellectual Monopoly, Cambridge University Press, 2008). We argue that, in general, such patents are a bad idea, but we have never recommended simply eliminating pharmaceutical patents. The reason for this is that the pharmaceutical industry is subject to a variety of complex government regulations, including requirements for testing the safety and efficacy of drugs as well as market exclusivity beyond that granted by patents, which makes it a very complex system indeed. For this reason, we worry that a simplistic plan such as abolishing patents would be likely to have catastrophic effects. Rather, the interactions of the entire system must be understood so that different elements of the policy are changed in a coordinated way, and changes must be small: for example, phasing out patents over a long period of time, rather than abruptly, so that the effects can be understood and the policy adjusted.
While it seems obvious that major interventions in previously well-functioning systems are a bad idea, such interventions are nevertheless undertaken. What happened in Sri Lanka in 2021 illustrates the point (see, for example, Nordhaus and Shah, “In Sri Lanka, Organic Farming Went Catastrophically Wrong,” Foreign Policy, 2022). Before that, Sri Lanka had a thriving and economically crucial agricultural sector that provided food for Sri Lankans and money from exports. However, Sri Lankan agriculture relied heavily on environmentally unfriendly chemical fertilizers. Several environmental advocates got the ear of President Gotabaya Rajapaksa and convinced him that this was a far greater problem than previously thought. In April 2021, he banned the use of chemical fertilizers. Despite assurances from these “experts” that there would be no problem using organic fertilizers, this proved not to be the case, and food production dropped catastrophically. Here, a large intervention in a complex and reasonably well-functioning system by a politician with little understanding of how it worked became an astounding blunder. Notice that the policy had a legitimate goal: chemical fertilizers have bad environmental effects. A small adjustment, such as phasing out chemical fertilizers over a long period of time or mildly reducing imports, would have moved the system in the right direction without the danger of overshooting the target and causing an economic collapse.
Even in a system that is functioning poorly, intervention may not be a good idea if the system is complex and poorly understood. Firms that do not respond to competition from complicated new products are sometimes criticized for doing nothing. However, if a firm lacks the technical expertise to build complicated new products of its own and is limited to adjustments in a single dimension, such as price, our results show that it may indeed be best to have little or no response. The responses of WordPerfect to the Windows 3.0 shock in 1990 and of Nokia and BlackBerry to the iPhone shock in 2007 all fit into this category. In all three cases, firms did little to respond to the shock, and subsequent events showed that all indeed lacked the technical expertise to build competitive products. Ultimately those companies failed, so they may well have been doomed anyway. If so, they made the most of the time left to them.
The response of Google to ChatGPT provides a counterpoint. Google’s core business is selling advertising connected to its search engine. ChatGPT, with its ability to provide detailed responses to questions rather than just lists of sometimes irrelevant web pages, is a potential alternative and superior search engine; indeed, Microsoft is working on incorporating it into its own Bing search engine. Google has been developing its own artificially intelligent agent with capabilities similar to ChatGPT but had not tried to incorporate it into its core product. This makes sense: do not intervene in a complex product that is working well. With the phenomenal success of ChatGPT, however, the situation changed. Google saw this as an existential crisis for its core search business and called for “all hands on deck” to respond with its own product, “Bard.” On February 8, 2023, Google gave a demonstration of this new technology (the event and the consequent stock drop were widely reported in the news media). In response to a question, the new product claimed that the James Webb telescope discovered a planet that had actually been known for over a decade before the telescope was even launched. As a result of Google’s apparent inability to integrate the new technology with its core business, Google’s stock immediately plunged by 9%; the hasty intervention seems to have been a mistake. Of course, unlike in our model, Google will have the chance to make multiple adjustments in response to the introduction of ChatGPT, not just one, so it is far too early to tell how this saga will play out: will Google gradually fade away, or will it successfully remake itself as IBM did with the advent of the PC?
Our results are also relevant to industry regulators. Industries themselves are complex systems. The first and most obvious observation is that it is easier to regulate an established industry where market conditions and the nature of competition are well understood. For example, the EU has had relative success in regulating the cell phone industry by establishing standards, such as GSM, and abolishing roaming costs within the Union. Newer and less well-understood industries are harder to regulate, as evidenced by the EU’s lesser success in regulating the internet. Interventions, in general, have been small, such as the requirement that users authorize “cookies.” Although there have been calls for much more dramatic regulation of giants such as Facebook, Google, and Amazon (see, for example, Eric Posner and Glen Weyl, Radical Markets: Uprooting Capitalism and Democracy for a Just Society, Princeton University Press, 2018), until recently the EU competition authorities have quite rightly been conservative in tinkering with the system, which after all functions reasonably well. To give an indication of the degree of ignorance of those proposing radical overhaul, none of those critics seem to have anticipated that technological advances in AI would threaten existing ways of doing business and reduce the monopoly power of the giants. It remains to be seen whether the EU will engage in larger interventions, such as charging content providers, and if so, whether this would be an overreach.
We want to conclude by emphasizing something that is not covered by our model: sometimes the best course is to gather more information and then make a large change. One example is the failure of Nasdaq’s computer systems during the Facebook IPO. Rather than studying the system to understand the reason for the failure, programmers were simply instructed to make a large intervention in a single direction, namely to remove a validation check that had caused the system to shut down. The consequences were catastrophic: there was a cascading series of failures, traders blamed Nasdaq for hundreds of millions of dollars of losses, and the mistake exposed the exchange to litigation, fines, and reputational costs. Quite possibly it would have been better to shut down the system and figure out what the problem was.
Drew Fudenberg
David K. Levine
***
Citation: Drew Fudenberg and David K. Levine, Adjusting to Change in Complex Systems, Network Law Review, Winter 2023.