The impact of Covid-19 on the AI race

On February 19th, the EU published its White Paper on Artificial Intelligence, which is now open to public consultation. The EU thus joined China and the US, which had published their national AI strategies in 2017 and 2019, respectively. Each of them presented a different vision of how to address the challenges posed by a technology whose relevance will surely transcend its economic impact (estimated at $13 trillion of economic output by 2030), as it is bound to have a disruptive effect on societal organization, labour markets and power structures.

It is no surprise that the competition to develop and deploy AI has been framed as a “race” among superpowers. Each of them is trying to gain the upper hand in terms of technological development, while also promoting a different regulatory model for AI. Actions such as President Trump’s warnings to the EU about “possibly hampering innovation” through its regulations, or China’s resort to educational programmes to foster STEM talent, are indicative of an intensifying rivalry. This dispute is not bad in itself (despite some criticism of the very idea of framing it as a race), as a combination of competition and cooperation in certain areas could lead to a thriving environment for the global development and implementation of AI. Could the Covid-19 pandemic, however, diminish incentives for cooperation and increase the risk of a confrontational, zero-sum race?

In theory, the fight against the virus should be an opportunity for a more cooperative approach to AI. Firstly, from an efficiency point of view, because coordinating global efforts to find a vaccine makes the process faster. Secondly, because, as Gordon Brown has argued, the only way to stop the pandemic and prevent future outbreaks is to have a vaccine for the many, not the few. And finally, because it would be in states’ own self-interest: finding a vaccine would benefit all nations, as it would allow the restrictions on social life and the economy imposed by lockdown measures to be eased. For instance, 40% of China’s exports go to the 12 countries hit hardest by the virus, whose demand has fallen dramatically as a result. In light of this, many are turning to AI in the hope of hastening the discovery of a cure for the invisible enemy.

However, as noted by Brookings and MIT, AI will not save us from the current crisis. It might prove useful in searching through thousands of academic papers or in modelling proteins to find a cure, but it will not provide a panacea. Instead, the value of AI lies in its potential for addressing future pandemics. AI is fuelled by data, and while reports from different regions and countries are still scarce, confusing and inconsistent, we will have much better databases on the Covid-19 crisis in a couple of years. With this input, AI will be helpful in running early diagnoses and predicting future outbreaks, not only of this virus but of others yet to come. This was not done after the SARS or Ebola outbreaks of previous years, but given the economic impact of Covid-19 across the world, it seems plausible that the prevention of future pandemics will become a relevant area of research.

While looking for a vaccine is in the interest of all actors, the situation after the development of a cure could be very different. To be sure, if a vaccine is found soon and global cooperation proves effective in containing the pandemic, the days of “science diplomacy” could return. However, there are strong reasons to believe that the competitiveness of the race will intensify, given the increased focus on anticipating future pandemics and the economic benefits that countries would obtain from being protected against them while others are not. Apps that help trace and detect cases (despite the problems they may pose), models able to detect outbreaks before they spread too far, and “immunity passports” will all be treasured goods. Whichever country develops them first will have both a public health and an economic advantage, as it will be able to reap the benefits of being the main exporter of the technology.

This new set of incentives bodes ill for the international environment, where the blame game over the Covid-19 crisis has already exacerbated previous tensions. With the US retreating from its previous role as global watchdog, and Trump denying the magnitude of the crisis, China launched its own diplomatic offensive (or “poem diplomacy”) by striking deals to supply ventilators, masks and other medical equipment to countries such as Italy and Spain. However, China’s own mishandling of the outbreak in its early stages, the poor quality of much of the material it sent, and doubts about the country’s real figures for infections and deaths have also damaged its image on the international stage. As the numbers of infections and deaths rise, governments everywhere are entangled in a narrative battle to shift the blame onto other actors.

This context of increased tension will likely lead to policies inspired by “AI nationalism”. Countries may embrace programmes aimed at protecting national AI start-ups, rejecting foreign investment in strategic sectors, and promoting national technological champions. Such nationalism would, in turn, accentuate the most confrontational elements of the race, with each bloc avoiding cooperation in order to pursue economic and technological dominance over the others. The winner of that kind of race would become a new hegemon, on whose goodwill other countries would depend not only economically but also in terms of health security. This drift away from more coordinated approaches would leave us more vulnerable to future outbreaks.

Moreover, expanded surveillance, combined with fear of the spread of Covid-19 or any other pandemic, might reduce or completely erase ethical controls on the use of AI. This, along with a hegemon not aligned with liberal democratic values, or an increasingly multipolar world in which those values are ever more contested, could prove lethal for liberal democracies as we understand them today, as well as for the “human-centered” approach to AI. In the end, despite the clear benefits that a cooperative approach would have for global health, the incentives pulling towards more competition may prove too difficult to resist for the countries at the top of the rankings in AI development and implementation.

That is why it is more important than ever to develop a framework of global governance for AI. There are obviously many differences between the EU, the US and the Chinese models, but there are also several issues on which common principles could be found. Examples include the use of drones, especially for military purposes, the stability of cyberspace, and the allocation of responsibility for the actions of an AI system in order to ensure accountability. When it comes to preventing future pandemics and preserving global health and the economy, collaboration in the development of apps, immunity passports, predictive models and vaccines could mean the difference between new prolonged lockdowns and greater protection against future outbreaks.

AI will not, by itself, save us from Covid-19, but it may help us fight future diseases. It can also be a transformative force for good, enhancing the quality of life of people all around the world. However, we cannot let nationalism and blame games divert us from a set of global principles that would help regulate this technology and ensure that the efforts to develop it combine competition with cooperation. Renouncing these principles and entering a more conflictual world, where AI is used to secure economic and technological dominance over other nations, would only lay the ground for a less safe future, in every sense.
