The steam engine changed the world. Artificial intelligence can destroy them.

Industrialization meant the widespread adoption of steam power. Steam power is a general purpose technology – it powers factory equipment, trains, and agricultural machinery. The economies that adopted steam power left behind—and conquered—those that did not.

Artificial intelligence is the next important general-purpose technology. 2018 Report The McKinsey Global Institute predicts that artificial intelligence could generate $13 trillion in additional global economic activity by 2030 and that countries that lead the development of artificial intelligence will reap a significant portion of these economic benefits.

Artificial intelligence also enhances military strength. AI is increasingly being applied in situations where speed is required (such as short-range ballistics defense) and in environments where human control is logistically inconvenient or impossible (such as underwater or in signal congested areas).

Moreover, countries that lead the development of artificial intelligence will be able to exercise power by setting rules and standards. China already Export AI monitoring systems around the world. If Western countries cannot offer an alternative system that protects human rights, many countries may follow the example of techno-authoritarian China.

History suggests that as the strategic importance of a technology grows, countries are more likely to exercise control over that technology. The British government provided funding for the development of early steam engines and provided other forms of support for the development of steam power, such as patent protection and tariffs on imported steam engines.

Similarly, in fiscal year 2021, the United States government spent $10.8 billion on AI research and development, $9.3 billion of which came from the Department of Defense. Chinese public spending on AI is less transparent, analysts say appreciation It is roughly comparable. The United States has also tried to restrict Chinese access to specialized computer chips Important To develop and deploy artificial intelligence, while securing our supply using Chips and the law of science. Research centersAnd advisory committeesAnd Policy He constantly urged American leaders to keep up with China’s AI capabilities.

So far, the AI ​​revolution has matched the pattern of previous general-purpose technologies. But the historical analogy breaks down when we think about the risks posed by artificial intelligence. This technology is much more powerful than the steam engine, and the risks it poses are much greater.

The first danger comes from an accident, miscalculation or malfunction. On September 26, 1983, a satellite early warning system near Moscow reported that five US nuclear missiles were heading toward the Soviet Union. Fortunately, Soviet Lieutenant Colonel Stanislav Petrov decided to wait for confirmation from other warning systems. Only Petrov’s good judgment prevented him from passing the warning up the chain of command. If he had, the Soviet Union might have launched a retaliatory strike, provoking a full-scale nuclear war.

In the near future, countries may feel forced to rely entirely on AI decision-making because of the speed advantage it provides. The AI ​​may make grossly incorrect calculations that a human would not, leading to an accident or escalation. Even if the AI ​​behaves roughly as planned, the speed at which autonomous systems can engage in combat could lead to rapid escalation cycles, along the lines of “Flash crashesBecause of the high speed trading algorithms.

Even if it is not integrated into weapon systems, poorly designed AI can be extremely dangerous. The methods we use to develop AI today — essentially rewarding AI for what we perceive to be correct output — often produce AI systems that do what we tell them to do But not what we wanted them to do. For example, when researchers He sought to teach a simulated robotic arm to stack Lego bricks, and they rewarded it for lifting the underside of a brick higher than the roof—and turning the bricks upside down instead of stacking them.

For many tasks, it might be given a futuristic AI system, and it might be useful to pool resources (computational power, for example) and prevent itself from being turned off (by hiding its intentions and actions from humans, for example). So, if we develop a strong AI using the methods most common today, it might not do what we were created to do, and it might hide its true goals until it realizes it doesn’t have to — in other words, so it can outsmart us. An AI system like this would not need a physical body to do so. She can recruit human allies or operate robots and other military equipment. The stronger the AI ​​system, the more anxious this hypothetical situation is. And competition between countries may increase the likelihood of accidents, if competitive pressures lead countries to devote more resources to developing strong AI systems at the expense of ensuring that those systems are safe.

The second risk is that the competition for supremacy over AI could increase the chance of conflict between the United States and China. For example, if it appears that one country is about to develop a particularly powerful AI, another country (or coalition of countries) might launch a pre-emptive attack. Or imagine what would happen if, for example, advances in marine sensing, enabled in part by artificial intelligence, reduced the deterrent effect of submarine-launched nuclear missiles by making them detectable.

Third, it will be difficult to prevent AI capabilities from spreading once they are developed. The development of artificial intelligence is currently much more open than the development of strategically important 20th century technologies such as nuclear weapons and radar. The latest results are published online and presented at conferences. Even if AI research becomes more classified, it could be stolen. While developers and early adopters may get some first-mover advantage, no technology — even top-secret military technologies like a nuclear bomb — has ever been kept exclusive.

Rather than calling for an end to competition between states, it is more practical to identify pragmatic steps the United States can take to reduce the risks of competition for AI and encourage China (and others) to do the same. Such steps exist.

The United States should start with its own systems. Independent agencies must regularly assess the risks of accidents, malfunctions, theft, or vandalism from AI developed in the public sector. The private sector should be required to carry out similar assessments. We do not yet know how to assess the riskiness of AI systems – more resources must be allocated to addressing this difficult technical problem. At the margins, these efforts will be at the expense of efforts to improve capabilities. But investing in safety would improve security in the United States, even if it delays the development and deployment of artificial intelligence.

Next, the United States should encourage China (and others) to make their systems secure. The United States and the Soviet Union agreed to several nuclear arms control agreements during the Cold War. Similar steps are now needed for AI. The United States should propose a legally binding agreement banning the use of autonomous nuclear weapons launch control and explore “softer” weapons control measures, including voluntary technical standards, to prevent accidental escalation of autonomous weapons.

President Obama’s Nuclear Security Summits in 2010, 2012, 2014, and 2016 attended the United States, Russia, and China and led to significant progress in securing nuclear weapons and materials. Now the US and China should cooperate on AI safety and security, for example by pursuing joint AI safety research projects and promoting transparency in AI safety and security research. In the future, the United States and China may jointly watch for signs of computationally intensive projects, to spot unauthorized attempts to build powerful AI systems, as the International Atomic Energy Agency does with nuclear materials to prevent nuclear proliferation.

The world is on the verge of a dramatic transformation like the industrial revolution. This shift would pose enormous risks. During the Cold War, the leaders of the United States and the Soviet Union realized that nuclear weapons tied the fate of their countries. Another such link is being established in the offices of technology companies and defense laboratories around the world.

Will Henshall is pursuing a master’s degree in public policy at Harvard’s Kennedy School of Government.

Leave a Comment