
Friday, 10 May 2019

Nuclear Deterrence: From the A-Bomb to the AI Bomb

Autonomous nuclear submarines, algorithms that detect an atomic threat, robots capable of guiding high-speed missiles: artificial intelligence could shift the nuclear balance of power, according to a report published in May.

On June 3, 1980, at 2:26 am, Zbigniew Brzezinski, National Security Advisor to US President Jimmy Carter, received an alarming call: 220 Soviet nuclear missiles were heading for the United States. A few minutes later, a new phone call from the military command center informed him that, in reality, 2,200 bombs were threatening American soil. Finally, just as the adviser was preparing to warn the leader of the "free world" of an imminent Soviet nuclear strike, military officials realized that the automated warning system had malfunctioned. The Cold War nearly turned very hot because of a cheap computer component that did not work properly.

At the time, the term "artificial intelligence" (AI) was not yet in fashion. But Americans and Soviets were already beginning to introduce algorithms into control rooms to make their atomic deterrence more effective. Several incidents, however, such as that of June 3, 1980, showed the limits of this automation.

Advances in the field of AI

Almost forty years have passed, and the subject of AI seems to have disappeared from the debate over the nuclear threat, even as algorithms become increasingly ubiquitous at every level of society. A report from the prestigious Stockholm International Peace Research Institute (SIPRI), published Monday, May 6, points out, however, that the issue remains relevant.

On the one hand, the threat of a nuclear arms race is far from gone. Donald Trump has promised to modernize the American arsenal, North Korea does not seem willing to abandon its nuclear program, and tensions between India and Pakistan, two nuclear powers, regularly make headlines. On the other hand, technological breakthroughs show that AI holds "enormous potential in the nuclear field, as in the area of conventional and cyber weapons. Machine learning [AI that improves by feeding on data, Ed.] is excellent for data analysis," explains Vincent Boulanin, the SIPRI researcher who coordinated the study, contacted by France 24. That quality may prove essential, for example, for intelligence gathering or for detecting cyberattacks.

Have France, the United States, Russia, or China already incorporated the latest algorithms into their nuclear arsenals? "In truth, we know very little about the adoption of AI in nuclear systems at the moment," admits Vincent Boulanin. Russia is the only power to have mentioned it recently: President Vladimir Putin announced in March 2018 the construction of a fully automated nuclear submarine called Poseidon. Moscow also officially resurrected and updated, in 2011, a device dating back to the Cold War: the Perimeter system, an artificial intelligence capable, if certain conditions are met, of initiating a nuclear response after detecting the launch of an atomic strike by another state. But in the absence of concrete evidence, these announcements have left experts doubtful.

Gaining precious minutes in a crisis

This skepticism stems, in part, from the fact that "in the nuclear field, the adoption of new technologies is traditionally slow, because each novelty implies the possibility of new vulnerabilities," summarizes Vincent Boulanin. Nuclear program managers prefer to work on largely outdated computers rather than use state-of-the-art technologies that risk being hacked.

But for him, it is only a matter of time. The promises of AI are too enticing for the nuclear powers to ignore once the technology matures. Its main advantage is that an algorithm processes information faster than humans. "That is a fundamental variable in a nuclear crisis. It can give political leaders more time to make a decision," explains Vincent Boulanin. A few extra seconds or minutes to decide on a response to an attack can save the lives of millions of people.

Artificial intelligence can also make guidance systems "more accurate and flexible," according to the report's authors. "This can be particularly important for hypersonic systems that a human cannot maneuver," says the SIPRI researcher. Several countries are working on prototypes of hypersonic aircraft and missiles capable of flying at more than five times the speed of sound. At that speed, no human can intervene in a missile's trajectory, whereas an AI can still correct its course if necessary.

The dark side of AI in the nuclear field

But artificial intelligence also has a dark side. Automation means that humans delegate tasks to machines, which, in the nuclear field, can have serious "moral and ethical" implications, says Page Stoutland, vice-president of the American NGO Nuclear Threat Initiative, which collaborated on the SIPRI report. He warns against the temptation to deploy submarines or drones capable of launching atomic warheads, because "respect for human dignity implies, in particular, that a machine should not take a decision that could endanger human life," he writes in the report. According to Vincent Boulanin, "states should make a clear statement" on this subject to dispel the specter of a robotic hand entitled to press the red button.

Algorithms are also human creations that can reproduce the prejudices of their designers. In the United States, police use of AI to predict the risk of recidivism has shown its limits, and several studies have found that such systems reproduced racial bias. In the nuclear field, it is therefore "impossible to rule out a risk of escalation, or at least of instability, because of an algorithm that misinterpreted a situation [because the models were poorly designed]," points out Jean-Marc Rickli, a researcher at the Geneva Centre for Security Policy, in the report.

Artificial intelligence can also threaten the delicate balance between nuclear powers. "A state that is not sure of its nuclear deterrent will be more tempted to automate its strike force, which also increases the risk of its accidental use," fears Michael Horowitz, a political scientist and defense specialist at the University of Pennsylvania who contributed to the SIPRI report. Thus, the United States, confident in its nuclear dominance, will be more cautious in adopting AI than a smaller nuclear power like Pakistan.

Artificial intelligence is, therefore, a double-edged sword in the nuclear field. It can help make the world safer. But "it takes responsible adoption, which means taking the time to identify the risks associated with the use of AI and to seek solutions to counter them in advance," concludes Vincent Boulanin. After all, the financial industry used the same arguments, speed and reliability, to push algorithms into trading rooms as quickly as possible. As a result, high-frequency trading has caused several incidents that proved very costly for investors. And in the nuclear field, the stakes are far higher than money.
