AI Increases The Risk Of Nuclear Annihilation

Authored by John Mac Ghlionn via The Epoch Times,

OpenAI, the company responsible for ChatGPT, recently announced the creation of a new team with a very specific task: to stop AI models from posing “catastrophic risks” to humanity.

Preparedness, the aptly titled team, will be overseen by Aleksander Madry, a machine-learning expert and Massachusetts Institute of Technology-affiliated researcher. Mr. Madry and his team will focus on various threats, most notably those of the "chemical, biological, radiological and nuclear" variety. These threats might seem far-fetched, but they really shouldn't.

As the United Nations reported earlier this year, the risk of countries turning to nuclear weapons is at its highest point since the Cold War. This report was published before the horrific events that occurred in Israel on Oct. 7. A close ally of Vladimir Putin’s, Nikolai Patrushev, recently suggested that the “destructive” policies of “the United States and its allies were increasing the risk that nuclear, chemical or biological weapons would be used,” according to Reuters.

Merge AI with such weapons, particularly nuclear weapons, cautions Zachary Kallenborn, a research affiliate with the Unconventional Weapons and Technology Division of the National Consortium for the Study of Terrorism and Responses to Terrorism (START), and you have a recipe for unmitigated disaster.

Mr. Kallenborn has sounded the alarm, repeatedly and unapologetically, on the unholy alliance between AI and nuclear weapons. Not one to mince words, the researcher warned, “If artificial intelligences controlled nuclear weapons, all of us could be dead.”

He isn’t exaggerating. Exactly 40 years ago, as Mr. Kallenborn, a policy fellow at the Schar School of Policy and Government, described, Stanislav Petrov, a Soviet Air Defense Forces lieutenant colonel, was busy monitoring his country’s nuclear warning systems. All of a sudden, according to Mr. Kallenborn, “the computer concluded with the highest confidence that the United States had launched a nuclear war.” Mr. Petrov, however, was skeptical, largely because he didn’t trust the current detection system. Moreover, the radar system lacked corroborative evidence.

Thankfully, Mr. Petrov concluded that the message was a false positive and opted against taking action. Spoiler alert: The computer was completely wrong, and the Russian was completely right.

“But,” noted Mr. Kallenborn, a national security consultant, “if Petrov had been a machine, programmed to respond automatically when confidence was sufficiently high, that error would have started a nuclear war.”

Furthermore, he suggested, there’s absolutely “no guarantee” that certain countries “won’t put AI in charge of nuclear launches,” because international law “doesn’t specify that there should always be a ‘Petrov’ guarding the button.”

“That’s something that should change, soon,” Mr. Kallenborn said.

He told me that AI is already reshaping the future of warfare.

Artificial intelligence, according to Mr. Kallenborn, “can help militaries quickly and more effectively process vast amounts of data generated by the battlefield; make the defense industrial base more effective and efficient at producing weapons at scale, and may be able to improve weapons targeting and decision-making.”

Consider China, arguably the biggest threat to the United States, and its AI-powered military applications. According to a report out of Georgetown University, in the not-so-distant future, Beijing may use AI not just to assist during wartime but to actually oversee all acts of warfare.

This should concern all readers.

Mr. Kallenborn fears that if "the launch of nuclear weapons is delegated to an autonomous system," the weapons "could be launched in error, leading to an accidental nuclear war."

“Adding AI into nuclear command and control,” he said, “may also lead to misleading or bad information.”

He’s right. AI depends on data, and sometimes data are wildly inaccurate.

Although there isn’t one particular country that keeps Mr. Kallenborn awake at night, he’s worried by “the possibility of Russian President Vladimir Putin using small nuclear weapons in the Ukraine conflict.” Even limited nuclear usage “would be quite bad over the long-term” because “the nuclear taboo” would be removed, thus “encouraging other states to be more cavalier with nuclear weapons usage.”

“Nuclear weapons,” according to Mr. Kallenborn, are the “biggest threat to humanity.”

“They are the only weapon in existence that can cause enough harm to truly cause human extinction,” he said.

As mentioned earlier, throwing AI into the nuclear mix appears to increase the risk of mass extinction. The warnings of Mr. Kallenborn, a well-respected researcher who has dedicated years to studying the evolution of nuclear warfare, carry a great deal of credibility.

Tyler Durden
Fri, 12/15/2023 – 03:30

