The Future Of AI Is War… And Human Extinction As Collateral Damage

Authored by Michael T Klare via TomDispatch.com,

A world in which machines governed by artificial intelligence (AI) systematically replace human beings in most business, industrial, and professional functions is horrifying to imagine. After all, as prominent computer scientists have been warning us, AI-governed systems are prone to critical errors and inexplicable “hallucinations,” resulting in potentially catastrophic outcomes.

But there’s an even more dangerous scenario imaginable from the proliferation of super-intelligent machines: the possibility that those nonhuman entities could end up fighting one another, obliterating all human life in the process.

The notion that super-intelligent computers might run amok and slaughter humans has, of course, long been a staple of popular culture. In the prophetic 1983 film “WarGames,” a supercomputer known as WOPR (for War Operation Plan Response and, not surprisingly, pronounced “whopper”) nearly provokes a catastrophic nuclear war between the United States and the Soviet Union before being disabled by a teenage hacker (played by Matthew Broderick). The “Terminator” movie franchise, beginning with the original 1984 film, similarly envisioned a self-aware supercomputer called “Skynet” that, like WOPR, was designed to control U.S. nuclear weapons but chooses instead to wipe out humanity, viewing us as a threat to its existence.

Though once confined to the realm of science fiction, the concept of supercomputers killing humans has now become a distinct possibility in the very real world of the near future. In addition to developing a wide variety of “autonomous,” or robotic, combat devices, the major military powers are also rushing to create automated battlefield decision-making systems, or what might be called “robot generals.” In wars in the not-too-distant future, such AI-powered systems could be deployed to deliver combat orders to American soldiers, dictating where, when, and how they kill enemy troops or take fire from their opponents. In some scenarios, robot decision-makers could even end up exercising control over America’s atomic weapons, potentially allowing them to ignite a nuclear war resulting in humanity’s demise.

Now, take a breath for a moment. The installation of an AI-powered command-and-control (C2) system like this may seem a distant possibility. Nevertheless, the U.S. Department of Defense is working hard to develop the required hardware and software in a systematic, increasingly rapid fashion. In its budget submission for 2023, for example, the Air Force requested $231 million to develop the Advanced Battle Management System (ABMS), a complex network of sensors and AI-enabled computers designed to collect and interpret data on enemy operations and provide pilots and ground forces with a menu of optimal attack options. As the technology advances, the system will be capable of sending “fire” instructions directly to “shooters,” largely bypassing human control.

“A machine-to-machine data exchange tool that provides options for deterrence, or for on-ramp [a military show-of-force] or early engagement,” was how Will Roper, assistant secretary of the Air Force for acquisition, technology, and logistics, described the ABMS system in a 2020 interview. Suggesting that “we do need to change the name” as the system evolves, Roper added, “I think Skynet is out, as much as I would love doing that as a sci-fi thing. I just don’t think we can go there.”

And while he can’t go there, that’s just where the rest of us may, indeed, be going.

Mind you, that’s only the start. In fact, the Air Force’s ABMS is intended to constitute the nucleus of a larger constellation of sensors and computers that will connect all U.S. combat forces, the Joint All-Domain Command-and-Control System (JADC2, pronounced “Jad-C-two”). “JADC2 intends to enable commanders to make better decisions by collecting data from numerous sensors, processing the data using artificial intelligence algorithms to identify targets, then recommending the optimal weapon… to engage the target,” the Congressional Research Service reported in 2022.
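To make that sensor-to-shooter loop a little more concrete, here is a minimal sketch, written in Python purely for illustration, of the kind of pipeline the Congressional Research Service describes: sensor data is fused, an algorithm flags high-confidence targets, and a weapon recommendation is generated. Every name, data structure, and threshold in it is a made-up assumption, not anything drawn from ABMS, JADC2, or real military software.

```python
# A minimal, hypothetical sketch of the "sensor -> AI -> shooter" pipeline the
# CRS description outlines. All class names, fields, and thresholds are invented
# for illustration and are not drawn from ABMS, JADC2, or any real system.
from dataclasses import dataclass

@dataclass
class SensorReport:
    source: str               # e.g., "radar", "satellite", "ground sensor"
    location: tuple           # (latitude, longitude) of the detection
    hostile_confidence: float # classifier's confidence that the contact is hostile

def identify_targets(reports, threshold=0.9):
    """Fuse incoming sensor reports and keep only high-confidence detections."""
    return [r for r in reports if r.hostile_confidence >= threshold]

def recommend_engagement(target):
    """Produce a notional weapon recommendation for a commander to review.

    In a fully automated design, this same output could instead be routed
    straight to a shooter with no human approval step, which is the scenario
    the article warns about.
    """
    return {"target": target.location,
            "weapon": "ground-based artillery",
            "status": "awaiting human approval"}

if __name__ == "__main__":
    reports = [SensorReport("radar", (33.10, 131.20), 0.95),
               SensorReport("drone", (33.45, 131.05), 0.55)]
    for target in identify_targets(reports):
        print(recommend_engagement(target))
```

In this toy version the final dictionary is only advice; the policy question the article raises is whether that "awaiting human approval" step survives once speed becomes the priority.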

AI and the Nuclear Trigger

Initially, JADC2 will be designed to coordinate combat operations among “conventional” or non-nuclear American forces. Eventually, however, it is expected to link up with the Pentagon’s nuclear command-control-and-communications systems (NC3), potentially giving computers significant control over the use of the American nuclear arsenal. “JADC2 and NC3 are intertwined,” General John E. Hyten, vice chairman of the Joint Chiefs of Staff, indicated in a 2020 interview. As a result, he added in typical Pentagonese, “NC3 has to inform JADC2 and JADC2 has to inform NC3.”

It doesn’t require great imagination to picture a time in the not-too-distant future when a crisis of some sort — say a U.S.-China military clash in the South China Sea or near Taiwan — prompts ever more intense fighting between opposing air and naval forces. Imagine then the JADC2 ordering the intense bombardment of enemy bases and command systems in China itself, triggering reciprocal attacks on U.S. facilities and a lightning decision by JADC2 to retaliate with tactical nuclear weapons, igniting a long-feared nuclear holocaust.

The possibility that nightmare scenarios of this sort could result in the accidental or unintended onset of nuclear war has long troubled analysts in the arms control community. But the growing automation of military C2 systems has generated anxiety not just among them but among senior national security officials as well.

As early as 2019, when I questioned Lieutenant General Jack Shanahan, then director of the Pentagon’s Joint Artificial Intelligence Center, about such a risky possibility, he responded, “You will find no stronger proponent of integration of AI capabilities writ large into the Department of Defense, but there is one area where I pause, and it has to do with nuclear command and control.” This “is the ultimate human decision that needs to be made” and so “we have to be very careful.” Given the technology’s “immaturity,” he added, we need “a lot of time to test and evaluate [before applying AI to NC3].”

In the years since, despite such warnings, the Pentagon has been racing ahead with the development of automated C2 systems. In its budget submission for 2024, the Department of Defense requested $1.4 billion for the JADC2 in order “to transform warfighting capability by delivering information advantage at the speed of relevance across all domains and partners.” Uh-oh! And then, it requested another $1.8 billion for other kinds of military-related AI research.

Pentagon officials acknowledge that it will be some time before robot generals will be commanding vast numbers of U.S. troops (and autonomous weapons) in battle, but they have already launched several projects intended to test and perfect just such linkages. One example is the Army’s Project Convergence, involving a series of field exercises designed to validate ABMS and JADC2 component systems. In a test held in August 2020 at the Yuma Proving Ground in Arizona, for example, the Army used a variety of air- and ground-based sensors to track simulated enemy forces and then process that data using AI-enabled computers at Joint Base Lewis-McChord in Washington state. Those computers, in turn, issued fire instructions to ground-based artillery at Yuma. “This entire sequence was supposedly accomplished within 20 seconds,” the Congressional Research Service later reported.

Less is known about the Navy’s AI equivalent, “Project Overmatch,” as many aspects of its programming have been kept secret. According to Admiral Michael Gilday, chief of naval operations, Overmatch is intended “to enable a Navy that swarms the sea, delivering synchronized lethal and nonlethal effects from near-and-far, every axis, and every domain.” Little else has been revealed about the project.

“Flash Wars” and Human Extinction

Despite all the secrecy surrounding these projects, you can think of ABMS, JADC2, Convergence, and Overmatch as building blocks for a future Skynet-like mega-network of super-computers designed to command all U.S. forces, including its nuclear ones, in armed combat. The more the Pentagon moves in that direction, the closer we’ll come to a time when AI possesses life-or-death power over all American soldiers along with opposing forces and any civilians caught in the crossfire.

Such a prospect should be ample cause for concern. To start with, consider the risk of errors and miscalculations by the algorithms at the heart of such systems. As top computer scientists have warned us, those algorithms are capable of remarkably inexplicable mistakes and, to use the AI term of the moment, “hallucinations” — that is, seemingly reasonable results that are entirely illusory. Under the circumstances, it’s not hard to imagine such computers “hallucinating” an imminent enemy attack and launching a war that might otherwise have been avoided.

And that’s not the worst of the dangers to consider. After all, there’s the obvious likelihood that America’s adversaries will similarly equip their forces with robot generals. In other words, future wars are likely to be fought by one set of AI systems against another, both linked to nuclear weaponry, with entirely unpredictable — but potentially catastrophic — results.

Not much is known (from public sources at least) about Russian and Chinese efforts to automate their military command-and-control systems, but both countries are thought to be developing networks comparable to the Pentagon’s JADC2. As early as 2014, in fact, Russia inaugurated a National Defense Control Center (NDCC) in Moscow, a centralized command post for assessing global threats and initiating whatever military action is deemed necessary, whether of a non-nuclear or nuclear nature. Like JADC2, the NDCC is designed to collect information on enemy moves from multiple sources and provide senior officers with guidance on possible responses.

China is said to be pursuing an even more elaborate, if similar, enterprise under the rubric of “Multi-Domain Precision Warfare” (MDPW). According to the Pentagon’s 2022 report on Chinese military developments, its military, the People’s Liberation Army, is being trained and equipped to use AI-enabled sensors and computer networks to “rapidly identify key vulnerabilities in the U.S. operational system and then combine joint forces across domains to launch precision strikes against those vulnerabilities.”

Picture, then, a future war between the U.S. and Russia or China (or both) in which the JADC2 commands all U.S. forces, while Russia’s NDCC and China’s MDPW command those countries’ forces. Consider, as well, that all three systems are likely to experience errors and hallucinations. How safe will humans be when robot generals decide that it’s time to “win” the war by nuking their enemies?

If this strikes you as an outlandish scenario, think again, at least according to the leadership of the National Security Commission on Artificial Intelligence, a congressionally mandated enterprise that was chaired by Eric Schmidt, former head of Google, and Robert Work, former deputy secretary of defense. “While the Commission believes that properly designed, tested, and utilized AI-enabled and autonomous weapon systems will bring substantial military and even humanitarian benefit, the unchecked global use of such systems potentially risks unintended conflict escalation and crisis instability,” it affirmed in its Final Report. Such dangers could arise, it stated, “because of challenging and untested complexities of interaction between AI-enabled and autonomous weapon systems on the battlefield” — when, that is, AI fights AI.

Though this may seem an extreme scenario, it’s entirely possible that opposing AI systems could trigger a catastrophic “flash war” — the military equivalent of a “flash crash” on Wall Street, when huge transactions by super-sophisticated trading algorithms spark panic selling before human operators can restore order. In the infamous “Flash Crash” of May 6, 2010, computer-driven trading precipitated a 10% fall in the stock market’s value. According to Paul Scharre of the Center for a New American Security, who first studied the phenomenon, “the military equivalent of such crises” on Wall Street would arise when the automated command systems of opposing forces “become trapped in a cascade of escalating engagements.” In such a situation, he noted, “autonomous weapons could lead to accidental death and destruction at catastrophic scales in an instant.”
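To see why such a cascade could outrun any human attempt to restore order, consider a toy simulation, written here in Python purely for illustration, of two automated command systems each programmed to answer the other’s last strike with a somewhat larger one. The escalation rule, the numbers, and the optional cutoff are all invented assumptions, not a description of any actual system.

```python
# A toy feedback-loop simulation of the "flash war" dynamic described above:
# two automated command systems, each programmed to answer the other's last
# strike with a somewhat larger one. The response rule, the numbers, and the
# "tripwire" cutoff are invented assumptions, not a model of any real system.

def escalation_cascade(initial_strike=1.0, response_factor=1.5,
                       steps_before_humans_react=10, tripwire=None):
    force_a, force_b = initial_strike, 0.0
    history = []
    for step in range(steps_before_humans_react):
        # Each side's automated system responds in kind, plus a margin.
        force_b = force_a * response_factor
        force_a = force_b * response_factor
        history.append((step, round(force_a, 1), round(force_b, 1)))
        # An automated circuit breaker of the kind the commission recommends.
        if tripwire is not None and max(force_a, force_b) >= tripwire:
            return history, "halted by escalation tripwire"
    return history, "humans finally intervene"

# Without a circuit breaker, the exchange grows geometrically at machine speed.
print(escalation_cascade()[1])               # "humans finally intervene" (after 10 rounds)
# With a cap built into the loop, the cascade is cut off within a few iterations.
print(escalation_cascade(tripwire=50.0)[1])  # "halted by escalation tripwire"
```

Even under this crude rule, the simulated exchange spirals well beyond the opening strike long before any plausible human reaction time, which is the logic behind the automated “tripwires” discussed below.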

At present, there are virtually no measures in place to prevent a future catastrophe of this sort or even talks among the major powers to devise such measures. Yet, as the National Security Commission on Artificial Intelligence noted, such crisis-control measures are urgently needed to integrate “automated escalation tripwires” into such systems “that would prevent the automated escalation of conflict.” Otherwise, some catastrophic version of World War III seems all too possible. Given the dangerous immaturity of such technology and the reluctance of Beijing, Moscow, and Washington to impose any restraints on the weaponization of AI, the day when machines could choose to annihilate us might arrive far sooner than we imagine and the extinction of humanity could be the collateral damage of such a future war.

*  *  *

Follow TomDispatch on Twitter and join us on Facebook. Check out the newest Dispatch Books, John Feffer’s new dystopian novel, Songlands (the final one in his Splinterlands series), Beverly Gologorsky’s novel Every Body Has a Story, and Tom Engelhardt’s A Nation Unmade by War, as well as Alfred McCoy’s In the Shadows of the American Century: The Rise and Decline of U.S. Global Power, John Dower’s The Violent American Century: War and Terror Since World War II, and Ann Jones’s They Were Soldiers: How the Wounded Return from America’s Wars: The Untold Story.

Michael T. Klare, a TomDispatch regular, is the five-college professor emeritus of peace and world security studies at Hampshire College and a senior visiting fellow at the Arms Control Association. He is the author of 15 books, the latest of which is All Hell Breaking Loose: The Pentagon’s Perspective on Climate Change. He is a founder of the Committee for a Sane U.S.-China Policy.

Tyler Durden
Fri, 07/21/2023 – 02:00

 
