How the Pentagon’s Skynet Would Automate War

Mass surveillance, drone swarms, cyborg soldiers, telekinesis, synthetic organisms, and laser beams will determine future conflict by 2030.

Pentagon officials are worried that the US military is losing its edge compared to competitors like China, and are willing to explore almost anything to stay on top—including creating watered-down versions of the Terminator.

Due to technological revolutions outside its control, the Department of Defense (DoD) anticipates the dawn of a bold new era of automated war within just 15 years. By then, its officials believe, wars could be fought entirely using intelligent robotic systems armed with advanced weapons.

Last week, US defense secretary Chuck Hagel announced the ‘Defense Innovation Initiative’—a sweeping plan to identify and develop cutting-edge technology breakthroughs “over the next three to five years and beyond” to maintain global US “military-technological superiority.” Areas to be covered by the DoD programme include robotics, autonomous systems, miniaturization, Big Data and advanced manufacturing, including 3D printing.

But just how far down the rabbit hole Hagel’s initiative could go—whether driven by desperation, fantasy or hubris—is revealed by an overlooked Pentagon-funded study, published quietly in mid-September by the DoD National Defense University’s (NDU) Center for Technology and National Security Policy in Washington DC.

The 72-page document throws detailed light on the far-reaching implications of the Pentagon’s plan to monopolize imminent “transformational advances” in biotechnology, robotics and artificial intelligence, information technology, nanotechnology, and energy.

Hagel’s initiative is being overseen by deputy defense secretary Robert O. Work, lead author of a report released last January by the Center for a New American Security (CNAS), “20YY: Preparing for War in the Robotic Age.”

Work’s report is also cited heavily in the new study published by the NDU, a Pentagon-funded higher education institution that trains US military officials and develops government national security strategy and defense policies.

The NDU study warns that while accelerating technological change will “flatten the world economically, socially, politically, and militarily, it could also increase wealth inequality and social stress,” and argues that the Pentagon must take drastic action to avoid the potential decline of US military power: “For DoD to remain the world’s preeminent military force, it must redefine its culture and organizational processes to become more networked, nimble, and knowledge-based.”

The authors of the NDU paper, Dr James Kadtke and Dr Linton Wells, are seasoned long-term Pentagon advisers, both affiliated with the NDU’s technology center which produces research “supporting the Office of the Secretary of Defense, the Services, and Congress.”

Kadtke was previously a senior official at the White House’s National Nanotechnology Coordinating Office, while Wells—who served under Paul Wolfowitz as DoD chief information officer and deputy assistant defense secretary—was until this June NDU’s Force Transformation Chair.

Wells also chairs the little-known ‘Highlands Forum,’ which is run by former Pentagon staffer Richard O’Neill on behalf of the DoD. The Forum brings together military and information technology experts to explore the defense policy issues arising from the impact of the internet and globalization.

Explaining the Highlands Forum process in 2006 to Government Executive magazine, Wells described the Forum as a DoD-sponsored “idea engine” that “generates ideas in the minds of government people who have the ability to act through other processes… What happens out of Highlands is you get people who come back with an idea and say, ‘Now how can I cause this to happen?'”

Big Data’s Big Brother

A key area emphasized by the Wells and Kadtke study is improving the US intelligence community’s ability to automatically analyze vast data sets without the need for human involvement.

Pointing out that “sensitive personal information” can now be easily mined from online sources and social media, they call for policies on “Personally Identifiable Information (PII) to determine the Department’s ability to make use of information from social media in domestic contingencies”—in other words, to determine under what conditions the Pentagon can use private information on American citizens obtained via data-mining of Facebook, Twitter, LinkedIn, Flickr and so on.

Their study argues that DoD can leverage “large-scale data collection” for medicine and society, through “monitoring of individuals and populations using sensors, wearable devices, and IoT [the ‘Internet of Things’]” which together “will provide detection and predictive analytics.” The Pentagon can build capacity for this “in partnership with large private sector providers, where the most innovative solutions are currently developing.”
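
To make that concrete, here is a minimal sketch of the kind of sensor-driven “predictive analytics” the study gestures at: a wearable feed monitored against a rolling baseline, with sharp deviations flagged for follow-up. Everything here, from the heart-rate numbers to the thresholds, is invented for illustration.

```python
# Minimal sketch of sensor-stream "predictive analytics": flag readings
# that deviate sharply from a rolling baseline. All data are invented.
from collections import deque
from statistics import mean, stdev

def flag_anomalies(readings, window=20, threshold=3.0):
    """Yield (index, value) for readings more than `threshold` standard
    deviations from the rolling mean of the previous `window` samples."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield i, value
        history.append(value)

# Hypothetical heart-rate stream: steady baseline, then one sudden spike.
stream = [72, 74, 71, 73, 75, 70, 72, 74, 73, 71] * 3 + [140]
print(list(flag_anomalies(stream, window=10)))  # flags the spike at the end
```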

In particular, the Pentagon must improve its capacity to analyze data sets quickly, by investing in “automated analysis techniques, text analytics, and user interface techniques to reduce the cycle time and manpower requirements required for analysis of large data sets.”
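
What “automated analysis techniques” for shrinking analyst cycle time might look like in practice is suggested by even the simplest text-analytics tools: cluster a pile of documents by vocabulary, so a human reviews a handful of clusters instead of every item. A minimal sketch using the open-source scikit-learn library (the documents are invented, and this is not the Pentagon’s tooling):

```python
# Illustrative text-analytics triage: cluster documents by TF-IDF
# similarity so an analyst reviews clusters rather than every item.
# Requires scikit-learn; the documents are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "shipment delayed at border checkpoint",
    "checkpoint logistics and border delays",
    "new laser weapon test aboard navy vessel",
    "navy vessel fires directed energy weapon in test",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)

for label, doc in zip(labels, docs):
    print(label, doc)  # similar documents land in the same cluster
```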

Kadtke and Wells want the US military to take advantage of the increasing interconnection of people and devices via the new ‘Internet of Things’ through the use of “embedded systems” in “automobiles, factories, infrastructure, appliances and homes, pets, and potentially, inside human beings.” Due to the advent of “cloud robotics… the line between conventional robotics and intelligent everyday devices will become increasingly blurred.”

Cloud robotics, a term coined by Google’s new robotics chief, James Kuffner, allows individual robots to augment their capabilities by connecting through the internet to share online resources and collaborate with other machines. By 2030, nearly every aspect of global society could become, in their words, “instrumented, networked, and potentially available for control via the Internet, in a hierarchy of cyber-physical systems.”
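
The mechanics of cloud robotics are simple to caricature: rather than each robot computing everything locally, it queries a shared service, and whatever one robot learns becomes instantly available to the fleet. A toy sketch, with an in-process dictionary standing in for the actual cloud service:

```python
# Toy illustration of the cloud-robotics idea: individual robots offload
# heavy inference to a shared service and reuse each other's results.
# The "cloud" here is an in-process dict standing in for a real service.

CLOUD_CACHE = {}  # shared knowledge pool, visible to every robot

def cloud_recognise(image_id):
    """Pretend remote inference: expensive the first time, then cached
    for every robot in the fleet."""
    if image_id not in CLOUD_CACHE:
        CLOUD_CACHE[image_id] = f"label-for-{image_id}"  # stand-in result
    return CLOUD_CACHE[image_id]

class Robot:
    def __init__(self, name):
        self.name = name

    def identify(self, image_id):
        label = cloud_recognise(image_id)  # offload instead of local compute
        return f"{self.name} sees {label}"

print(Robot("r1").identify("img-42"))  # first robot pays the lookup cost
print(Robot("r2").identify("img-42"))  # second robot reuses the shared result
```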

Yet the most direct military application of such technologies, the Pentagon study concludes, will be in “Command-Control-Communications, Computers and Intelligence-Surveillance-Reconnaissance (C4ISR)”—a field led by “world-class organizations such as the National Security Agency (NSA).”

Clever Kill Bots in the Cloud

Within this context of Big Data and cloud robotics, Kadtke and Wells enthuse that as unmanned robotic systems become more intelligent, the cheap manufacture of “armies of Kill Bots that can autonomously wage war” will soon be a reality. Robots could also become embedded in civilian life to perform “surveillance, infrastructure monitoring, police telepresence, and homeland security applications.”

The main challenge to such robot institutionalization will come from a “political backlash” to robots being able to determine by themselves when to kill.

To counter public objections, they advocate that the Pentagon should be “highly proactive” in ensuring “it is not perceived as creating weapons systems without a ‘human in the loop.’ It may be that DoD should publicly self-limit its operational doctrine on the use of such systems to head off public or international backlash to its development of autonomous systems.”
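
What a ‘human in the loop’ amounts to in software terms is a single gate in the decision pipeline. The sketch below, entirely hypothetical in its names and structure, shows how thin that gate can be: the autonomous side nominates, and one human keystroke decides whether the machine’s recommendation proceeds.

```python
# Sketch of a "human in the loop" gate: the autonomous pipeline can
# nominate a target but cannot act without explicit human confirmation.
# Entirely hypothetical; names and structure are invented for illustration.

def autonomous_nominate(sensor_track):
    """Machine side: nominate a track if its confidence score is high."""
    return sensor_track["confidence"] > 0.9

def human_confirm(sensor_track):
    """Human side: a person must review and explicitly approve."""
    answer = input(f"Engage track {sensor_track['id']}? [y/N] ")
    return answer.strip().lower() == "y"

def engagement_decision(sensor_track):
    if not autonomous_nominate(sensor_track):
        return "no action"
    if not human_confirm(sensor_track):  # the gate the doctrine promises
        return "held by human operator"
    return "authorised by human operator"

print(engagement_decision({"id": "T-01", "confidence": 0.95}))
```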

Despite this PR move, they recommend that DoD should still “remain ahead of the curve” by developing “operational doctrine for forces made up significantly or even entirely of unmanned or autonomous elements.” [emphasis added]

The rationale is to “augment or substitute for human operators” as much as possible, especially for missions that are “hazardous,” “impractical,” or “impossible” for humans (like, perhaps, all wars?). In just five years, the study reports, Pentagon research to improve robot intelligence will bear “significant advances.”

Skynet by 2020s?

Perhaps the most disturbing dimension among the NDU study’s insights is the prospect that within the next decade, artificial intelligence (AI) research could spawn “strong AI”—or at least a form of “weak AI” that approximates some features of the former.

Strong AI would be able to simulate a wide range of human cognition, including traits like consciousness, sentience, sapience, or self-awareness. Many now believe, Kadtke and Wells observe, that “strong AI may be achieved sometime in the 2020s.”

They report that a range of technological advances support “this optimism,” especially that “computer processors will likely reach the computational power of the human brain sometime in the 2020s”—Intel aims to reach this milestone by 2018. Other relevant advances in development include “full brain simulations, neuro-synaptic computers, and general knowledge representation systems such as IBM Watson.”

As the costs of robotics manufacturing and cloud computing plummet, the NDU paper says, AI advances could even allow for automation of high-level military functions like “problem solving,” “strategy development” or “operational planning.”
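
‘Operational planning’ is, at bottom, a search problem, which is part of why it looks like plausible automation bait. A deliberately toy sketch: breadth-first search from an initial state to a goal, through a handful of invented abstract actions that bear no relation to real doctrine.

```python
# Minimal flavour of automated "operational planning": breadth-first
# search from an initial state to a goal through abstract actions.
# States and actions are invented placeholders, not real doctrine.
from collections import deque

ACTIONS = {          # state -> {action: next_state}
    "base":    {"deploy": "staged"},
    "staged":  {"scout": "located", "withdraw": "base"},
    "located": {"secure": "objective_held"},
}

def plan(start, goal):
    """Return the shortest action sequence from start to goal, or None."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for action, nxt in ACTIONS.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + [action]))
    return None

print(plan("base", "objective_held"))  # ['deploy', 'scout', 'secure']
```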

“In the longer term, fully robotic soldiers may be developed and deployed, particularly by wealthier countries,” the paper says (thankfully, no plans to add ‘living tissue’ on the outside are mentioned).

The study thus foresees the Pentagon playing a largely supervisory role over autonomous machines as increasingly central to all dimensions of warfare—from operational planning to identifying threats via surveillance and social media data-mining; from determining enemy targets to actually pursuing and executing them.

There is no soul-searching, though, about the obvious risks of using AI to automate such core elements of military planning and operations, beyond the following oblique sentence: “One negative aspect of these trends, however, lies in the risks that are possible due to unforeseen vulnerabilities that may arise from the large scale deployment of smart automated systems, for which there is little practical experience.”

But if the reservations of billionaire tech entrepreneur Elon Musk are anything to go by, the Pentagon’s confidence is deeply misplaced. Musk, an early investor in the AI company DeepMind now owned by Google, has warned of “something dangerous” happening in five years due to “close to exponential” growth of AI at the firm—and some AI experts agree.

Synthetic Genetically Enhanced Laser-Armed Prosthetic People

As if this wasn’t disturbing enough, Kadtke and Wells go on to chart developments across a wide range of other significant technologies. They point to the development of Directed Energy Weapons (DEW) that project electromagnetic radiation as laser light, and which are already being deployed in test form.

This August, USS Ponce deployed with an operational laser—a matter that was only reported in the last few days. DEWs, the NDU authors predict, “will be a very disruptive military technology” due to “unique characteristics, such as near-zero flight time, high accuracy, and an effectively infinite magazine.” The Pentagon plans to deploy DEWs widely aboard ships within a few years.
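
The “near-zero flight time” claim is simple physics: light crosses tactical distances in microseconds, where even a fast interceptor needs seconds. A back-of-envelope comparison at 5 km (the figures are illustrative):

```python
# Back-of-envelope illustration of "near-zero flight time": light reaches
# a target at 5 km in microseconds; a Mach-3 projectile takes seconds.
C = 299_792_458   # speed of light, m/s
MACH3 = 3 * 343   # ~Mach 3 at sea level, m/s
distance = 5_000  # metres to target

print(f"laser:       {distance / C * 1e6:.1f} microseconds")  # ~16.7 us
print(f"Mach-3 shot: {distance / MACH3:.1f} seconds")         # ~4.9 s
```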

The Pentagon also wants to harvest technologies that could ‘upgrade’ human physical, psychological, and cognitive makeup. The NDU paper catalogues a range of relevant fields, including “personalized (genetic) medicine, tissue and organ regeneration via stem cells, implants such as computer chips and communication devices, robotic prosthetics, direct brain-machine interfaces, and potentially direct brain-brain communications.”

Another area experiencing breakthrough developments is synthetic biology (SynBio). Scientists have recently created cells whose DNA incorporates non-natural base pairs, opening the door to create entirely new “designer life forms,” the Pentagon report enthuses, and to engineer them with “specialized and exotic properties.”

Kadtke and Wells flag up a recent Pentagon assessment of current SynBio research suggesting “great promise for the engineering of synthetic organisms” useful for a range of “defense relevant applications.”

It is already possible to replace organs with artificial electro-mechanical devices for a wide range of body parts. Citing ongoing US Army research on “cognition and neuro-ergonomics,” Kadtke and Wells forecast that: “Reliable artificial lungs, ear and eye implants, and muscles will all likely be commercially available within 5 to 10 years.” Even more radically, they note the emerging possibility of using stem cells to regenerate every human body part.

Meshing such developments with robotics has further radical implications. The authors highlight successful demonstrations of implantation of silicon memory and processors into the brain, as well as “purely thought controlled devices.” In the long term, these breakthroughs could make ‘wearable devices’ like Google Glass look like ancient fossils, superseded by “distributed human-machine systems employing brain-machine interfaces and analog physiomimetic processors, as well as hybrid cybernetic systems, which could provide seamless and artificially enhanced human data exploration and analysis.”

We’re All Terror Suspects

Taken together, the “scientific revolutions” catalogued by the NDU report—if militarized—would grant the DoD “disruptive new capabilities” of a virtually totalitarian quality.

As I was told by former NSA senior executive Thomas Drake, the whistleblower who inspired Edward Snowden, ongoing Pentagon-funded research on data-mining feeds directly into fine-tuning the algorithms used by the US intelligence community to identify not just ‘terror suspects’, but also targets for the CIA’s drone-strike kill lists.

Nearly half the people on the US government’s terrorism watch list of “known or suspected terrorists” have “no recognized terrorist group affiliation,” and more than half the victims of CIA drone-strikes over a single year were “assessed” as “Afghan, Pakistani and unknown extremists”—among others who were merely “suspected, associated with, or who probably” belonged to unidentified militant groups. Multiple studies show that a substantial number of drone strike victims are civilians—and a secret Obama administration memo released this summer under Freedom of Information reveals that the drone programme authorizes the killing of civilians as inevitable collateral damage.

Indeed, flawed assumptions in the Pentagon’s classification systems for threat assessment mean that even “nonviolent political activists” might be conflated with potential ‘extremists’, who “support political violence” and thus pose a threat to US interests.
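
The statistical trap here is the base-rate fallacy: screen a large population for a vanishingly rare category, and even a very accurate classifier flags overwhelmingly innocent people. A hypothetical worked example, with every number invented:

```python
# Hypothetical base-rate arithmetic: even an implausibly accurate
# classifier, applied to a large population with a tiny true-positive
# base rate, flags mostly innocent people. All numbers are invented.
population  = 300_000_000  # people screened
base_rate   = 1e-6         # assumed fraction who are actual threats
sensitivity = 0.99         # fraction of real threats caught
false_pos   = 0.001        # fraction of innocents misflagged (0.1%)

true_hits  = population * base_rate * sensitivity
false_hits = population * (1 - base_rate) * false_pos
print(f"true hits:  {true_hits:,.0f}")    # ~297
print(f"false hits: {false_hits:,.0f}")   # ~300,000
print(f"flagged who are innocent: {false_hits / (true_hits + false_hits):.1%}")
```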

It is far from clear that the Pentagon’s Skynet-esque vision of future warfare will actually reach fruition. That the aspiration is being pursued so fervently in the name of ‘national security,’ in the age of austerity no less, certainly raises questions about whether the most powerful military in the world is not so much losing its edge, as it is losing the plot.

Nafeez Ahmed, Ph.D. is an investigative journalist and international security scholar. He is author of A User’s Guide to the Crisis of Civilization and the sci-fi thriller, Zero Point.

Source: https://www.vice.com/en/article/8qxvvg/how-the-pentagons-skynet-would-automate-war