National Security

China’s Emerging Artificial Intelligence Army and the Danger of Smart Weapons

News of the next imminent leap in warfare technology came to the forefront once again in a recent Congressional hearing.

On January 9, William Carter, deputy director of the Technology Policy Program at the Center for Strategic and International Studies (CSIS), testified before the House Armed Services Committee regarding advances in computing and how one of America’s adversaries is leaving the U.S. military in the dust.

Carter presented an assessment of China’s growing investment in quantum computing and artificial intelligence, investment intended to help the nation fight wars more effectively. Without going into too much detail: the “traditional” computers in use today encode data in the form of binary digits (1s and 0s), each of which is either on or off. Quantum computers encode data in quantum bits, or qubits, which can occupy both values at once. This, in theory, allows for an exponential increase in the ability to process certain kinds of complex data.
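
The difference can be made concrete with a toy statevector simulation. The sketch below is plain Python and purely illustrative (the function names are mine, not drawn from any quantum computing library); it shows why describing an n-qubit register requires tracking 2^n complex amplitudes, while n classical bits hold just one of 2^n values at a time:

```python
import math

def classical_register(n, value):
    """A classical n-bit register holds exactly one of 2**n values."""
    assert 0 <= value < 2 ** n
    return value  # a single integer; one state at a time

def qubit_register(n):
    """An n-qubit register is described by 2**n complex amplitudes.
    Initialized here to the basis state |00...0>."""
    state = [0j] * (2 ** n)
    state[0] = 1 + 0j  # all amplitude on |00...0>
    return state

def uniform_superposition(n):
    """Amplitudes after a Hadamard gate on every qubit: equal weight
    across all 2**n basis states simultaneously."""
    amp = 1 / math.sqrt(2 ** n)
    return [amp + 0j] * (2 ** n)

for n in (1, 10, 20):
    print(f"{n} qubit(s): {2 ** n} amplitudes to track")
```

Each added qubit doubles the size of the state a classical simulator must track, which is exactly why certain problems that swamp conventional machines are, in principle, tractable for quantum ones.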

According to Carter, “China sees offensive […] artificial intelligence, and quantum technologies as key to enabling the PLA [People’s Liberation Army] to win wars in future, high-tech conditions and offset the advantages of the U.S. military, and has made significant strides in all of these areas.”

While many of the important implications of this news were unpacked during the hearing, one element stood out. Advancements in quantum computing differ from other improvements to warfighting in that they do not necessarily increase the power of weapons or the effectiveness of equipment or other military hardware. Increased computing power is designed simply to make the processes of attack and defense more efficient and accurate. This distinction is an important one. It goes to the heart of a long trend that has come to characterize advances in military technology.

To get a grasp of the idea at hand, we can turn to a colorful example from Hollywood. The principle above was put into action in the 2014 Tom Cruise production Edge of Tomorrow, a forgettable film with an intriguing premise.

Cruise and his co-protagonist, played by Emily Blunt, are soldiers of the future fighting an alien invasion of Earth. A series of mysterious time-altering phenomena gives both characters the unique ability to regenerate in the past after being neutralized in battle. Thus, the warriors are able to replay each skirmish with the enemy over and over until they find the winning strategy. The comrades observing them fight believe they have a near-prophetic ability to predict what the enemy will do next. Both become increasingly proficient in battle with each “re-play,” coming closer and closer to complete victory.

What is important to highlight is that Blunt and Cruise’s characters’ incremental increase in effectiveness throughout the plot is not because they become better soldiers. Rather, it is because they progressively learn more and more about the battles they fight. They know which threats pose a danger, and which ones do not. They know which enemies are necessary to eliminate and which ones they can ignore. In short, their aptitude in war fighting is due to super-accurate filtering.

From this perspective, foreknowledge, or the ability to give an accurate guess of what will happen in the future, is the single most important asset in conflict. This is true from both a defensive and offensive point of view. This is the primary goal of the entire military and national intelligence industry.

Most of the time, the prediction work of intelligence is meant to provide strategic context. It seeks to answer questions like: What is the enemy most likely to target? What will his capabilities be? As planning descends from broad strategy to the specifics of tactics, the ability to make these predictions begins to dissipate. Any serviceman with any field experience at all understands that an encounter with the enemy is no time to be writing a doctoral thesis. Decisions must be made quickly; instinct overrides analysis. The number of variables, the knowledge gaps, and the limits on time preclude any effective scrutiny of the scenario.

But what if we could overcome these challenges? What if the limits of human analysis could be replaced by the exponentially faster and more accurate computing power of machines?

This is where artificially intelligent warfighting comes in. Weapon systems infused with AI are deployed with the ability to apply this “filtering,” making their operations super-efficient and accurate. Unlike Cruise and Blunt’s characters, however, they do not need to experience any given battle ahead of time; they can assess the most efficient course of action near-instantaneously through advanced computing power.

Tracing the recent progress of smart weapons shows a series of incremental steps in increasing the capabilities of these tools.

Smart weapons began with the relatively simple goal of improving the effects of an action initiated by a human warfighter. This is the principle behind smart bombs, for instance. Laser-guided explosives have been around, at least in the experimental realm, since the 1960s, and continue to be used in today’s conflicts. The next step was to start giving weapons a level of autonomy. And this is where things entered a whole new domain.

Governments have been openly discussing the implementation of autonomous decision making in defense systems for the past several years. One important area that has already seen a lot of advancement in this regard is aerial drone production. Each successive generation of drones has received more and more autonomy and control. Military drones today can take off, land, and even perform certain operational functions with complete independence.
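
That progression toward independence can be pictured as a control loop that moves the aircraft through mission phases without operator input. The sketch below is a deliberately simplified state machine; every mode name and threshold is invented for illustration and drawn from no real platform:

```python
from enum import Enum, auto

class Mode(Enum):
    """Flight phases of a hypothetical autonomous drone."""
    TAKEOFF = auto()
    CRUISE = auto()
    SURVEY = auto()
    RETURN = auto()
    LAND = auto()

def next_mode(mode, altitude_m, waypoints_left, battery_pct):
    """Choose the next flight mode from sensor state -- no human input."""
    # Safety rule first: low battery aborts the mission and heads home.
    if battery_pct < 20 and mode not in (Mode.RETURN, Mode.LAND):
        return Mode.RETURN
    if mode is Mode.TAKEOFF and altitude_m >= 100:
        return Mode.CRUISE       # reached cruising altitude
    if mode is Mode.CRUISE and waypoints_left > 0:
        return Mode.SURVEY       # begin working through waypoints
    if mode is Mode.SURVEY and waypoints_left == 0:
        return Mode.RETURN       # mission complete, head home
    if mode is Mode.RETURN and altitude_m < 5:
        return Mode.LAND         # final descent
    return mode                  # otherwise hold the current mode
```

Every transition here is decided by the machine itself; the only human contribution is the mission plan loaded before takeoff, which is precisely the shift each drone generation has deepened.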

The development of new defensive systems has been all about autonomy. Consider the recently implemented WindGuard anti-missile system, a joint development of Rafael Advanced Defense Systems and Israel Aerospace Industries’ Elta Group. The system neutralizes incoming projectiles without any human input. The next-generation anti-missile system dubbed David’s Sling, a development of Rafael in collaboration with American defense contractor Raytheon, became operational earlier this year and brought full automation to long-range missile defense.

The success of these systems is not a function of firepower, range, or any other traditional measuring factor, but rather filtering accuracy. The computing power of these weapons quickly sorts the irrelevant from the dangerous, and focuses resources only on the necessary.
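
That sorting process can be illustrated with a toy triage loop: score every detected track, discard the irrelevant, and rank what remains by urgency. All the tracks, weights, and the threshold below are invented for illustration; no real defense system works from so crude a model:

```python
def threat_score(track):
    """Crude urgency score: only inbound objects matter, and fast,
    close ones rank highest. Weights are arbitrary, for illustration."""
    if not track["inbound"]:
        return 0.0
    speed_factor = track["speed_mps"] / 1000.0            # faster -> higher
    proximity = max(0.0, 1.0 - track["range_km"] / 50.0)  # closer -> higher
    return speed_factor * proximity

def prioritize(tracks, threshold=0.5):
    """Filter out the irrelevant and return genuine threats, most urgent first."""
    scored = sorted(((threat_score(t), t) for t in tracks),
                    key=lambda pair: -pair[0])
    return [t for score, t in scored if score >= threshold]

tracks = [
    {"id": "bird",     "inbound": True,  "speed_mps": 20,  "range_km": 2},
    {"id": "rocket",   "inbound": True,  "speed_mps": 700, "range_km": 10},
    {"id": "airliner", "inbound": False, "speed_mps": 250, "range_km": 40},
]
print([t["id"] for t in prioritize(tracks)])  # only the rocket passes the filter
```

The bird is close but slow, the airliner is outbound, and only the fast inbound rocket earns an interceptor: filtering accuracy, not firepower, is what the system is spending.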

Now that automation is fully underway in defensive tools, the next “logical” step is to automate offensive weapons. This prospect has caused quite a stir in the international community, and not a small amount of controversy.

There are two points to consider as the world anticipates automated killing systems becoming the norm.

First are the obvious moral implications. Automation of offensive actions would essentially mean handing life-taking decisions to machines. While death is an unavoidable fact of war, warfighters of ethically oriented nations still do not take the act of large-scale killing lightly, especially when the potential for collateral damage is at play.

The second issue is the risk that automated weapons will bring the destruction wrought by armed conflict to a whole new scale, with potentially devastating consequences. By limiting human involvement, there is a serious risk that warfare will become too destructive, and too fast to control.

Both of these concerns were highlighted in an open letter to the UN Convention on Certain Conventional Weapons, signed by the founders of more than 100 international robotics and artificial intelligence companies. The letter raised the dangers of “Lethal Autonomous Weapon Systems.” The signatories requested that the Convention’s member countries find “means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.”

Regarding the potential dangers of automated systems, the letter warns that “lethal autonomous weapons threaten to become the third revolution in warfare” and that, once developed, they “will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.”

Against this backdrop, it is easy to understand why US policymakers may be a bit antsy about rival China basing its entire defense apparatus on artificial intelligence. On the one hand, it seems (at least from what is publicly known) that the US and many other Western nations may be falling behind on integrating AI into their militaries.

On the other hand, open US investment in this development may trigger an arms race for automation that will drive the threats from these systems further out of control. Of course, it is exceedingly difficult to achieve any international consensus on these issues. Asking countries to sign a treaty banning a weapon that doesn’t yet exist means asking them to forgo a potentially useful tool to defend against threats and save the lives of their own citizens.

However, some hope may exist. As one observer has pointed out, there is potential for fostering an “international taboo” around automation, similar to the one that exists today toward chemical weapons. It is important to note, however, that this taboo was forged by the experience of World War I: the widespread devastation caused by weaponized chemicals left a sour taste in the mouths of many Western nations.

We should hope that it does not take a similar catastrophe to bring about a consensus on autonomous weapons.

Samuel Siskind

Samuel Siskind studied intelligence research at the American Military University in West Virginia. He served as a squad commander in the Israel Defense Forces (IDF) Corps of Combat Engineers, in the Corps’ ground battalions and later in its Intelligence Wing at regional and divisional stations. For the past five years, Samuel has worked as a consultant and researcher on physical and information security issues for private and governmental institutions in the US, Africa, India, and Israel. He currently lives in Jerusalem.
