Artificial Intelligence and Concerns about its Military Applications

After decades of being mostly relegated to science fiction, Artificial Intelligence (AI) is today an undeniable reality. Whether in a flashy example, like IBM’s Watson outsmarting human champions on Jeopardy!, or a subtler one, like intelligent software agents augmenting existing systems to improve performance, it’s clear that AI has become part of our daily lives.

As has always been the case with AI, the excitement over computers performing human-like functions comes with a certain amount of concern. It’s not surprising that recent advances in AI have prompted more voices of caution, particularly regarding AI’s military applications. These concerns have largely focused on weaponized drones and other autonomous systems directed by AI rather than by humans. I believe some degree of concern is warranted, but general unfamiliarity with AI may be inflating that concern beyond what the technology justifies.

Much has been made of prominent scientists’ warnings about AI, but these scientists aren’t advocating some impossible reversal of the AI technological tide. Their concerns revolve around human control. The summary of a July 28, 2015 open letter signed by nearly three thousand AI and robotics researchers (and endorsed by luminaries like Stephen Hawking and Elon Musk) states that the signers “believe that AI has great potential to benefit humanity in many ways,” but goes on to recommend a “ban on offensive autonomous weapons beyond meaningful human control.” This is sound advice. Like other classes of weapons (nuclear, chemical, biological, or even conventional land mines), AI-enabled weaponry demands international agreement and control over its use.

That said, the potential power of AI in decision support and planning cannot be overstated. By reducing the fog of war and strengthening situational understanding, it could provide unrivaled command-and-control (or mission command) capability. The solutions that cognitive computing enables allow people and computers to build better plans, faster and in greater detail, and then execute those plans more effectively through continuous monitoring and assessment. Pairing human creativity, insight, and complex pattern recognition with the computer’s ability to manage massive amounts of knowledge, work through every detail, and monitor everything continuously yields a powerful capability. Through the prudent application of AI, that capability is within reach.

The AI community, as well as policy makers around the world, should contemplate how to develop this technology in a responsible and ethical way. Concern is warranted, but the great benefits of AI technology, both in terms of the peace attainable through the deterrence of a dominant military capability and the multitude of quality of life improvements that AI offers, can be secured while addressing these concerns.

Policy makers should immediately begin building an international consensus that mandates human control of AI-enhanced weapon systems. The Department of Defense (DoD) will need to include provisions for maintaining human control of autonomous systems. Finally, while we are still a decade or more away from General AI and its potential threat remains nebulous, now is the time to start the dialogue that will determine the safeguards and mechanisms for the prudent control of AI technology, even as AI-enabled systems rapidly transform our lives for the better.

There is significant confusion about a number of AI concepts. For a more thorough treatment of the technological concepts behind AI, read the full article.

About the Author: Dr. Todd Carrico is the President and CEO of Cougaar Software, Inc. and a recognized expert in cognitive computing and intelligent distributed systems.