By Stephanie Chenault; Maj. Scott Kinner, USMC (Ret.); and Maj. Kurt Warner, USA (Ret.)
The U.S. Defense Department lags the hype cycle for artificial intelligence, machine and deep learning, and implementations such as natural language processing by years. It must uncover the root causes of this delay and create winning strategies that overcome institutional obstacles and put it ahead of industry partners and adversaries who are further along the adoption curve.
Possessing technology is neither deterministic nor decisive in waging war. It is the effective employment and deliberate application of technologies to enhance warfighting capabilities, suitably coupled with offensive and defensive tactics, that confers an advantage over an adversary.
With the big data bang of the 2000s, a global need arose to create sophisticated computational models and deploy new tools to better understand massive volumes of information. The prevailing urgency now is to spin international data saturation into financial gold on an industrial scale. With its tremendous purchasing power, the military is in a position to shape new technologies to its needs.
One cause for the holdup is that the military services are unprepared to make the necessary policy shifts; their beleaguered acquisition process is another. A third reason is a lack of unity of effort: the services appear to be racing against each other rather than working collaboratively to bring artificial intelligence (AI) to market.
Cyber and AI are tightly coupled focus areas for the Defense Department. In recent months the department has completed a comprehensive new strategy and conducted a cyber posture review. It is building a Joint Artificial Intelligence Center to guide the planning and execution of national priorities, including enterprise-worthy, high-budget efforts to apply AI to a cluster of joint challenges.
It is paramount that the information technology workforce, from leaders to coders, be conversant in how AI works, how it is evolving and how it can be employed effectively. Currently, the military runs a considerable, paradoxical risk: squandering resources on activities it cannot yet perform while failing to prepare to exploit the capabilities at which it excels.
A thoughtful, studied process is required to conduct AI preparation of the battlefield. This entails rebooting data discipline from source to sink, a task with little intrinsic appeal. Unfortunately, this reboot comes just as the military has begun to realize its return on investment in data management tools and outcome-driven business processes and has adopted an emphasis on portfolio performance goals. However, it is the approach that will get the military's data trained and ready to work for it.
Several potential methods could be applied to this project. One is to leap straight into a turnkey solution, customize commercial offerings or buy the black box. Any of these approaches would allow the military to fail fast, learn and adjust. However, buyers spending tax dollars cannot afford the reputational hit of being unprepared. Even smart computational algorithms overwhelmingly fail to fire for effect because of missing or bad data.
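To make that data dependency concrete, consider a minimal sketch in Python, assuming a hypothetical pandas table of incoming reports with invented column names, of the kind of data-quality gate a dataset would need to pass before any algorithm could be expected to fire for effect.

```python
import pandas as pd

# Hypothetical illustration: a minimal data-quality gate a dataset must pass
# before it is fed to a model. The required column names are invented.
REQUIRED_COLUMNS = {"report_id", "timestamp", "latitude", "longitude", "source"}

def data_quality_report(df: pd.DataFrame) -> dict:
    """Summarize the defects that most often keep algorithms from firing for effect."""
    return {
        "missing_columns": sorted(REQUIRED_COLUMNS - set(df.columns)),
        "null_fraction": float(df.isna().mean().mean()),  # overall share of empty cells
        "duplicate_rows": int(df.duplicated().sum()),     # exact duplicate records
        "row_count": len(df),
    }

def passes_gate(report: dict, max_null_fraction: float = 0.05) -> bool:
    """Reject the dataset if columns are missing, nulls exceed the ceiling or duplicates remain."""
    return (not report["missing_columns"]
            and report["null_fraction"] <= max_null_fraction
            and report["duplicate_rows"] == 0)
```

A gate this simple will not fix bad data, but running it at every handoff from source to sink exposes the missing-or-bad-data problem before, rather than after, an expensive model is trained on it.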
Ill-defined processes and process-controlled boundary conditions also can affect the success of commercial products. AI readiness reaches ignition velocity when core processes are optimized to run efficiently, when enterprise data is put into context and when data delivery channels are safe from intrusion, tampering and exploitation.
A deliberate, phased introduction of augmented capabilities under semi-autonomous or human-in-the-loop supervision keeps the military from falling into the familiar trap of purchasing capabilities it is not fully prepared to leverage.
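As a minimal sketch of that human-in-the-loop pattern, assume a hypothetical model that emits a recommendation with a confidence score; the supervision logic below acts autonomously only above a set threshold and escalates everything else to a human reviewer.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    label: str         # the action or classification the AI proposes (hypothetical)
    confidence: float  # model-reported confidence, 0.0 to 1.0

def supervise(rec: Recommendation,
              act: Callable[[str], None],
              escalate: Callable[[Recommendation], None],
              threshold: float = 0.95) -> None:
    """Act autonomously only on high-confidence output; otherwise keep a human in the loop."""
    if rec.confidence >= threshold:
        act(rec.label)   # semi-autonomous path: the system proceeds on its own
    else:
        escalate(rec)    # human-in-the-loop path: a person reviews before anything happens
```

The threshold is the dial a phased introduction would turn: set high at fielding so nearly everything routes to a human, then relaxed only as measured performance justifies more autonomy.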
To take full advantage of AI, the Defense Department must accept certain precepts. AI is not as smart as the department wishes or fears it is and requires customers to determine and supply a great deal of clean data. In addition, if the military waits for AI to mature or for the cost of AI experts to drop, it is already behind the curve for successful implementation and usage. Finally, for AI to be successful, a relatively stable structure for its applications is required.
The department cannot simply buy AI technologies to gain an advantage over adversaries without first addressing the many policies, processes and cultural factors that will affect their deployment and employment. Military conservatism makes the services leery of moving away from proven traditional technologies. In addition, the same military culture that prizes authority and intuitive decision making can dampen enthusiasm for any technology that challenges traditional authorities and the veracity of intuition.
AI, and specifically narrow AI, acts intelligently when it is unleashed, leading users in unanticipated directions and revealing problems that require additional study. Because it makes choices that are appropriate only within its perceptual and computational limitations, new frameworks must be put in place to organize thoughts and expectations around the tasks for which the department requires computer-aided assistance.
Using AI to prepare the battlefield requires the Defense Department to determine the tasks it might need an artificial intelligence agent to accomplish. The department must consider the data sources currently available to teach the proposed AI capability about the knowledge domain in which it will operate, then verify those sources exist and are complete enough to be useful.
The military will need to establish performance metrics for AI, defining the acceptable range of answers, courses of action or actions an AI could provide or perform. In addition, the department must develop a task list covering the multitude of AI tools that must be created and applied to certain discrete tasks.
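One way to make an acceptable range concrete is an acceptance gate a candidate tool must clear before it is fielded against a task. The sketch below is illustrative only; the metric names and floor values are invented assumptions, not departmental standards.

```python
# Hypothetical illustration: acceptance floors an AI tool must meet before
# it is trusted with a discrete task. Metric names and values are invented.
ACCEPTANCE_THRESHOLDS = {
    "precision": 0.90,  # share of items the tool flagged that were truly relevant
    "recall": 0.85,     # share of truly relevant items the tool actually flagged
}

def meets_acceptance_criteria(measured: dict) -> bool:
    """True only when every required metric is at or above its floor."""
    return all(measured.get(name, 0.0) >= floor
               for name, floor in ACCEPTANCE_THRESHOLDS.items())

print(meets_acceptance_criteria({"precision": 0.93, "recall": 0.88}))  # True
print(meets_acceptance_criteria({"precision": 0.93, "recall": 0.80}))  # False
```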
To prepare the battlespace using AI effectively, Defense Department leadership must consider the doctrinal changes needed to leverage the narrow AI solutions the department creates. In addition, the military must employ emerging AI technologies in bounded situations where humans can provide feedback and assist in determining when AI usage is successful.
The military also must address unmanned vehicle systems (UVSs) issues. Some UVSs involve robotics, while many still have a person in the loop. Throwbots, micro and nano devices, swarms and loitering munitions all appear in military chitchat without the necessary understanding of the mechanics and engineering behind them. AI on the battlefield doesn't have to be SKYNET-enabled slaughterbots. It can help the military modernize in more subtle, elegant technical areas, including network healing, document exploitation, autonomous health monitoring, problem-seeking cyber hardening, sovereign identification and predictive geopositioning.
AI practitioners and influencers should develop the constructs for evaluating the soft power of AI while considering the ethical, philosophical and humanitarian constraints, including relevant treaties, rules of engagement and conventions. This assessment can develop in parallel with state-of-the-market AI technologies today.
The Defense Department is still in the process of improving procedures and perfecting electronic warfare pattern analysis. The autonomy that applies to weapon systems is already tightly controlled—by effectiveness not by ethics—and AI is unlikely to turn the warfighting forces into slaughterbots anytime soon.