Neural networks, predictive analytics, and artificial intelligence have the potential to greatly advance society in many ways. By applying algorithms to massive data sets, these technology tools could improve both speed and accuracy in decision-making. But should these technological wonders be utilized everywhere? Or are there some areas that should be off limits? While artificial intelligence offers clear advantages, its use raises some ethical concerns. This is particularly relevant when it comes to the use of artificial intelligence in military decision-making.
While there have been some connections between AI and the military, its role has been limited. But that looks to be changing, as the use of artificial intelligence in military settings is being increasingly considered. Specific events in recent years, especially those involving mass casualties, have fueled notions about its potential benefits. Proponents see the pairing of AI and military decision-making as a move in the right direction. But others worry such use could lead not only to bad results but to unethical practices overall. Given that the military looks to be on the verge of such actions, exploring this subject in greater depth is certainly worthwhile.
“DARPA envisions a future in which machines are more than just tools. The machines DARPA envisions will function more as colleagues than as tools.” – Statement released by DARPA’s AI Next Program
DARPA, AI and the Military
For decades, the Defense Advanced Research Projects Agency (DARPA) has been responsible for some remarkable technological innovations. Established under President Eisenhower, DARPA played a central role in developing the Internet, GPS, weather satellites, and much more. In fact, DARPA has explored the use of artificial intelligence in military operations for years. But only recently has the technology surrounding AI advanced enough to facilitate broader use. This culminated in the creation of the AI Next program in 2018, which has received roughly $2 billion in funding. Thus, AI and the military have a longer history together than many appreciate. And that relationship is about to expand significantly in the coming years.
The current program involving AI and the military is called “In the Moment.” Its purpose is to shift decision-making away from military commanders and toward AI systems. Given that many military situations are complex and stressful, many support the use of artificial intelligence in these determinations. Proponents suggest AI could remove the human biases that often exist, resulting in better choices. With this in mind, DARPA plans to advance the use of artificial intelligence in military defense. In fact, over 60 projects are being considered in this regard. Overall, the program is expected to run over the next 3.5 years and draw on both private and military expertise.
“[The complexities of many military situations] wouldn’t fit within the brain of a single human decision-maker. Computer algorithms may find solutions that humans can’t.” – Matt Turek, DARPA Program Manager overseeing the In the Moment program
Applications for AI Use in the Military
The impetus for exploring AI and the military use of related algorithms involves situations that pose high casualty risks. These may involve small-unit casualties, such as those involving special operations forces. Mass casualty events are also being contemplated as potential settings for artificial intelligence in military decision-making. And future uses might involve situations where major disaster relief is needed. These represent the type of complex circumstances where experts believe AI and human interactions could help. Rather than overwhelming military commanders, AI would leverage large datasets to offer the best solutions.
One of the major challenges for military commanders in these types of situations is processing vast amounts of information. Without question, this is a major advantage of AI. The use of artificial intelligence in military settings already benefits from these capacities. But expanding this to high-level decisions would be new. Proponents offer examples where AI could better assess available resources in catastrophic instances. Healthcare supplies, staffing, and medication information could be analyzed in relation to geographic events. In the process, this would enable better triage selections based on data instead of gut instinct. For those encouraging AI and military integration, this reflects their ideal vision.
“We know there’s bias in AI; we know that programmers can’t foresee every situation; we know that AI is not social; we know AI is not cultural. It can’t think about this stuff.” – Sally A. Applin, anthropologist and expert in AI ethics
An Array of Ethical Concerns
When it comes to AI and the military, many have expressed worries about choices being made by a machine. Putting algorithms in place can benefit straightforward decision-making, but what happens when moral dilemmas exist? For example, would civilians or military personnel be prioritized in triage when injuries are comparable? While the use of artificial intelligence in military situations is supposed to eliminate bias, this may not be the case. Its use in other settings has certainly demonstrated the presence of built-in bias. Thus, using a machine may be no better than allowing those who constructed it to make the choices.
The use of artificial intelligence in military decision-making also sacrifices the advantages of common sense and instinct. These human decision-making strategies are based on experience rather than hard facts and figures. And they’re certainly prone to error. But that doesn’t mean algorithms and datasets are always better either. Sometimes, raw data must yield to context and judgment. In addition, using a machine to make complex choices poses problems with responsibility and accountability. If an outcome is poor, who’s to blame? Does the buck stop at the top, or with the coders and algorithm experts? These issues, along with the risk of over-reliance on AI to always make the final determination, are serious concerns that need further exploration.
AI and the Military – An Adjunct, Not an Authority
When it comes to using artificial intelligence in military settings, there’s no doubt advantages exist. But it’s important to remember that AI is a technology and not a final decision-maker. As such, the ability of AI to effectively prioritize choices based on value, virtue, and experience is limited. Its capacity to make decisions without bias has also not been clearly demonstrated. Ultimately, complex decisions are just that…complex. And we shouldn’t allow machines to usurp our responsibility to make them. Like many other technologies before it, AI must be implemented with human oversight. This is particularly true for military situations where life-and-death determinations are all too frequent.