Does Rising Artificial Intelligence Pose a Threat?

By Scot A. Terban

Date Originally Published: February 18, 2019.

Summary: Artificial Intelligence, or A.I., has long been a subject of science fiction that usually ends badly for the human race in some way. From the ‘Terminator’ films to ‘Wargames,’ a dangerous A.I. is a common theme. In reality, A.I. could go either way depending on the circumstances. At its present state of development, however, its use on the battlefield, both political and military, is more of a danger than a boon.

Text: Artificial intelligence (A.I.) has been a staple of science fiction over the years, but recently the technology has become a more probable reality[1]. Semi-intelligent computer programs and systems have made our lives a bit easier for certain tasks, such as asking an Alexa to turn on the lights in a room, play some music, or answer a question. However, other uses for such technologies have already been planned, and in some cases implemented, within the military and private industry for security-oriented and offensive purposes.


Automated or A.I. systems that can find weaknesses in networks and systems, as well as automated A.I.s with fire control over certain remotely operated vehicles, are on the near horizon. Just as Google and others have built self-driving cars with an A.I. component that makes decisions in emergency situations, such as crash scenarios involving pedestrians, the same technologies are already being discussed for warfare. In the case of automated cars with rudimentary A.I., we have already seen deaths and mishaps because the technology is not truly aware and cannot handle every permutation that is put in front of it[2].

Worse, if one were to hack or program these technologies to disregard safety heuristics, a very lethal outcome is possible. Because today's A.I. is not fully aware and cannot determine right from wrong, these technologies are open to abuse, and fears of this happening with devices like Alexa and others are well founded[3]. In one recent case, a baby was put in danger after a Nest device was hacked through poor passwords and the temperature in the room was set above 90 degrees. In another recent instance, an Internet of Things device was hacked in much the same way and used to scare the inhabitants of the home with an alert that North Korea had launched nuclear missiles at the U.S.
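Neither incident required anything resembling intelligence on the attacker's side. As a rough sketch of why weak or default passwords make such devices easy prey, consider the following few lines of Python; the device address and credential list are purely hypothetical, and the example assumes a device that exposes plain HTTP Basic authentication:

```python
# Minimal sketch of a default-credential check against a hypothetical
# IoT device using HTTP Basic authentication. The address and the
# credential list below are illustrative only.
import requests

TARGET = "http://192.168.1.50/login"  # hypothetical device address
DEFAULT_CREDS = [
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "12345"),
]

for user, password in DEFAULT_CREDS:
    try:
        resp = requests.get(TARGET, auth=(user, password), timeout=5)
    except requests.RequestException:
        continue  # device unreachable or connection refused
    if resp.status_code == 200:
        print(f"Default credentials accepted: {user}:{password}")
        break
```

The point is not the specific code but the scale of effort involved: a loop over a short list of factory defaults is often all it takes.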

Both of the previous cases were low-level attacks on semi-dumb devices. Now imagine one of these devices with access to networked weapons systems that perhaps have a weakness that could be subverted[4]. In another scenario, A.I. programs like those discussed for cyber warfare could be copied or subverted and unleashed not only by nation-state actors but by a smart teen or a group of criminals for their own ends. Such programs are a thing of the near future, but for an analogy, look at open-source hacking tools and platforms like Metasploit, which ship with automated scripts and are now used by adversaries as well as our own forces, as illustrated below.
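To make the analogy concrete without reproducing any Metasploit code, here is a generic sketch, using only the Python standard library, of the kind of automated reconnaissance loop such frameworks wrap in far more capable modules; the address range and port list are hypothetical:

```python
# Generic sketch of automated reconnaissance: sweep a hypothetical
# address range for a few common service ports. Frameworks like
# Metasploit automate this and far more; the point is how little
# effort the automation itself takes.
import socket

HOSTS = [f"10.0.0.{i}" for i in range(1, 11)]   # hypothetical range
PORTS = [22, 80, 443, 3389]                     # common services

for host in HOSTS:
    for port in PORTS:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(0.5)
        try:
            if sock.connect_ex((host, port)) == 0:
                print(f"{host}:{port} is open")
        finally:
            sock.close()
```

Once loops like this are chained to exploit and payload modules, the operator's role shrinks to pointing the system at a target, which is precisely the step an A.I.-driven campaign would remove.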

Hackers and crackers have already begun using A.I. technologies in their attacks, and as the technology becomes more stable and accessible, whole campaigns will be carried out by automated systems attacking targets all over the world[5]. This automation will create attribution problems at the nation-state level: who set such a system upon the victim? How will attribution work when the system doing the attacking is self-sufficient and perhaps not under the control of anyone?

Finally, the trope of a true A.I. that goes rogue is not just a trope. It is entirely possible that a truly sentient program or system might consider humans an impediment to its own existence and attempt to eradicate us from whatever it can reach. That is a distant possibility, but let us leave you with one thought: in the last presidential election and the 2020 election cycle to come, automated and A.I. systems have been and will be deployed to game social media and perhaps election systems themselves. This technology is not just a far-flung possibility; rudimentary systems already exist and are in use.

The only difference between now and tomorrow is that, at the moment, people are pointing these technologies at the problems they want to solve. In the future, the A.I. may be the one choosing the problem in need of solving, and that choice may not be in our favor.
