
The Dangers of Weaponizing Artificial Intelligence

Matthew Kovacev

AI technology has brought many benefits to the world, but could weaponizing AI be a mistake?

Recent years have seen many advancements in AI technology, especially in the military. While most of these advancements have come in reconnaissance and defense, AI is also being put to offensive use. This raises serious ethical concerns because of the technology's limitations, particularly its weak grasp of context and its difficulty distinguishing correlation from causation. The same limitations also cast doubt on AI's effectiveness. These dilemmas have made the integration of AI into the military a source of considerable skepticism and hesitancy, and with good reason: AI's limitations and propensity for error carry a potentially massive financial and human cost.

As it exists today, AI is far from perfect. The technology remains narrowly focused and often struggles to detect context. An AI-controlled missile defense system, for example, would have trouble differentiating between a conventional missile and one carrying a nuclear warhead, and an AI-controlled robot dog on the battlefield would be unable to tell a soldier from a civilian. An error by such systems can have devastating consequences.

AI also has no sense of emotion or morality outside of its programming. It has no knowledge of international laws or customs; it acts only on what it was programmed to do. A human soldier knows that killing civilians is a war crime and therefore morally wrong. An AI-controlled drone can make no such judgment. That is why drone strikes typically require human intervention to identify targets and reduce the risk of collateral damage. The risk of unnecessary death and destruction is already significant with humans in control; adding AI would only exacerbate it.

It is also remarkably easy to fool today's AI. These systems recognize basic objects, such as a stop sign, by matching learned patterns, and a sticker printed with the right pattern can trick them into seeing something else entirely. On the battlefield, an enemy could exploit this by placing such stickers on military vehicles so that an AI system misclassifies them as ordinary cars or trucks; the sketch below illustrates how small, deliberate changes to an image can flip a classifier's prediction. A technology this easy to fool is both ineffective and ripe for abuse, which is why putting it on the battlefield may be a mistake.
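For a concrete sense of the mechanism, here is a minimal sketch of the fast gradient sign method (FGSM), one well-documented way such adversarial inputs are crafted. The tiny untrained network and random "image" below are hypothetical stand-ins for illustration only; against real trained classifiers, perturbations built this way have been shown to change the predicted label while remaining nearly invisible to humans.

```python
import torch
import torch.nn as nn

# A toy image classifier standing in for a real vision model.
# It is untrained and exists purely to illustrate the attack mechanics.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),  # 10 hypothetical object classes
)
model.eval()

loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in "photo"
true_label = torch.tensor([3])                        # e.g. "stop sign"

# Forward pass, then take the gradient of the loss with respect
# to the *input pixels* rather than the model weights.
loss = loss_fn(model(image), true_label)
loss.backward()

# FGSM: nudge every pixel slightly in the direction that increases
# the loss. A small epsilon keeps the change barely visible to a human.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

with torch.no_grad():
    print("original prediction:   ", model(image).argmax(dim=1).item())
    print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

The key point is that the perturbation is computed from the model's own gradients, not from any physical tampering with the object itself; printing such a pattern on a sticker is what makes the attack practical in the real world.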

Because of these limitations, military applications of AI today mostly lie outside of direct combat; a soldier fighting now is unlikely to face AI-controlled robots or drones. But with the U.S., China, and Russia all developing and investing in the technology, that may soon change. Russia in particular has proven unpredictable in recent months, raising the real prospect that Moscow could abuse such technology in horrific ways.

AI is a promising technology with a bright outlook. It is versatile and has potential applications in many fields, including the military. But given its present flaws and limitations, deploying it in combat would be catastrophic. It could still yield excellent results in defense and reconnaissance, provided it is paired with human oversight: as long as humans have the final say, there is less room for error and more room for improvement. If and when these limitations are overcome, we may see more effective, and more ethical, AI both on and off the battlefield.