AI And The Future Of Warfare: The Ethical Implications Of Using AI In Warfare

Global military infrastructure is undergoing rapid technological change. When the President of the United States, Joe Biden, said,

“We’re going to see more technological change in the next 10 years than we saw in the last 50.”

He was stating the obvious. Artificial intelligence is developing quickly, and its military use has significantly altered how wars are fought. What was once thought of as science fiction is now an unfolding reality. This technology is changing everything, from drones and autonomous armaments to decision-making algorithms and surveillance systems.

Kai-Fu Lee, former president of Google China and CEO of Sinovation Ventures, said in his book “AI 2041: Ten Visions for Our Future,”

“Autonomous weaponry is the third revolution in warfare, following gunpowder and nuclear arms.”

In his words,

“The evolution from land mines to guided missiles was just a prelude to true AI-enabled autonomy—the full engagement of killing: searching for, deciding to engage, and obliterating another human life, completely without human involvement.”

While the benefits of AI in warfare are undeniable, there are also ethical concerns and questions about the impact of this technology on the future of military conflicts. But first, a look at the progress so far.

Progress in AI Warfare

According to a Congressional Research Service (CRS) report, the United States and its rivals China and Russia are all integrating narrow AI into several military applications. These include command and control, logistics, cyber operations, intelligence, surveillance, and reconnaissance, as well as semi-autonomous and autonomous vehicles.

Presently, one of the most significant advancements in AI warfare is the use of unmanned aerial vehicles (UAVs), often referred to as drones. These vehicles have been used extensively in military operations for reconnaissance, intelligence gathering, and targeted strikes.

During the most recent escalation of the Nagorno-Karabakh conflict in 2020, AI technology played a more prominent role: the fleet of UAVs employed in the fighting featured greater autonomy and enhanced surveillance capabilities. The May 2021 conflict between Israel and Hamas, known as “Operation Guardian of the Walls,” is another example of AI applied in warfare. The Israeli military used machine learning and supercomputing to target Hamas’s rocket launchers and tunnels, and the operation has been referred to as the first artificial intelligence war.

The continuing conflict between Russia and Ukraine has also revealed previously unseen uses of AI-powered drones in combat. While Russia is using AI to improve the effectiveness of its drones and electronic warfare against Ukrainian soldiers, AI is also assisting Ukraine in self-defense, helping it detect enemy activity and enhance its own weaponry. Together, these conflicts offer clear evidence of how battlefields are being transformed by AI-enabled machines rolling off assembly lines around the world.

A few years ago, the Future of Life Institute shared on its website a short dystopian film illustrating what a type of AI-powered drone, dubbed a “slaughterbot,” could do. At the time, these were mere projections. Today, such lethal AI-enhanced autonomous weapons, sometimes referred to as “killer bots,” are in use on the battlefield.

In March 2021, the United Nations documented the first real-world use of such a weapon, in Libya. Later that same year, the first known instance of an AI-controlled drone swarm was observed in Israel: according to a report by New Scientist, the Israel Defense Forces (IDF) employed a swarm of small drones to find, identify, and attack Hamas militants during operations in Gaza in mid-May.

In recent years, drone technology has become more sophisticated, with AI algorithms enabling drones to make decisions based on real-time data analysis. According to Jyoti Sinha, CTO at Omnipresent Robot Technologies,

“In tactical warfare, AI-powered swarm drones can carry out simultaneous destruction and dismantling of multiple targets using precise swarm configurations and hover localization even when less sophistication is available with guided defense equipment…While targeting enemy frontlines, even if some of the drones are attacked, the cognitive AI algorithms kick in and reconfigure the drone network positioning to maintain situational awareness and aid in the completion of the mission.”
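At its core, the reconfiguration behavior Sinha describes can be viewed as an assignment problem: when some drones in a swarm are lost, the survivors are re-matched to the remaining targets so that coverage is maintained. The following is a heavily simplified, hypothetical Python sketch of that idea (the coordinates, swarm size, and brute-force matching are illustrative assumptions; real systems use far more sophisticated distributed planners):

```python
from itertools import permutations

def reassign(drones, targets):
    """Brute-force the drone-to-target assignment that minimizes
    total travel distance (feasible only for small swarms)."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    best, best_cost = None, float("inf")
    # Try every way of picking and ordering len(targets) drones.
    for perm in permutations(drones, len(targets)):
        cost = sum(dist(d, t) for d, t in zip(perm, targets))
        if cost < best_cost:
            best, best_cost = perm, cost
    return dict(zip(targets, best))  # target -> assigned drone

# Hypothetical positions: four drones covering three targets.
drones = [(0, 0), (5, 0), (10, 0), (15, 0)]
targets = [(1, 1), (6, 1), (14, 1)]
plan = reassign(drones, targets)

# The drone at (5, 0) is lost; survivors are re-matched so that
# every target remains covered.
survivors = [d for d in drones if d != (5, 0)]
new_plan = reassign(survivors, targets)
```

In the sketch, losing a drone simply triggers a fresh assignment over the survivors; the nearest remaining drone picks up the orphaned target, which is the essence of the “reconfigure the drone network positioning” behavior described above.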

Examples of these AI-enabled autonomous weapons include the Turkish-manufactured Bayraktar TB2, which has both armed attack and intelligence, surveillance, and reconnaissance (ISR) capabilities, and Kalashnikov’s ZALA Lancet, which is claimed to feature “intelligent detection and recognition of objects by class and type” using computer vision algorithms.

Boeing also unveiled, in 2020, what it called the first “Loyal Wingman” aircraft, designed to fly and fight autonomously alongside crewed aircraft such as fighter jets. Its name comes from the company’s description of it as a “faithful ally.”

Although these developments in the application of artificial intelligence to military technology have their benefits, there are concerns that they might lead to a loss of control over military operations and have unanticipated repercussions.

Ethical Concerns

The rapid progress in military technology prompts ethical questions about its application both in and outside of warfare. For instance, while artificial intelligence can be used in surveillance and intelligence gathering, helping to identify potential threats and prevent attacks, there is also a risk that it could be used to monitor and control civilian populations, leading to violations of human rights and privacy.

One of the biggest concerns is the possibility of AI systems malfunctioning or being compromised, with undesired or even disastrous results. Closely related are questions of accountability, particularly when autonomous weapons must choose whom to attack and when to do so.

In a recent document titled “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” the US State Department called on nations developing AI to employ the technology in military operations in a moral and responsible manner.

The US government maintains that military use of artificial intelligence can and should be ethical, responsible, and enhance global security, and has therefore put out its own framework, inviting other nations to join. On the same day the declaration was released, a “call to action” advocating the responsible military use of AI was signed by more than 60 nations, including the United States’ biggest rival, China.

However, concerns have been raised about how far the statement can go in keeping military technology within the bounds of morality, because it is unclear exactly which autonomous or AI-powered systems the declaration covers. Reuters reported that human rights experts and academics pointed out that the declaration is not legally binding and does not address issues like AI-guided drones, “slaughterbots” that can kill without human involvement, or the possibility that an AI could escalate a military confrontation.

Another concern is the role of humans in decision-making on the battlefield, and the potential for AI to dehumanize warfare and make it more destructive. The idea of machines having the freedom and capability to take human life is morally abhorrent, according to United Nations Secretary-General António Guterres. In a statement published on the United Nations’ website in September 2018, he declared,

“The impacts of new technologies on warfare are a direct threat to our common responsibility to guarantee peace and security.”

Four years later, after the Russian invasion of Ukraine in February 2022, those fears began to materialize. According to an Insider report, the Ukraine crisis has seen more use of AI-enhanced UAVs than any other recent conflict. As it stands, both Russia and Ukraine have been using autonomous technology against each other in the ongoing war.

Russia, for example, has been using “loitering munitions” such as the Russian-made Lancet and the Iranian Shahed-136, also called Geranium-2. Colloquially referred to as “kamikaze” drones, these munitions orbit a specified location until they detect a predetermined target category, then crash into the object, detonating their explosives. Ukraine, on the other hand, has been deploying the Turkish-made Bayraktar TB2, which operates autonomously except when deciding to drop the laser-guided bombs it carries.

Guterres believes the limited monitoring of such munitions poses a great challenge, as it jeopardizes attempts to mitigate threats, prevent escalation, and uphold international humanitarian and human rights law. In an open letter to the United Nations, Tesla CEO Elon Musk and 115 other experts offered suggestions highlighting ethical concerns about the use of these autonomous weapons. According to the letter,

“Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

The Future of Military Technology

In addition to the aforementioned concerns, it is unclear what effect AI technology will have on global politics and geopolitical stability in the near future. However, according to analysts, the development of military technology might trigger a new arms race as governments and military organizations invest extensively in research and development in this field, usually with the goal of gaining a military advantage. Although the US, China, and Russia appear to be leading this race, the future still seems uncertain and complex.

Research by M. L. Cummings concurs that both military and commercial robots will eventually incorporate “artificial intelligence (AI) that could make them capable of undertaking tasks and missions on their own,” but also notes that it will be many years before AI can approximate human intelligence in high-uncertainty situations, owing to the current struggle to imbue computers with true knowledge and expert-based behaviors, as well as limitations in perception sensors.

Speaking at the Global Emerging Technology Summit of the National Security Commission on Artificial Intelligence, US Secretary of Defense Lloyd J. Austin remarked,

“AI is central to our innovation agenda, helping us to compute faster, share better, and leverage other platforms. And that’s fundamental to the fights of the future.”

The US Department of Defense, he stated, will invest an estimated $1.5 billion over the next five years in an effort to accelerate the adoption of AI. Experts, however, believe that the US military is moving slowly, particularly if the aim is to keep up with China, which, according to reports, is pouring more than $1.6 billion into military technology every year. China declared in 2017 that it intended to become the global leader in artificial intelligence by 2030, and it already leads in publications and research patents in the field.

Russia, on the other hand, has declared AI a national priority and has launched several initiatives to develop and deploy AI applications. Russia’s president, Vladimir Putin, said via live video,

“Artificial intelligence is the future, not only for Russia but for all humankind. Whoever becomes the leader in this sphere will become the ruler of the world.”

Although it spends less on defense than the US and China, Russia has set a goal of making 30 percent of its military equipment automated by 2025, as part of a defense modernization program initiated in 2008.

There are different opinions and predictions about who will win the AI arms race, but ultimately, neither that outcome nor the future of AI in warfare is clearly predictable. However, the distinction between science fiction and practical reality may have already begun to fade. Analysts expect artificial intelligence to drastically increase military power, and this will likely shape the nature and dynamics of conflict in the coming years. Whether AI will usher us into an age of utopia or dystopia, only time will tell.