
Implications of AI Use in Militaries

The field of artificial intelligence continues to expand, and its applications now touch virtually every sector of society. The power of AI is evident in healthcare, where it aids in diagnosing diseases with greater accuracy and in personalizing treatment plans. In customer support, AI-driven chatbots provide round-the-clock assistance, enhancing user experience and operational efficiency. The military sector is also adopting AI for defense and strategic operations, potentially redefining modern warfare. Yet, as we marvel at the many ways AI can be harnessed, we must also consider the full scope of its impact. The rapid deployment of AI technologies brings critical ethical considerations and potential risks.


AI in Militaries

In the realm of military applications, the integration of AI raises many questions. The idea of algorithms having the capacity to influence or determine life-and-death decisions causes significant unease and controversy. The autonomy of AI in critical decision-making scenarios, such as the identification and engagement of targets, necessitates rigorous examination of the moral and legal implications. The use of AI in military contexts is not just about efficiency or technological superiority; it is also about the values and principles that govern the use of force. The delegation of lethal decision-making to machines challenges long-standing norms of warfare. The concept of an algorithm determining the fate of human beings without direct human oversight is at odds with the principles of human rights and international humanitarian law, which emphasize the importance of human judgment, especially in matters of life and death. Additionally, the potential for AI systems to act on incomplete or biased data sets introduces the risk of unintended consequences, including civilian casualties and escalations of conflict.


Dangers of Singularity

The reliability of AI in combat environments is also a concern, as these systems may not be able to fully comprehend the nuances and unpredictability of human behavior or the ethical considerations of the battlefield. There is also the possibility of a complete loss of human control over AI systems, a scenario often referred to as the "singularity." While some argue that AI will always remain under human control, others worry that advanced AI systems could become too complex to predict or manage, leading to scenarios where machines operate beyond the intended constraints set by their creators.[1] For example, the use of Uninhabited Combat Aerial Vehicles (UCAVs) in Australia has raised questions about the ability of these AI-controlled vehicles to perform missions currently carried out by manned aircraft.[2] The enthusiasm for UCAVs is largely driven by budgetary pressures and the perceived lower cost of ownership and operation. However, the technology faces significant obstacles, including the challenge of replicating the situational awareness and decision-making capabilities of human pilots.


Bias in AI Use in Militaries

The issue of bias is one of the main concerns when considering the application of AI in military contexts. This concern arises from the potential for AI systems to either mitigate or exacerbate human biases in decision-making processes. On one hand, AI has the potential to significantly reduce human bias by employing statistical, data-driven decision-making processes. This is a significant advantage, as human decision-making, particularly in high-pressure situations such as those often encountered in military environments, can be influenced by a variety of factors that lead to biased outcomes. For example, a military commander might be influenced by past experiences of conflict, leading to decisions that are overly cautious or, conversely, overly aggressive. Similarly, emotions such as fear or anger can cloud judgment and lead to decisions that are not based on a clear and objective assessment of the situation.

AI, with its ability to process vast amounts of information rapidly and without emotional influence, can help mitigate these human biases. It can provide a more objective analysis of the situation, based on a wide range of data points, and can do so far more quickly than a human could. This helps ensure that decisions are based on the most accurate and up-to-date information available, rather than being shaped by individual biases or emotions.


While AI can potentially help reduce human bias, it is not immune to bias itself. AI systems are trained on large datasets, and if these datasets contain biased information, the AI system can learn and replicate those biases. This is a significant concern, as biased AI systems can lead to discriminatory outcomes. An AI system trained on historical military data might learn to associate certain countries or groups with hostility, leading to biased decisions in future conflicts.
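To make this mechanism concrete, the sketch below trains a simple classifier on entirely synthetic data in which a "region" flag has no real connection to hostility, but the historical labels over-report hostility for one region. The dataset, features, and model here are hypothetical illustrations, not a depiction of any real military system.

```python
# Minimal sketch of how dataset bias propagates into a trained model.
# All data is synthetic and hypothetical; it illustrates the mechanism only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two features: a genuinely informative "observed behavior" score, and a
# "region" flag with no causal link to hostility.
behavior = rng.normal(size=n)
region = rng.integers(0, 2, size=n)

# True hostility depends only on behavior...
hostile = (behavior + rng.normal(scale=0.5, size=n)) > 1.0

# ...but suppose historical labeling over-reported hostility in region 1,
# encoding past human bias into the training labels.
biased_labels = hostile | ((region == 1) & (rng.random(n) < 0.15))

X = np.column_stack([behavior, region])
model = LogisticRegression().fit(X, biased_labels)

# The model assigns real weight to "region", replicating the label bias.
print(dict(zip(["behavior", "region"], model.coef_[0].round(2))))
```

Because the biased labels are all the model ever sees, it learns a nonzero weight on the region flag, and the bias baked into the historical record becomes bias in every future prediction.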


Transforming Military Operations

AI can take on military tasks that pose significant risks to human life. For instance, AI can be integrated into autonomous vehicles and drones, reducing the need for human presence in hazardous environments. This application of AI could significantly decrease the number of human casualties in conflict zones, as machines, rather than humans, would be exposed to the direct line of fire. However, this technological advancement raises critical questions about the future of warfare. Could we be heading towards a scenario where a handful of individuals, safely lodged in buildings around the world, have the power to cause mass destruction at the mere press of a button? This hypothetical underscores the potential for AI to drastically alter the way we conduct military operations.

The use of AI in military operations could lead to a new form of warfare, where battles are fought remotely and decisions are made by algorithms. This could produce a significant shift in military strategy, as traditional concepts of frontline and rear areas may become obsolete. Such a shift would affect not only military tactics but also the ethical and legal frameworks that govern warfare. AI has already been integrated into military operations in various ways. For example, Project Maven, a U.S. Department of Defense initiative, uses AI to interpret video images and provide insights to military analysts, reducing the human workload and increasing decision-making speed.[3][4] AI in military operations could also potentially escalate conflicts: with AI systems capable of executing operations at unprecedented speed and scale, there is a risk that minor skirmishes could quickly escalate into full-blown conflicts.[5]
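For a sense of what the image-interpretation side of such systems involves, the sketch below runs an off-the-shelf object detector over sampled video frames. This is only a generic illustration using a public torchvision model and a hypothetical file name; it is not Project Maven's actual pipeline or model.

```python
# Hedged sketch: automated object detection over video frames, the general
# kind of task Project Maven reportedly applies AI to. The detector is an
# off-the-shelf stand-in, not any military system's real model.
import torch
import torchvision
from torchvision.io import read_video

# Pretrained COCO detector used purely as an illustrative substitute.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# "surveillance_clip.mp4" is a hypothetical placeholder file name.
frames, _, _ = read_video("surveillance_clip.mp4", pts_unit="sec")

with torch.no_grad():
    for frame in frames[::30]:  # sample roughly one frame per second at 30 fps
        image = frame.permute(2, 0, 1).float() / 255.0  # HWC uint8 -> CHW float
        detections = model([image])[0]
        keep = detections["scores"] > 0.8  # surface only confident detections
        print(detections["labels"][keep].tolist())
```

Even this simplified pipeline makes clear where human judgment enters: the confidence threshold, the frame-sampling rate, and what is done with each detection are all choices made by people, not by the model.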


Who is Responsible?

The issue of accountability for artificial intelligence systems in military use is a difficult one. If an AI system makes a critical error, the question of who bears responsibility is not straightforward, particularly in high-stakes scenarios where the consequences of such errors can be severe. For instance, consider an AI-powered weapons system that mistakenly identifies civilians as targets. The repercussions of such a mistake are grave, raising urgent questions about accountability.

The responsibility could potentially lie with various parties. One could argue that the creators of the model bear the responsibility, because they designed and built the system, and any flaws in its operation could be traced back to errors in its design or implementation. However, this perspective assumes that the creators have full control over how their model is used, which may not always be the case. Alternatively, the responsibility could fall on the military operators who deploy the AI system. They are the ones who choose to use the system in a real-world context, and they are presumably aware of the potential risks. However, this perspective assumes that the operators have a deep understanding of the AI system's inner workings, which may not be the case given the complexity of these systems. The lack of a clear answer underscores the need for robust legal and ethical frameworks to guide the use of AI systems in militaries. These frameworks should clearly define the responsibilities of all parties involved and provide mechanisms for holding them accountable.







[2] Laird, R. (2020, September 10). The Australian Army Pursues Un-crewed Armored Vehicles. Defense.info. https://defense.info/multi-domain-dynamics/2020/09/the-australian-army-pursues-un-crewed-armored-vehicles/


[3] Frisk, A. (2018, April 5). What is Project Maven? The Pentagon AI project Google employees want out of. Global News. https://globalnews.ca/news/4125382/google-pentagon-ai-project-maven/


[4] Botezatu, U.-E. (2023). AI-centric secure outer space operations. Bulletin of "Carol I" National Defence University, 12(3), 205–221. https://doi.org/10.53477/2284-9378-23-44


[5] Scharre, P. (2023, February 28). “Hyperwar”: How AI could cause wars to spiral out of human control. Big Think. https://bigthink.com/the-future/hyperwar-ai-military-warfare/




