Autonomous Weapons Systems: Navigating Ethics in the Age of AI-Driven Warfare
The rise of autonomous weapons systems (AWS) has ignited a profound debate within the realms of ethics, international law, and public policy. Autonomous weapons, also known as lethal autonomous weapons systems (LAWS), are machines capable of identifying, selecting, and engaging targets without human intervention. These systems rely on advanced algorithms and machine learning models, often neural networks and deep learning, to process sensor data, recognize patterns, and make decisions in real time. For example, they may fuse real-time data from multiple sensors, such as radar and infrared imaging, to classify objects as threats or non-threats and to prioritize targets with minimal latency, refining their decision-making against vast training datasets. The potential deployment of these systems raises questions about accountability, morality, and the future of warfare. This essay explores the ethical implications of AWS, drawing on examples of existing technologies and the insights of influential ethicists.
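To make this classification step concrete, the following sketch shows, in illustrative Python, how a system of this kind might fuse multiple sensor readings into a single label. Every name, weight, and threshold here is a hypothetical assumption for exposition, not a description of any fielded system.

    from dataclasses import dataclass

    # Hypothetical sketch only: these names, weights, and thresholds
    # do not describe any real weapons system.
    @dataclass
    class SensorReading:
        source: str          # e.g. "radar" or "infrared"
        threat_score: float  # model-estimated probability of hostility, 0..1
        reliability: float   # trust placed in this sensor right now, 0..1

    def classify_contact(readings: list[SensorReading], threshold: float = 0.9) -> str:
        """Fuse per-sensor threat estimates into one label.

        A reliability-weighted average stands in for the far more complex
        fusion used in practice; the point is that 'identification' reduces
        to arithmetic over model outputs, with no moral judgment involved.
        """
        if not readings:
            return "no-data"
        total_weight = sum(r.reliability for r in readings)
        fused = sum(r.threat_score * r.reliability for r in readings) / total_weight
        return "threat" if fused >= threshold else "non-threat"

    # Two sensors disagree; the weighted average decides.
    contacts = [
        SensorReading("radar", threat_score=0.95, reliability=0.8),
        SensorReading("infrared", threat_score=0.40, reliability=0.6),
    ]
    print(classify_contact(contacts))  # -> "non-threat" (fused score of about 0.71)

Note that the ethically loaded choices, the threshold and the sensor weights, are fixed by designers long before any encounter, which is precisely why questions of accountability arise downstream.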
Understanding Autonomous Weapons Systems
Autonomous weapons systems encompass a range of technologies designed to operate independently, leveraging advancements in artificial intelligence (AI), robotics, and machine learning. Unlike traditional weapons that require human operators, AWS are programmed to analyze their environment, make decisions, and execute actions.
Examples of such systems include:
Loitering Munitions: Drones like the Israeli Harop and the U.S. Switchblade can hover over a battlefield and autonomously attack targets when certain conditions are met, such as the detection of a pre-defined enemy vehicle or the confirmation of a hostile threat through multiple sensor inputs (see the sketch following this list). These determinations raise ethical concerns about reliability: misclassifications, caused by biases in training data or limitations in sensor accuracy, could lead to catastrophic consequences, such as strikes on civilians or neutral assets. Ethicist Ronald Arkin has argued that properly designed autonomous systems could reduce such errors compared to humans, since they lack the fear and anger that often exacerbate human mistakes. Yet the opaque nature of machine decision-making complicates accountability and trust in these systems.
Unmanned Ground Vehicles (UGVs): Systems like Russia's Uran-9 are equipped with weaponry and can navigate combat zones autonomously.
Naval Systems: The U.S. Navy's Sea Hunter, an autonomous vessel, can patrol waters and potentially engage in combat without direct human control.
AI-Powered Targeting Systems: Platforms that use facial recognition and pattern analysis to identify and neutralize threats autonomously.
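The reliability worry raised in the loitering-munitions example above can likewise be sketched in code. Assuming, purely for illustration, an engagement rule that requires agreement from multiple sensors, the hypothetical Python below shows how such a gate works and why correlated errors defeat it; none of the names or numbers reflect the actual Harop or Switchblade logic.

    # Hypothetical multi-sensor engagement gate, for illustration only.
    def confirmations(detections: dict[str, float], min_confidence: float) -> int:
        """Count sensors whose match confidence clears the bar."""
        return sum(1 for c in detections.values() if c >= min_confidence)

    def may_engage(detections: dict[str, float],
                   min_confidence: float = 0.85,
                   required_confirmations: int = 2) -> bool:
        """Permit engagement only with multi-sensor confirmation.

        Note what this gate cannot express: surrender, human shields, or a
        biased training set. If the underlying classifiers share a flaw,
        every sensor can 'confirm' the same mistake.
        """
        return confirmations(detections, min_confidence) >= required_confirmations

    # A civilian vehicle misread by two correlated sensors still passes:
    detections = {"radar": 0.91, "infrared": 0.88, "visual": 0.42}
    print(may_engage(detections))  # -> True, whether or not the target is hostile

The design choice of demanding two confirmations looks conservative, yet it offers no protection when the sensors' errors are correlated, which is exactly the bias scenario described above.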
While these systems promise operational efficiency and reduced human casualties among combatants, they also present ethical challenges that warrant scrutiny.
Ethical Concerns Surrounding AWS
Accountability and Responsibility
One of the central ethical dilemmas surrounding AWS is the question of accountability. For example, a 2021 United Nations report described a Turkish-made Kargu-2 drone that may have engaged targets autonomously in Libya in 2020, raising questions about who should bear responsibility for its actions: the manufacturer, the deploying state, or the commanding officers. Hypothetical scenarios, such as an AWS misidentifying civilians as combatants because of flawed algorithms, underline the same difficulty of attributing blame: if an autonomous system mistakenly targets civilians or violates the laws of war, who is responsible? As philosopher Peter Asaro notes, "Autonomous weapons shift the moral burden of decision-making from human beings to machines, creating a dangerous gap in accountability."
Traditional frameworks for military responsibility rely on human agents—soldiers, commanders, and policymakers. AWS disrupt this chain, leading to potential scenarios where no clear entity is held accountable. This undermines fundamental principles of justice and deterrence, as articulated by Michael Walzer in Just and Unjust Wars: "The morality of war demands that responsibility for violence must always lie with those who wield it."
The Principle of Discrimination
A core tenet of just war theory is the principle of discrimination, which requires distinguishing between combatants and non-combatants. AWS, however, rely on algorithms and sensors to make these distinctions, raising concerns about reliability and bias.
Ethicist Shannon Vallor has argued that "the lack of moral intuition in machines makes them inherently incapable of exercising the discernment required to respect human dignity in complex scenarios." By "moral intuition," Vallor means the human capacity for empathy, contextual judgment, and cultural sensitivity, which machines lack. An AWS might misread a soldier's act of surrender, or fail to weigh the moral cost of collateral damage in a densely populated area; even the most advanced AI systems struggle with such nuanced judgments. Nick Bostrom adds a further warning about the unintended consequences of AI decision-making: "The challenge is not only designing systems that work as intended but ensuring that unintended actions do not lead to catastrophic outcomes." The alleged Kargu-2 incident in Libya, discussed above, illustrates what can happen when machines act without the ethical discernment and contextual judgment inherent in human decision-making.
The Risk of Proliferation and Escalation
The deployment of AWS could lower the threshold for entering conflicts. States may be more inclined to engage in warfare if their personnel are not at risk. Moreover, the proliferation of AWS to non-state actors or rogue regimes could destabilize global security.
Immanuel Kant’s categorical imperative provides a valuable lens here: "Act only according to that maxim whereby you can at the same time will that it should become a universal law." Consider an AWS that decides to engage on the basis of incomplete or biased data. Accepting such actions risks establishing a universal norm in which unreliable autonomous decisions are tolerated in lethal operations, undermining both justice and human dignity. The unregulated spread of AWS compounds this danger, pointing toward a world in which autonomous killing machines become routine.
Dehumanization of Warfare
The use of AWS risks further detaching humanity from the consequences of war. As Hannah Arendt warned in The Human Condition: "The banality of evil lies in the inability to think from the standpoint of others." AWS, devoid of empathy and moral reasoning, exacerbate this detachment, reducing life-and-death decisions to algorithmic computations.
Ethical Arguments in Favor of AWS
Proponents of AWS often argue that these systems can adhere to the laws of armed conflict more consistently than humans. Machines do not act out of fear, anger, or fatigue, which are common sources of human error in warfare; a well-designed AWS, on this view, could minimize collateral damage and civilian casualties. Critics like Wendell Wallach counter that the absence of human emotions also means a lack of empathy, a crucial element in ethical decision-making during warfare. This deficiency could lead to dehumanized conflict, in which the moral weight of life-and-death decisions is reduced to mere algorithmic calculation.
John Stuart Mill’s utilitarian philosophy offers a framework for this argument: "The right action is the one that produces the greatest happiness for the greatest number." By reducing harm to soldiers and civilians, AWS could theoretically align with utilitarian principles. Yet utilitarianism faces limitations in this context: the long-term consequences of deployment, such as eroded accountability, rapid escalation of conflicts, and the normalization of autonomous killing, could outweigh any short-term benefits. A more nuanced evaluation must therefore ask whether AWS truly maximize happiness or simply redistribute harm in less visible ways, and the proponents' case assumes a level of perfection in machine ethics that remains elusive.
Regulatory and Philosophical Considerations
To address the ethical concerns of AWS, several frameworks and regulations have been proposed. The United Nations Convention on Certain Conventional Weapons (CCW) has hosted discussions on banning or regulating AWS, emphasizing the importance of maintaining "meaningful human control."
Philosopher Philippa Foot’s trolley problem is often invoked in this context: should a machine sacrifice one person to save many? The thought experiment applies directly to AWS, whose programming must account for trade-offs between human lives. An autonomous drone may face a situation in which some collateral damage is unavoidable if an imminent threat is to be neutralized. Unlike humans, who can weigh the broader ethical and emotional dimensions, AWS rely on predefined algorithms that may not capture the complexity of such decisions, raising the question of whether their actions can reflect moral judgment at all, and exposing the inadequacy of binary logic in ethical decision-making.
Martha Nussbaum’s capabilities approach underscores the need to prioritize human dignity and flourishing over technological expediency. Nussbaum writes, "Ethical action must enhance the capabilities that enable people to live fully human lives." AWS, with their potential for indiscriminate harm, may hinder rather than enhance these capabilities.
Conclusion
Autonomous weapons systems represent a paradigm shift in modern warfare, offering both opportunities and profound ethical challenges. As technologies advance, the imperative to address these challenges becomes more urgent. Drawing on the insights of ethicists like Kant, Walzer, and Vallor, society must grapple with questions of accountability, discrimination, and the dehumanization of conflict.
A path forward may lie in a combination of regulation, technological safeguards, and international cooperation. Ensuring meaningful human control and prioritizing ethical principles over strategic expediency are essential steps toward a future where the use of AWS aligns with humanity’s moral values.
As Arendt reminds us, "The task of thinking is not to unearth definitive answers but to remain vigilant against the forces that strip us of our humanity." In confronting the ethical challenges of autonomous weapons systems, we face a defining test of our values as a global society. Will we harness technology to uphold justice and human dignity, or will we allow it to redefine warfare in ways that compromise the moral principles we hold dear? The choices made today will echo through history, shaping not only the conduct of war but the essence of humanity itself.
(The author received a B.A. degree in Philosophy from The Johns Hopkins University.)





The limitations of algorithmic systems, especially when misapplied to vitally complex, real-world aspects of human life, have already been made manifest in the distortion of American elections. Those distortions were engineered through the immoral exploitation of technological tools to disseminate and weaponize lies and systematic disinformation, manipulating voters individually and at scale.
If we want morality anywhere in our world, to any extent, we must rely on living, breathing people. Even humans struggle to attain it; machines never will, nor can they.
The late Rabbi Lord Jonathan Sacks, a member of the House of Lords and a prolific writer, addressed this in his writing on morality, lamenting the erosion of shared ethics in contemporary life. In a world of random murders and an ever-growing class of artificial intelligence weapons, I worry about future generations.