
Autonomous weapons systems present numerous risks to humanity, most of which infringe on fundamental obligations and principles of international human rights law. Such systems select and engage targets based on sensor processing rather than human inputs. The threats they pose are far-reaching because of their expected use in law enforcement operations as well as during armed conflict. Given that international human rights law applies during both peacetime and wartime, it covers all circumstances relevant to the use and development of autonomous weapons systems.
This report examines how autonomous weapons systems contravene different human rights obligations and principles. It builds on the 2014 publication by Human Rights Watch and Harvard Law School’s International Human Rights Clinic (IHRC) entitled Shaking the Foundations: The Human Rights Implications of Killer Robots and expands upon it to address three additional rights obligations and principles.[1]
Human Rights Watch is a co-founder of Stop Killer Robots, a campaign of 270 civil society organizations. Together with IHRC, they are working for a new international treaty that ensures meaningful human control over the use of force and avoids digital dehumanization. Such an instrument should prohibit autonomous weapons systems that inherently operate without meaningful human control or that target people. Regulations should ensure that all autonomous weapons systems not covered by the prohibitions operate only with meaningful human control.
States and other stakeholders have examined the challenges raised by autonomous weapons systems and ways to address them for more than a decade. They have primarily approached this topic through an international humanitarian law lens because discussions have taken place in meetings of the Convention on Conventional Weapons (CCW). Nevertheless, participants in that forum and others have recognized the applicability of international human rights law and expressed concerns that the use of autonomous weapons systems may violate it. This report aims to provide a much more in-depth analysis of the issue.
The development and use of autonomous weapons systems implicate at least six core obligations and principles of international human rights law:
- Right to Life: The right not to be arbitrarily deprived of life requires that the use of force be necessary to achieve a legitimate aim and be applied in a proportionate manner. The right also requires that lethal force in particular may only be used as a last resort to protect human life. Autonomous weapons systems would face serious difficulties in meeting this three-part test. Obstacles include shortcomings in current technology as well as deeper technical limitations suggesting that automated systems would never be able to approximate, let alone surpass, distinctly human abilities at certain types of tasks. Autonomous weapons systems could not identify subtle cues of human behavior to interpret the necessity of an attack, would lack the human judgment to weigh proportionality, and could not communicate effectively with an individual to defuse a situation and ensure that lethal force is a last resort. As a result, their use of force would be arbitrary and unlawful.
In situations of armed conflict, international humanitarian law’s rules of distinction and proportionality can be used to determine what is “arbitrary” under the right to life. Although the specific rules differ, autonomous weapons systems would face challenges in complying with international humanitarian law’s rules on the use of force in armed conflict similar to those they would face under international human rights law’s rules in peacetime.
- Right to Peaceful Assembly: The right to peaceful assembly, which is particularly relevant to the use of force in law enforcement situations, is essential to democracy and the enjoyment of other human rights. The use of autonomous weapons systems would be incompatible with this right. The systems, which would lack human judgment and could not be pre-programmed or trained to address every situation, would struggle to draw the line between peaceful and violent protesters. Force may only be used in exceptional circumstances to disperse assemblies that are unlawful or violent, and autonomous weapons systems, which apply force by definition, would be unlikely to have the capability to assess accurately when and how much force is appropriate. Finally, the use or threat of use of autonomous weapons systems, especially in the hands of abusive governments, could instill fear in protesters and thus have a chilling effect on free expression and peaceful assembly.
- Human Dignity: The principle of human dignity underlies all human rights, including the right to life, and establishes that people have inherent worth that is both universal and inviolable. Autonomous weapons systems would contravene this foundational principle through their process of making life-and-death determinations. Because they are not living beings, these machines would kill without the uniquely human capacity to understand or respect the true value of a human life. Furthermore, they would instrumentalize and dehumanize their targets by relying on algorithms that reduce people to data points.
- Non-discrimination: The principle of non-discrimination calls for the protection and promotion of human rights for all people, irrespective of race, sex and gender, ability, or other status under the law. Autonomous weapons systems would likely be discriminatory for multiple reasons. For example, biases of developers, including in their programming or choice of training data, could influence a system’s design and later decision-making. Once an autonomous weapon system using artificial intelligence (AI) is deployed, insufficient understanding of how and why the system makes determinations could prevent a human operator from scrutinizing recommended targets and intervening to correct errors before force is applied. As shown by other AI technology, algorithmic bias can disproportionately and negatively affect already marginalized groups. This report explores the potential differentiated effects of autonomous weapons systems on people of color, men and women, and persons with disabilities.
- Right to Privacy: The right to privacy protects people from unlawful or arbitrary interference in their personal lives, and it is implicated from the very start of an autonomous weapon system’s lifecycle. The development and use of autonomous weapons systems could violate this right because, if the systems or any of their components are based on AI technology, their development, testing, training, and use would likely require mass surveillance. To avoid being arbitrary, such data-gathering practices must be both necessary for reaching a legitimate aim and proportionate to the end sought. Mass surveillance fails both of these requirements.
- Right to Remedy: The right to remedy, triggered at the end of an autonomous weapon system’s lifecycle, obligates states to prosecute gross violations of international human rights law and serious violations of international humanitarian law and to provide several forms of reparations. There are obstacles to holding individual operators criminally liable for the unpredictable actions of a machine they cannot understand, in particular because autonomous weapons systems that rely on AI may make determinations through opaque, “black box” processes. There are also legal challenges to holding programmers and developers responsible under civil law. The use of autonomous weapons systems would thus create an accountability gap.
Human actors, whether soldiers on the battlefield or police officers responding to law enforcement situations, also violate these human rights, sometimes egregiously. However, unlike autonomous weapons systems, for which many of the concerns raised in this report are intrinsic and immutable, people can and do uphold the rights of others every day. People can also face, understand, and abide by the consequences of their actions when they do not. Machines can do none of these things.
The infringements on these six human rights obligations and principles exemplify the range of problems raised by autonomous weapons systems in both armed conflict and law enforcement operations. The first two are particularly relevant to the systems’ use of force (life and peaceful assembly). The next two relate to foundational, cross-cutting principles (dignity and non-discrimination). The final two show that infringements arise at different stages of the systems’ lifecycle, including development (privacy) and after an attack (remedy). International human rights law is not the only way to frame the concerns with autonomous weapons systems, which also present ethical, security, international humanitarian law, and other threats, but human rights is a critical lens through which to examine this rapidly emerging technology.