STOCKHOLM (Chatnewstv.com) — Bias in military artificial intelligence systems could lead to deadly mistakes and violations of international humanitarian law, according to a study released Tuesday by the Stockholm International Peace Research Institute (SIPRI).
The report, “Bias in Military Artificial Intelligence and Compliance with International Humanitarian Law,” found that demographic bias around “gender, ethnicity, disability status, age, socio-economic class, language and culture” can creep into AI-enabled autonomous weapons and targeting systems, often with devastating consequences.
“Bias in military AI cannot be entirely removed, but its consequences can be mitigated,” the study’s authors, Laura Bruun and Marta Bo, wrote.
“Misidentifications stemming from bias could lead to violations of the principle of distinction,” they added, referring to the cornerstone of international humanitarian law that requires militaries to distinguish between combatants and civilians.
States involved in policy debates on military artificial intelligence are increasingly expressing concern about bias, the report said, though “these concerns are rarely discussed in depth, much less from a legal lens.” Drawing on insights from an expert workshop in Stockholm, the study unpacks what bias in military AI means and where it comes from, then examines its implications for compliance with the laws of war. It focuses on AI-enabled autonomous weapons and decision-support systems before outlining technical, operational and institutional measures to address bias and strengthen compliance.
The report warned that biased AI could misclassify civilians or protected sites as military targets, or fail to identify vulnerable groups such as people with disabilities, exposing them to greater harm. Such errors, it said, risk breaching the legal principles of proportionality and precautions in attack.
Dan Smith, SIPRI’s director, said the findings highlight a pressing need for safeguards. “Bias in these systems is not just a technical flaw — it carries humanitarian consequences that can undermine the laws of war,” he said.
While acknowledging that no AI system will ever be entirely free of bias, the report urged governments to take steps such as using more representative data, improving transparency and maintaining human oversight in targeting decisions. It also called on states to develop national expertise to assess bias when procuring military AI and to clarify what international law requires for addressing the problem.
“The lawful use of military AI depends on recognizing that bias is not an abstract concern,” Bruun said. “It’s about life-and-death decisions on the battlefield.”
The 70-page study, published in August, includes sections on the characterization of bias, its legal implications, mitigation measures and key recommendations.