There have been increasing reports on the use of Artificial Intelligence (AI) by the United States and Israel in their ongoing attacks on Iran. Several reports have suggested that to strike around 1,000 targets in the first 24 hours of the attack on Iran, the United States military used Anthropic's AI tool Claude. The tool aided war-planning by optimizing target selection, analyzing intelligence data and issuing precise location coordinates derived from satellite imagery. The use of Claude is part of the Pentagon's Maven Smart System, developed under Project Maven. The Maven system, built by the company Palantir, generates insights from classified data drawn from satellites, surveillance and other intelligence sources to provide real-time targeting options for the ongoing war against Iran. The increasing use of AI "shortens the kill chain" – reducing the time between identifying a target and neutralizing it. This leads to decision compression, in which human actors increasingly rely on algorithmic recommendations rather than independent judgment. In the absence of any binding agreements on the responsible use of military AI, these risks continue to grow.
While AI is increasingly being utilized as a tool for improving precision and operational effectiveness, its use raises serious concerns about accountability and the protection of civilians. Target-identification systems are only as reliable as the data they are trained on, and errors in classification can have catastrophic consequences. For example, the bombing of an Iranian school on the first day of the attack – which resulted in the deaths of almost 150 children – may have been a case of mistaken AI targeting.
Apart from the United States, Israel's military has also deployed the AI system 'Lavender' in its attacks on Gaza. The AI-powered database was used to identify as many as 37,000 potential targets. Despite Lavender having an error rate of 10 percent, it was used to fast-track the identification and targeting of low-level Hamas operatives, risking the lives of thousands of civilians. The growing use of AI for military purposes rekindles long-standing ethical and moral concerns about the use of such technology in warfare.
There is an ongoing debate regarding the responsible use of military AI, yet states have so far failed to take concrete steps to reduce the risks attached to it. The rapid integration of AI into real-world military operations shows that governance efforts are struggling to keep pace with technological adoption. The Pentagon's Project Maven illustrates how military strategists increasingly view AI as essential for maintaining operational superiority.
International Humanitarian Law (IHL) requires that military operations adhere to the principles of distinction, proportionality and precaution. Commanders must distinguish between civilians and combatants and ensure that the anticipated military advantage outweighs potential civilian harm. When AI systems are embedded deeply in the targeting cycle, it becomes increasingly difficult to determine whether these obligations are being fulfilled. If a strike based on algorithmic recommendations leads to civilian casualties, assigning responsibility becomes complicated: the chain of accountability is diffused among commanders, programmers, private technology firms and opaque machine-learning systems.
Another major concern lies in the opacity of AI decision-making. Many AI models function as black boxes – even their developers cannot fully explain how a system arrived at a particular output. When such systems are used to generate targeting recommendations, military personnel may find it difficult to challenge or verify the algorithm's conclusions. This raises the risk that human oversight becomes procedural rather than substantive. In highly compressed decision environments, commanders may simply approve algorithmically generated target lists rather than conduct rigorous independent verification.
There are several global initiatives aimed at promoting the responsible use of AI in the military domain. However, these are largely voluntary and lack enforcement mechanisms. In the absence of binding regulations, major powers continue to integrate AI into military operations at an accelerating pace. The ongoing war in the Middle East therefore represents an early glimpse into the future of warfare. AI is no longer limited to logistics or intelligence support; it is increasingly shaping targeting decisions and operational planning.
As military competition intensifies among major powers, the incentive to adopt AI-enabled capabilities will only grow stronger. States fear that restricting their use of AI could place them at a strategic disadvantage relative to rivals. Nevertheless, technological superiority should not come at the expense of ethical responsibility and legal accountability. Mechanisms should be developed to make clear that states and individuals – rather than machines and algorithms – bear legal and ethical responsibility for the attacks undertaken. This would help prevent accountability gaps from emerging in the military use of AI. Moreover, governments must work toward establishing clear standards for transparency, accountability and human control in the use of AI in warfare. This includes ensuring that humans remain meaningfully involved in decisions regarding the use of lethal force, developing auditing mechanisms for military AI systems, and strengthening international legal frameworks governing emerging military technologies.
This article was published in another form at https://www.scmp.com/opinion/world-opinion/article/3345878/iran-strikes-are-wake-call-regulate-military-ai
Abdul Moiz Khan is Research Officer at the Center for International Strategic Studies (CISS), Islamabad.






