Artificial Intelligence and the War Without Rules of Algorithms: This is How Military Objectives and Strategies Are Chosen

2026-03-08 16:45:56 / MISTERE&KURIOZITETE ALFA PRESS

Modern warfare involves more than increasingly accurate missiles, smart bombs, hard-to-intercept drones deployed en masse, or “flying assassins” that pursue a soldier into the trenches. Today it also includes robotic cannons, drones that blockade tankers at sea, underwater drones that sink ships in the Black Sea, sabotage gas pipelines on the Baltic seabed, or tap the fiber-optic cables that connect continents. All of this is relatively easy and inexpensive for those who know how to use artificial intelligence (AI) skillfully.

In recent years, especially since the advent of language models like ChatGPT, the use of AI has exploded, and not only for physical attacks such as the coordinated bombings of Hezbollah facilities in Lebanon, reportedly carried out by Mossad.

With the proliferation of AI models in every field, including the military, and with the adoption of one of the most advanced systems, Anthropic's Claude, in the classified information flows of the US Armed Forces, this technology has become central to security. AI enters the lives of strategists as an assistant, but a powerful one: it analyzes, at incredible speed, endless streams of data from the internet, smartphones, satellites, credit cards, cameras and other surveillance systems, translating them within seconds into logistical plans, scenario simulations and engagement hypotheses.

In our daily lives, we are enthralled by free services and promises of a freer, more informed existence through the internet. But the reality has been different: greater inequality, the spread of extreme and brutal voices, an information ecosystem polluted by fake news, and profound changes in how we learn, communicate and socialize.

Risk of repeating mistakes

Now, when AI is changing both the way we fight and citizens' relationships with the state, which can use mass surveillance to limit freedoms, there is a risk of repeating the mistakes of the past, with even more devastating consequences.

One flashpoint is the conflict between the Pentagon, which rejects restrictions imposed by private companies, and Dario Amodei, head of Anthropic, who warns of grave risks arising from the lack of a legal framework for this technological “tsunami.” We have already seen examples in the war in Ukraine, then in Gaza, and now in Iran.

AI and the delegation of deadly decisions

AI can plan operations and partially replace strategists. Above all, it allows the authorities who decide on the use of lethal force to delegate target selection to machines and, in certain cases, to fire without a final human command.

The problems of using AI have also been analyzed in Italy. According to Dario Guarascio, from the Iraq war to the one in Ukraine, the use of information systems has changed fundamentally: algorithms no longer provide only descriptive data, but predictive and prescriptive data, directly indicating the actions to be taken while excluding or marginalizing the human being.

According to journalist and military analyst Gianluca Di Feo, over the last four years of the war in Ukraine, 70% of casualties have been caused by killer drones, far cheaper than a missile or an artillery shell, in operations guided by AI.

A dramatic example of AI in lethal decision-making came at the start of Israeli operations in Gaza, where an algorithm selected 33,000 people suspected of being terrorists or Hamas supporters for targeting. / Corriere della Sera
