Gaza civilian deaths test Israel's AI precision claims


The Israeli military has said AI helps it more accurately target militants in its five-month war against Hamas, but as Gaza deaths rise, experts are questioning how effective algorithms can really be.

The health ministry in the Hamas-run Gaza Strip says the war has killed upwards of 30,000 people, the majority of them civilians.

"Either the AI is as good as claimed and the IDF (Israeli military) doesn't care about collateral damage, or the AI is not as good as claimed," Toby Walsh, chief scientist at the University of New South Wales AI Institute in Australia, told AFP.

The health ministry does not specify how many militants are included in the Gaza toll.

Israel has said its forces "eliminated 10,000 terrorists" since the war began in early October, triggered by a deadly Hamas attack on southern Israel.

Israel's claimed use of algorithms adds another layer of concern for activists already alarmed by artificial intelligence-powered hardware like drones and gunsights that are being deployed in Gaza.

The Israeli military told AFP it had no comment on its AI targeting systems.

But the army has repeatedly claimed its forces target only militants and take measures to avoid harm to civilians.

- 'Precise attacks' -

Israel began promoting AI-powered targeting after an 11-day conflict in Gaza during May 2021, which commanders branded the world's "first AI war".

The military chief during the 2021 war, Aviv Kochavi, told Israeli news website Ynet last year that the force had used AI systems to identify "100 new targets every day."

"In the past, we would produce 50 targets in Gaza in a year," he said.

The current Gaza offensive began with the Hamas attack of October 7, which killed about 1,160 people in Israel, both soldiers and civilians, according to Israeli figures.

Weeks later, a blog entry on the Israeli military's website said its AI-enhanced "targeting directorate" had identified more than 12,000 targets in just 27 days.

An unnamed Israeli official was quoted as saying the AI system, called Gospel, produced targets "for precise attacks on infrastructure associated with Hamas, inflicting great damage on the enemy and minimal harm to those not involved."

But an anonymous former Israeli intelligence officer, quoted in November by independent Israeli-Palestinian publication +972 Magazine, described Gospel's work as creating a "mass assassination factory."

Citing an intelligence source, the report said Gospel crunches vast amounts of data faster than "tens of thousands of intelligence officers" and identifies, in real time, locations likely to be used by suspected militants.

However, the sources gave no detail of the data put into the system or the criteria used to determine the targets.

- 'Dubious data' -

Several experts told AFP the military was likely to be feeding the system with drone footage, social media posts, information from agents on the ground, mobile phone locations and other surveillance data.

Once the system identifies a target, it could use population data from official sources to estimate the likelihood of civilian harm.

But Lucy Suchman, professor of anthropology of science and technology at Britain's Lancaster University, said the idea that more data would produce better targets was untrue.

Algorithms are trained to find patterns in data that match a certain designation -- in the Gaza conflict, possibly "Hamas affiliate", she said.

Any pattern in the data matching a previously identified affiliate would generate a new target, but any "questionable assumptions" would be amplified, Suchman explained.

"In other words, more dubious data equals worse systems."

- Humans in control -

The Israelis are not the first fighting force to deploy automated targeting on the battlefield.

As far back as the 1990-91 Gulf War, the U.S. military worked on algorithms to improve targeting.

For the 1999 Kosovo bombing campaign, NATO began using algorithms to calculate potential civilian casualties.

And the U.S. military hired the secretive data firm Palantir to provide battlefield analytics in Afghanistan.

Backers of the technology have repeatedly insisted it will reduce civilian deaths.

But some military analysts are sceptical that the technology is advanced enough to be trusted.

In a blog post for the British Royal United Services Institute defence think-tank, analyst Noah Sylvia said last month that humans would still need to cross-check every output.

The Israeli military is "one of the most technologically advanced and integrated militaries in the world," he said.

But "the odds of even the IDF (Israeli army) using an AI with such a degree of sophistication and autonomy are low."
