“The situation is going to get worse – much worse – before it gets better.”
A staggering number of viral photos from both sides of the Israel-Hamas conflict have been revealed to be AI fakes – and experts say the problem will only get worse.
In interviews with the Associated Press, researchers from several companies and organizations tasked with verifying the veracity of online claims said there had been a sinister influx of doctored AI images showing massacred children, used to shift blame onto each side of the bloody conflict between Israel and the terrorist group Hamas.
Imran Ahmed, CEO of the Center for Countering Digital Hate, said that whether people share out-of-context images from the long list of previous conflicts in Israel and Palestine or more recent digital dupes – or in some cases a combination of both – the heartbreaking effect is the same.
“People are being told right now: look at this picture of a baby,” Ahmed said. “The misinformation is designed to trick you into participating.”
Although this is far from the first conflict in which AI-manipulated propaganda has been deployed – Russia’s invasion of Ukraine sparked a similar wave of digital forgeries – fake photos of dead or injured children are not only a particularly horrific example of the cruel power of the technology, but also a harbinger of worse things to come.
“The situation is going to get worse – much worse – before it gets better,” Jean-Claude Goldenstein, CEO of digital verification company CREOpoint, which created a database of viral deepfakes from Gaza, told the AP. “Images, video and audio: with generative AI, this is going to be an escalation you haven’t seen.”
Layers of Deception
Make no mistake: a staggering number of children have died in the bloody conflict. Obfuscating the issue with false images distorts reality in many ways – including giving people who see real images documenting the horrors of war an excuse to dismiss them as AI fabrications.
And to make matters even more complicated, tools meant to detect whether a photo is real or AI-manipulated can sometimes get it wrong – an increasingly well-known problem that bad actors can also exploit to sow further discord in a conflict that has no shortage of it.
While dedicated disinformation artists are always one step ahead of efforts to debunk them, digital security expert David Doermann, who led the Pentagon’s Defense Advanced Research Projects Agency (DARPA) efforts on the national security risks of AI-related disinformation, told the AP that governments and the public and private sectors need to work together not only to strengthen their technology, but also to enforce stricter regulations and standards.
“Every time we release a tool that detects this, our adversaries can use AI to cover up the traces,” Doermann, now a professor at the University at Buffalo, told the AP. “Detecting and trying to remove these things is no longer the solution. We need a much larger solution.”
More on AI manipulation: Benzinga withdraws ‘interview’ with rapper allegedly generated by AI