Hamas Atrocities Are AI-Generated, Claim Multiple Sources

According to multiple sources, a picture of a burned Jewish baby killed by Hamas is the work of AI-generative software.

But an investigation by MetaNews suggests it is the ‘proof’ of fakery that is fake. The issue is a microcosm of a far deeper problem, as ordinary people attempt to distinguish reality from simulation and the truth from lies.

A post-truth world

Horrific images of a charred infant’s corpse are the work of generative AI. This is according to multiple outlets, including Times Now News and Defense Politics Asia. Influencer Jackson Hinkle, best known for his pro-Russia propaganda, is amplifying the same message through his social media channels.

The dispute began on Thursday when Jewish conservative Ben Shapiro shared the burned baby photograph on X, citing it as proof of Hamas brutality. A clearly emotional Shapiro did not hold back in his condemnation.

“You wanted pictorial proof of dead Jewish babies?” asked Shapiro. “Here it is, you pathetic Jew-haters. Israel will minimize civilian casualties. But Israel will not allow the pieces of human shit who did this to live. Every ounce of blood spilled in Gaza is on Hamas.”

Questions about the authenticity of the image emerged shortly afterward, however.

Political propaganda

The death of a child is always highly emotional. This makes the subject ripe for political propaganda.

If genuine, the image of a charred child’s corpse exposes the extreme brutality of Hamas terrorists who invaded Israel on Oct. 7.

Some outlets and critics on social media suggest the image is a concoction to create false sympathy for Israel and condemnation for Hamas. Their claim rests on two main points of evidence. Firstly, an AI tool called “AI or Not” said the photo was AI-generated. Secondly, the original, real photo was not of a charred baby but of a puppy.

On the first claim, “AI or Not” does not appear to be a reliable tool. When the same image is run through the platform multiple times, it returns different verdicts on whether the photo is AI-generated. Given these inconsistent, contradictory responses, “AI or Not” offers nothing of evidentiary value.

The second point, regarding the puppy photograph, is easier to settle. The source of that image is X user Stellar Man, who created the puppy picture in “5 seconds” to demonstrate how easy it is to fake photographic imagery.

The demonstration worked far too well. Some users are now passing off the faked puppy image as the original to “prove” the baby image is fake, and some media outlets are running with the fake puppy image.

In response, Stellar Man deleted the photo and said, “My meme has gone too far.”

Choose your own reality

In itself, the faked puppy image does not prove or disprove the authenticity of the charred baby photo. But the fake puppy shot does demonstrate just how easily people will believe in anything that appears to confirm their existing beliefs and biases.

Faking photographs in the age of AI is easier than ever before, creating an environment where people can no longer trust their eyes.

That is where credible media outlets should step in to fill the void with proper investigative work. The fact that some outlets are instead offering up the puppy photo as proof of AI fakery is concerning. If sections of the press cannot perform even the most basic journalistic checks and follow the evidence wherever it leads, how can the general public be expected to do any better?
