Adobe is selling artificially generated, realistic images of the Israel-Hamas war that have been used across the internet without any indication they are fake.
As part of the company’s embrace of generative artificial intelligence (AI), Adobe allows people to upload and sell AI images through its stock image subscription service, Adobe Stock. Adobe requires submitters to disclose whether their images were generated with AI, and clearly marks such images within its platform as “generated with AI”. Beyond this requirement, the guidelines for submission are the same as for any other image, including the prohibition of illegal or infringing content.
People searching Adobe Stock are shown a blend of real and AI-generated images. Like “real” stock images, some are clearly staged, whereas others can seem like authentic, unstaged photography.
This is true of Adobe Stock’s collection of images for searches relating to Israel, Palestine, Gaza and Hamas. For example, the first image shown when searching for Palestine is a photorealistic image of a missile attack on a cityscape titled “Conflict between Israel and Palestine generative AI”. Other images show protests, on-the-ground conflict and even children running away from bomb blasts — none of which are real.
Amid the flurry of misinformation and misleading online content about the Israel-Hamas war circulating on social media, these images, too, are being used without any disclosure that they are fake.
A handful of small online news outlets, blogs and newsletters have featured “Conflict between Israel and Palestine generative AI” without marking it as the product of generative AI. It’s not clear whether these publications are aware it is a fake image.
RMIT senior lecturer Dr T.J. Thomson, who is researching the use of AI-generated images, said there are concerns about the transparency of AI image use and whether audiences are literate enough to recognise their use.
“There is potential for these images to mislead folks, to distort reality, to disrupt our perception of truth and accuracy,” he told Crikey.
Thomson said discussions with newsrooms as part of his research have found that fears about the potential for misinformation are a top concern, along with questions about the labour implications of using AI images rather than on-the-ground photographers.
While AI images can be a tool, he also warned of their misuse: “You don’t want to be overly cautious, you don’t want to be scared of everything, because there are good reasons to use them. But you also have to have a bit of wisdom and cautiousness.”
Adobe did not respond to a request for comment.