How can we learn to trust AI?

The rapid adoption of AI means an increasing number of images we’re exposed to are now being created by these tools. CR examines the public scepticism around AI-generated imagery and what can be done to build trust

When it comes to the images we consume on a daily basis, our savviness around what's real and what isn't has developed hugely over the years. With the prevalence of tools like Photoshop, we've become increasingly aware that images can be manipulated, and brands have had to be more transparent about this — beauty brands like Maybelline and L'Oreal, for instance, had to disclose the use of fake eyelashes in their mascara ads around a decade ago.

But the mass uptake of AI feels like a different, more unwieldy beast. Image-altering software typically changes images that have already been created by someone taking a photograph. Creating an entire image with AI tools is murkier: we don't always know the source material, the context in which the image appears can't always be controlled, and sometimes the image is a fake masquerading as truth. Unsurprisingly, the rapid adoption of the technology has led to a wave of scepticism and mistrust.

"It is something we've been looking at in a lot of detail … the excitement [for AI] within the industry is not represented in the general population," says Rebecca Swift, senior vice president, creative at Getty Images. "There are more consumers who are feeling anxious about AI content. They don't want to be lied to, and they don't want to be fooled. They want to know whether an image has been created by AI, and we've had the same result in terms of all the surveys we've done across the world."

Top and above: Another America, Phillip Toledano