AI Breakthrough: Can You Trust the Photos You See?

Researchers have found that AI can now generate images of real people that are nearly indistinguishable from actual photos, raising serious concerns about trust in visual media and underscoring the urgent need for reliable detection tools.

Artificial intelligence has reached a new milestone, and its implications are both fascinating and troubling. A study recently published in the journal Cognitive Research: Principles and Implications has shown that AI can now generate images of real people that are virtually impossible to differentiate from genuine photographs. This breakthrough heralds a new level of “deepfake realism” and ignites urgent concerns about misinformation and trust in visual media.

Researchers from Swansea University, the University of Lincoln and Ariel University collaborated to create hyper-realistic images using advanced AI models, including ChatGPT and DALL·E. The models produced images of both fictional faces and well-known ones, such as celebrities, and the researchers found that even images of familiar faces were indistinguishable from real photos.
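
To give a sense of what these everyday AI tools look like in practice, the sketch below shows roughly how an image can be requested from DALL·E through OpenAI's Python SDK. The model name, prompt and parameters here are illustrative assumptions, not the study's actual setup.

```python
# A minimal sketch (not the study's code) of generating a photorealistic
# face with OpenAI's image API. Assumes the `openai` Python package is
# installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Hypothetical prompt; the researchers' actual prompts are not given here.
result = client.images.generate(
    model="dall-e-3",
    prompt="A photorealistic studio portrait of a middle-aged man",
    size="1024x1024",
    n=1,
)

# Each generated image is returned as a hosted URL.
print(result.data[0].url)
```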

“Studies have shown that face images of fictional people generated using AI are indistinguishable from real photographs. But for this research we went further by generating synthetic images of real people,” Jeremy Tree, a professor in the School of Psychology at Swansea University, said in a news release.

Across four separate experiments, the researchers found that comparison photos and participants’ prior familiarity with some of the faces were of limited help.

In one experiment, participants from various countries, including the United States, Canada, the UK, Australia and New Zealand, were shown a mix of real and AI-generated faces. The researchers found that participants struggled to identify which images were real, highlighting just how convincing AI-generated photos can be.

In another experiment, participants were shown images of Hollywood stars like Paul Rudd and Olivia Wilde and asked to distinguish genuine photos from AI-generated ones. Participants frequently misidentified the images, underscoring how sophisticated these synthetic images have become.

“The fact that everyday AI tools can do this not only raises urgent concerns about misinformation and trust in visual media but also the need for reliable detection methods as a matter of urgency,” Tree added.

The implications of this technological advancement are profound. AI-generated images could be misused in various ways, such as creating deceptive endorsements from celebrities or political figures. This could significantly influence public opinion and erode trust in visual content.

“This study shows that AI can create synthetic images of both new and known faces that most people can’t tell apart from real photos. Familiarity with a face or having reference images didn’t help much in spotting the fakes; that is why we urgently need to find new ways to detect them,” added Tree. “While automated systems may eventually outperform humans at this task, for now, it’s up to viewers to judge what’s real.”

Source: Swansea University