Adobe AI Can Recognize Photoshopped Images
Ever since Adobe Photoshop was released back in 1990, image editing and manipulation have come a long way. These days editing tools are so capable that it is becoming increasingly difficult to tell the difference between an authentic image and its edited counterpart.
This has led to the creation of increasingly realistic deepfakes, some of which are convincing enough that even news channels have covered them, e.g. the recent fake videos of Facebook CEO Mark Zuckerberg and US House Speaker Nancy Pelosi that were spread on various social media platforms.
This creates a security issue both for the general public and for famous personalities who can be hurt by fake videos and pictures. To address this issue, Adobe, along with researchers from the University of California, Berkeley, has created a new type of AI (based on a convolutional neural network) that is trained to detect facial manipulation done to images in Adobe Photoshop, specifically with the "Face Aware Liquify" feature.
They did so by scripting Photoshop to automatically alter random parts of thousands of images using the Face Aware Liquify feature. In addition, they had an artist alter another set of images that were mixed into the data set. They then tested the program against human users to see how accurately the software could tell the difference between original and photoshopped images. The results showed that while human eyes identified the photoshopped face 53 percent of the time, the AI did so correctly 99 percent of the time.
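To make the idea concrete, here is a minimal toy sketch of how a convolutional detector of this kind works in principle: convolve an image with a learned filter, pool the result, and score "manipulated vs. authentic" with a sigmoid. This is purely illustrative, written in NumPy with made-up weights; it is not Adobe's actual model, architecture, or code.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a grayscale image with one kernel."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def detect_manipulation(image, kernel, weight, bias):
    """Return a score in (0, 1); values near 1 mean 'likely edited'.

    A real detector would stack many convolutional layers with
    learned filters; this toy version uses one filter (an assumption
    for illustration) followed by ReLU, global average pooling,
    and a logistic output unit.
    """
    feature_map = np.maximum(conv2d(image, kernel), 0.0)      # ReLU
    pooled = feature_map.mean()                               # global average pool
    return 1.0 / (1.0 + np.exp(-(weight * pooled + bias)))    # sigmoid

rng = np.random.default_rng(0)
image = rng.random((8, 8))            # stand-in for a face crop
kernel = rng.standard_normal((3, 3))  # stand-in for a learned filter
score = detect_manipulation(image, kernel, weight=1.0, bias=0.0)
print(0.0 < score < 1.0)
```

In the actual research, the filters and output weights would be learned from the labeled data set of original and Liquify-altered faces described above, rather than fixed by hand.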
"This is an important step in being able to detect certain types of image editing, and the undo capability works surprisingly well. Beyond technologies like this, the best defence will be a sophisticated public who know that content can be manipulated, often to delight them, but sometimes to mislead them as well," said Gavin Miller, Head of Research, Adobe.