The antidote to deepfakes? Adobe unveils new AI that can detect manipulated faces

Daily Mail · 14/6/2019 · Annie Palmer for Dailymail.com
SAN FRANCISCO, CA, JULY 1, 2018: Entrance to Adobe's San Francisco office in the historic Baker and Hamilton warehouse © Getty

Adobe researchers have developed an AI tool that could make spotting 'deepfakes' a whole lot easier. 

The tool is able to detect edits to images, including manipulations that could go unnoticed by the naked eye, particularly in doctored deepfake videos. 

It comes as deepfake videos, which use deep learning to digitally splice fake audio onto the mouth movements of someone talking, continue to rise in number.  

WHAT IS A DEEPFAKE VIDEO? 

Deepfakes are so named because they utilise deep learning, a form of artificial intelligence, to create fake videos. 

They are made by feeding a computer an algorithm, or set of instructions, as well as lots of images and audio of the target person. 

The computer program then learns how to mimic the person's facial expressions, mannerisms, voice and inflections. 

Adobe researchers have developed an AI tool that could make it easier to spot 'deepfakes'. The tool is able to detect edits to images, particularly those in doctored deepfake videos © Provided by Associated Newspapers Limited

If you have enough video and audio of someone, you can combine a fake video of the person with fake audio and get them to say anything you want.

'While we are proud of the impact that Photoshop and Adobe's other creative tools have made on the world, we also recognize the ethical implications of our technology,' Adobe wrote in a blog post. 

'Trust in what we see is increasingly important in a world where image editing has become ubiquitous - fake content is a serious and increasingly pressing issue. 

'...This new research is part of a broader effort across Adobe to better detect image, video, audio and document manipulations,' the firm added.  

Adobe worked with researchers from the University of California, Berkeley to develop the tool, which is described in a new arXiv paper published this week, titled 'Detecting Photoshopped Faces by Scripting Photoshop.' 

In addition to being more accurate than humans, researchers found that the AI system could also 'revert' the manipulated image back to its original state © Provided by Associated Newspapers Limited

The AI was built to detect the use of a Photoshop tool called Face Aware Liquify, which warps elements of a person's face.

According to Adobe, the Face Aware Liquify tool makes very subtle adjustments to a person's face, making them hard to detect. 

Through their research, they determined that the AI system they developed could spot manipulation with greater accuracy than humans.

Not only that, but it could also 'revert' the manipulated image back to its original state. 

'We show that our model outperforms humans at the task of recognizing manipulated images, can predict the specific location of edits, and in some cases can be used to "undo" a manipulation to reconstruct the original, unedited image,' the study notes.
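The 'undo' idea can be illustrated with a toy example (a minimal sketch, not the paper's method: the researchers' model must *predict* the warp applied to a face, whereas here the warp is assumed known and invertible):

```python
import numpy as np

rng = np.random.default_rng(1)
original = rng.random((8, 8))  # stand-in for a face image

# Forward "manipulation": shift every row one pixel to the right,
# a crude stand-in for a Liquify-style warp.
warped = np.roll(original, shift=1, axis=1)

# "Undo": when the warp is known, applying its inverse
# reconstructs the original image exactly.
reconstructed = np.roll(warped, shift=-1, axis=1)
```

In the real system the warp field is unknown and must be estimated from the image alone, so reconstruction is only approximate; the toy above shows why a good estimate of the edit is enough to reverse it.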

a group of people posing for a photo: Adobe determined that the AI system could spot manipulation with greater accuracy than humans. To build the system, they trained the AI using fake images such as these © Provided by Associated Newspapers Limited

To build the AI system, researchers trained it on a collection of thousands of fake images, created by applying the Face Aware Liquify feature to a set of real photos scraped from the internet. 
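In miniature, that training recipe looks like the sketch below (an illustrative toy, not Adobe's pipeline: the block-shift 'warp', the random 12×12 images and the logistic-regression classifier are invented stand-ins for Face Aware Liquify edits, real photos and the paper's neural network):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_warp(img):
    """Shift a small block of pixels -- a crude stand-in for a face warp."""
    out = img.copy()
    r, c = rng.integers(0, 8, size=2)
    out[r:r+4, c:c+4] = np.roll(out[r:r+4, c:c+4], shift=1, axis=1)
    return out

# Build a labelled dataset: label 0 = original, label 1 = warped "fake".
originals = [rng.random((12, 12)) for _ in range(200)]
X = np.array([img.ravel() for img in originals] +
             [random_warp(img).ravel() for img in originals])
y = np.array([0] * 200 + [1] * 200)

# Tiny logistic-regression classifier trained with gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probability of "fake"
    grad = p - y
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

p = 1 / (1 + np.exp(-(X @ w + b)))
accuracy = ((p > 0.5) == y).mean()
```

The key design point mirrors the paper: because the fakes are generated by scripting the editing tool itself, labelled training data can be produced automatically at scale, without humans annotating which images are manipulated.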

Overall, humans were able to correctly spot an altered image 53 percent of the time, whereas the AI system achieved a success rate of nearly 99 percent.  

'This is an important step in being able to detect certain types of image editing, and the undo capability works surprisingly well,' Gavin Miller, head of Adobe Research, said in a statement.   

'Beyond technologies like this, the best defense will be a sophisticated public who know that content can be manipulated — often to delight them, but sometimes to mislead them.'  

