AI solutions are needed to stop dangerous DeepFake videos


You may have seen this video, a “DeepFake,” publicised by the BBC. It calls into question the very foundations of democracy. In the fake video, Labour leader Jeremy Corbyn appears to endorse Conservative leader Boris Johnson, and vice versa – each politician telling viewers to vote for the other in the 12 December Parliamentary election.


Obviously, it’s easy to see that the video is a fake. The DeepFake video was created as an academic exercise, to show how falsified videos can dangerously undermine the democratic process. In this case, the video was created by a research firm called Future Advocacy and UK artist Bill Posters.


What can be done about DeepFake videos?

We believe that we may have one solution for detecting DeepFakes – but first, it’s worth examining why they are so dangerous in today’s political climate.

Undermining democracy

It’s easy to see how false information could sway voters. At a time of relative political unrest, voters are sensitive to the information that is fed to them on their social media feeds. Voters may also have the sense that if something appears on their social media feed, it has been handpicked just for them. That sense of “personal communication” may make an ad more powerful than if they saw it on TV, for instance. Maybe it’s no surprise that changes in polls seem to happen on a daily basis – and in any case, polls no longer seem to be accurate measures of election outcomes.

The rising pace of propaganda

It’s important to remember that propaganda is hardly a new concept – only the vehicle through which it is served to voters is new. In the age of social media, the main thing that has changed is that propaganda can be created quickly – and disseminated to millions of people within hours.

In the United States, so-called “fake news” was blamed, in part, for Trump’s surprise victory over Hillary Clinton in the 2016 Presidential election. Misinformation about Clinton’s health, and other topics, spread virally over social networks like Twitter and Facebook. Closer to home, highly targeted advertisements were singled out as helping sway voters to vote ‘Leave’ in the Brexit referendum on 23rd June, 2016. Facebook released all of those ads to regulators who were seeking to determine whether the targeted ads had infringed campaign law.

In both cases, in the UK and the US, this information was released mere days before the election – sometimes, the day before the election – when independent voters may have still been deciding upon their vote.

All of this adds up to one conclusion: DeepFakes are just one more way that voters can be swayed.

In November, Twitter made the bold move of banning all political advertisements. Jack Dorsey tweeted the following:

“Internet political ads present entirely new challenges to civic discourse: machine learning-based optimization of messaging and micro-targeting, unchecked misleading information, and deep fakes. All at increasing velocity, sophistication, and overwhelming scale.”

Then, this week, Google jumped on the bandwagon, indicating that it will no longer allow “micro-targeting” and that it will take further measures to ban DeepFakes. To date, Facebook has not taken a stance on what it will do to curb political advertising.


So, how can AI stop DeepFake videos?

Clearly, there is a need for automated solutions that can stop DeepFakes. The first step is separating them from legitimate videos – no human solution could scale to this gigantic task, even for giants like Facebook. Each day, 95 million photos and videos are shared on Instagram – and that is only one social network.

This scale makes one thing clear: the only way to fight an AI-created problem is with AI.

Liopa is developing LipRead, an automated lipreading technology based on deep learning algorithms. We’re currently evaluating whether it could be a potential detection tool for DeepFake videos.

In LipRead, AI algorithms detect speech via lip movements – they can determine what a person on camera is saying from video alone, without an audio feed. In DeepFake videos there are major discrepancies between the lip movements and the audio – because even though the fake video is made with powerful AI algorithms, they will never get the lip movements 100% accurate. (At least, not with the computing power available today.)
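The comparison described above can be sketched in miniature. This is an illustrative assumption, not Liopa’s actual LipRead pipeline: it stands in for the lip-read and audio transcripts with simple symbol sequences, and flags a clip when the normalised edit distance between them exceeds a (made-up) threshold.

```python
# Hypothetical sketch: flag a possible DeepFake when the speech predicted
# from lip movements diverges too far from the speech in the audio track.
# The symbol sequences and the 0.4 threshold are illustrative assumptions,
# not part of the real LipRead system.

def levenshtein(a, b):
    """Edit distance between two symbol sequences (classic DP, two rows)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def mismatch_score(lip_seq, audio_seq):
    """Normalised edit distance: 0.0 = perfect match, 1.0 = no overlap."""
    denom = max(len(lip_seq), len(audio_seq), 1)
    return levenshtein(lip_seq, audio_seq) / denom

def looks_fake(lip_seq, audio_seq, threshold=0.4):
    """True when lip-read speech and audio speech disagree too much."""
    return mismatch_score(lip_seq, audio_seq) > threshold

# Genuine clip: the lip-read sequence closely tracks the audio transcript.
print(looks_fake(list("vote for me"), list("vote for me")))    # False
# Fake clip: the audio says one thing, the lips say another.
print(looks_fake(list("vote for me"), list("back my rival")))  # True
```

A real detector would compare visemes (visual speech units) against phonemes with timing alignment, but the design choice is the same: score the disagreement between the two channels and threshold it.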

We believe that LipRead could be one solution to the enormous problem facing today’s democratic process.

Click here to find out more about how LipRead works.

