What Are Deepfakes?

Jun 11, 2019 By James H, Writer

If the lady in the Mona Lisa painting could talk, she could tell us why she was smiling for the pose, couldn't she? But, of course, that is not possible because she is just a painting.

However, Samsung's AI lab in Moscow recently demonstrated a program that could create a video of a person talking from just a single still picture. The result? A talking Mona Lisa, thanks to a technology known as deepfake!

What is Deepfake?

The word "deepfake" is a combination of the words “deep learning” and “fake.” Deep learning refers to the use of machine learning and artificial intelligence to create images of human faces. The word was used first in 2017 when an anonymous person using the name “deepfakes” began to post images of celebrities’ faces on other people’s bodies.

Film companies sometimes use this technology in their movies. Now even amateurs can access it through FakeApp, an application for creating deepfakes. So, how does this technology work?

Machines That Learn

To start with, video recordings of a person are broken down into the smallest levels of detail that capture how their mouth and facial features move when they pronounce a sound like "oo" or "ah." These details, along with a 3D model of the lower face, are then put together so that the person can be made to "say" words they never actually said.
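To get a feel for this step, here is a highly simplified sketch in Python. The idea of mapping sounds to mouth shapes comes from the paragraph above, but the MouthShape class, the viseme table, and the animate function are made up for illustration; real systems learn these mappings from hours of video rather than from a hand-written table.

```python
from dataclasses import dataclass

@dataclass
class MouthShape:
    openness: float   # how far the mouth is open (0 = closed, 1 = wide open)
    roundness: float  # how rounded the lips are (0 = flat, 1 = fully rounded)

# Hypothetical lookup from sounds to mouth shapes ("visemes").
VISEMES = {
    "oo": MouthShape(openness=0.3, roundness=0.9),
    "ah": MouthShape(openness=0.8, roundness=0.1),
    "m":  MouthShape(openness=0.0, roundness=0.2),
}

def animate(sounds: list[str]) -> list[MouthShape]:
    """Turn a sequence of sounds into one mouth shape per video frame."""
    return [VISEMES[s] for s in sounds]

# Make the face "say" a word it was never recorded saying.
for frame in animate(["m", "ah", "oo"]):
    print(frame)
```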

Deepfakes use a technology called generative adversarial networks (GANs). A GAN pits two separate artificial intelligence systems against each other: one is trained to generate the images, while the other tries to tell whether they are fake. The two keep training against each other, over and over, until the generator produces a video that the other system cannot tell is fake!
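Here is a minimal sketch of that adversarial training loop, written in Python with the PyTorch library. The tiny networks, the random "real" data, and the chosen sizes are all placeholder assumptions to keep the example short; a real deepfake model would train much larger networks on actual face images.

```python
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32  # toy sizes, chosen only for illustration

# The generator turns random noise into a fake sample.
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# The discriminator tries to tell real samples from fake ones.
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(batch, data_dim)              # stand-in for real training images
    fake = generator(torch.randn(batch, latent_dim))

    # 1) Teach the discriminator to label real samples 1 and fake samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(batch, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(batch, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Teach the generator to make the discriminator label its fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Each round, the generator gets a little better at fooling the discriminator, and the discriminator gets a little better at catching fakes - the "teaching each other" described above.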

The Dangers

The more data there is about a person, the easier it is to produce a credible deepfake of them. This is why famous people, whose images and videos are all over the internet, are easy targets. If celebrities are deepfaked into doing something scandalous that they never did, their reputation could be at tremendous risk, because the public may not be easily convinced that the video is fake.

Fake news could easily get out of hand if people believed deepfaked videos were real, and that could have serious political and social consequences. Since there is currently no law that prevents such malicious use, governments are working on legislation that will control the use of deepfakes.

There is a lesson here for each of us, too: to be careful about what we post on the internet. In the future, you might see a picture or video of yourself and not be able to tell that it is fake! It all just goes to show that seeing is not always believing.

Sources: USA Today, CSO Online, The Verge, Wired, Business Insider