COMD 1112 Sec. D112
Prof. Tracie Schaeffer
Friday 8:00am – 11:20am
Jennie Zhu Pan
In Modern Times, Seeing Is Not Believing: Deepfakes
Is your motto in life, "seeing is believing"? If so, you might be in big trouble. Nowadays, our ability to create content and share it with, quite literally, the whole world is widely praised. This contemporary ability is a powerful double-edged sword: being able to promptly and effortlessly share educational videos, images, and texts is indeed advantageous, but that same reach quickly becomes dangerous when it is used to spread misinformation or fakes. In 2017, a new type of software became widely available on the internet, one capable of swapping people's faces remarkably well using machine learning. The program is called FakeApp, and its users can create very convincing videos without any programming skill or knowledge. The threat this new technology poses to the public is enormous, because it undermines our ability to distinguish reality from fabrication and calls the legitimacy of digital media sources into question.
The infamous software is called FakeApp, but how does it work? First, we need to understand how face detection works. For the past few decades, cameras have included some kind of real-time face detection algorithm that recognizes faces from patterns of light and shadow, and FakeApp builds on this, using machine learning and face recognition technology to create videos from still images. The program collects images of faces as a database; these images can be downloaded automatically through image search engines such as Google, Bing, and DuckDuckGo. After the images are collected, the program must locate a face in each one. To do this, it converts the images to black and white, which reduces the amount of information in the image and makes lighting patterns easier to recognize: in a grayscale image, it is simple to determine which pixels are darker than their surrounding pixels. These brightness patterns reveal the pattern of a face, and hence a face. Once the program has built models of the faces, it uses machine learning and neural networks to superimpose one face onto the other in the video through trial and error (Raval). By comparing lighting patterns again, the software can detect when a face model has been placed incorrectly, and because it teaches itself how to correctly superimpose the images from all the samples it has collected, with enough time and enough tries it produces a fake yet very convincing video.
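The grayscale step described above can be sketched in a few lines. This is only an illustrative toy, not FakeApp's actual code; the function names and the tiny 3x3 "image" are hypothetical, and real detectors work on full photographs with far more sophisticated features.

```python
# Toy illustration of the grayscale "darker than its neighbors" idea.
# All names and data here are hypothetical, for illustration only.

def to_grayscale(pixel):
    """Collapse an (R, G, B) pixel to a single brightness value."""
    r, g, b = pixel
    # Standard luminance weights: discards color, keeps lighting.
    return 0.299 * r + 0.587 * g + 0.114 * b

def darker_than_neighbors(gray, x, y):
    """True if the pixel at (x, y) is darker than every adjacent pixel."""
    center = gray[y][x]
    neighbors = [
        gray[y + dy][x + dx]
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if not (dx == 0 and dy == 0)
    ]
    return all(center < n for n in neighbors)

# A 3x3 "image": one dark pixel surrounded by bright ones, the kind of
# local contrast (eyes, nostrils) a detector looks for.
image = [
    [(200, 200, 200), (210, 210, 210), (205, 205, 205)],
    [(198, 198, 198), (30, 30, 30),    (202, 202, 202)],
    [(207, 207, 207), (199, 199, 199), (201, 201, 201)],
]
gray = [[to_grayscale(p) for p in row] for row in image]
print(darker_than_neighbors(gray, 1, 1))  # the dark center stands out
```

Checking which pixels stand out from their neighbors is the first building block; real face detectors aggregate millions of such local brightness comparisons into a learned pattern.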
This technology was inevitable at some point; the problem is how widely available and how easy to use it has become. Many public figures have already been victims. Faces of celebrities have been swapped into pornographic videos so convincingly that the results are virtually undetectable to the untrained eye. Early victims included Daisy Ridley, Gal Gadot, Scarlett Johansson, and Taylor Swift, and their videos quickly went viral. If the powerful and famous cannot prevent it, imagine your own face being posted onto the web in a fake video. FakeApp was also used to create vast amounts of revenge porn: imagine someone creating and sharing a realistic, convincing fake pornographic video of a person they despise or hold a grudge against. Obtaining the images would not be a problem, since they could easily be extracted from the victim's social media (DeFranco). According to Eric Goldman, a law professor at Santa Clara University, the law cannot help the victims much: United States privacy laws do not apply to these fake videos, and forcing them down could be construed as prohibition or censorship (Farokhmanesh). To have a face-swap video removed from the internet, a victim must claim either defamation or copyright infringement, but neither has a guaranteed path to success, because neither cleanly applies to these fakes. Even where other laws might apply, none fully covers the creation of fake pornographic videos: the video is not copied from anywhere, and using someone else's face can be treated as a mask placed on a body whose owner willingly published it, so the result counts as a "creation" or as "art." Nevertheless, many websites, including Reddit, Twitter, and Pornhub, have completely banned deepfakes (Ellis).
Public figures' faces being placed into pornography is worrying and a serious issue, but it is not as severe as politicians, or others who hold strong positions of power, being used in fake videos in which they say or do outrageous things. It also opens the door to the opposite extreme, where a powerful person caught on genuine video can flatly deny the evidence against them, arguing that the footage is fake. That is terrible, because it could mark the beginning of the truth being buried and the end of the targeted person being believed as a victim. Not many people are aware of this type of software, which means many could easily be fooled by these videos, and honestly, nobody can blame them if they have never heard of the technology. Sharing such videos can have enormous, lifelong repercussions, damaging the victims but doing even more harm to societies, which can easily be locked into a constant stream of fake media.
Experts have worked relentlessly against deepfakes, and one possible way to recognize them is by analyzing the subject's blinking pattern. According to Siwei Lyu, a telltale flaw of many fake videos is the way the person in the video blinks: the blinking often looks unnatural or artificial, and in some videos the person never blinks at all while talking. This is a huge flaw, because a person's average blinking rate is about one blink every 2 to 10 seconds, and a single blink takes between 0.1 and 0.4 seconds while a person is talking. Blinking patterns are therefore a key signal for judging whether a video is fake, alongside the habit of not believing everything that is presented to you.
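The blink statistics quoted above already suggest a crude sanity check. The sketch below is a hypothetical heuristic built only from those numbers; it is not Lyu's actual detector, which analyzes eye regions with a neural network.

```python
# Toy heuristic from the quoted statistics: people blink roughly once
# every 2-10 seconds. A clip with far fewer blinks than even the slow
# end of that range deserves suspicion. Illustrative only; not a real
# deepfake detector.

def blink_rate_suspicious(blink_count, duration_seconds):
    """Flag a clip whose blink count falls below the plausible human range."""
    # At one blink per 10 seconds (the slowest quoted rate), a real
    # speaker should blink at least duration/10 times.
    minimum_expected = duration_seconds / 10
    return blink_count < minimum_expected

# A 60-second clip: a real speaker should blink at least ~6 times.
print(blink_rate_suspicious(blink_count=0, duration_seconds=60))   # → True
print(blink_rate_suspicious(blink_count=12, duration_seconds=60))  # → False
```

A clip that fails this check is not proven fake, of course; the point is that physiological regularities like blink rate give analysts something measurable to compare against.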
In the world of social media, people should think carefully before posting anything on the internet, because no remedy or magic can delete something that has been uploaded: once it is on the internet, it will always be on the internet. Deepfake technology will surely keep developing its machine learning, to the point where we can no longer notice the differences, and that is a scary prospect. Lies and conflicts caused by fake videos will circulate across the internet, and the ultimate truth of the things we see with our own eyes might be buried forever.
Baker, Henry, and Christian Capestany. "It's Getting Harder to Spot a Deep Fake Video." YouTube, Bloomberg, 27 Sept. 2018, www.youtube.com/watch?v=gLoI9hAX9dw. 3 Mar. 2019.
DeFranco, Philip. "Well… I Don't Know What's Real Anymore. DeepFakes and FakeApp Usher In New Age Of Fakes." YouTube, 31 Jan. 2018, www.youtube.com/watch?v=Uivy6vnP2B0. 3 Mar. 2019.
Ellis, Emma Grey. "Yes, People Can Put Your Face on Porn. No, the Law Can't Help You." Wired, Conde Nast, 26 Jan. 2018, www.wired.com/story/face-swap-porn-legal-limbo/. 3 Mar. 2019.
Farokhmanesh, Megan. "Is It Legal to Swap Someone's Face into Porn without Consent?" The Verge, 30 Jan. 2018, www.theverge.com/2018/1/30/16945494/deepfakes-porn-face-swap-legal. 3 Mar. 2019.
Lyu, Siwei. "The Best Defense against Deepfake AI Might Be . . . Blinking." Fast Company, 3 Aug. 2018. 3 Mar. 2019.
Raval, Siraj. "DeepFakes Explained." YouTube, 2 Feb. 2018, www.youtube.com/watch?v=7XchCsYtYMQ. 3 Mar. 2019.