An alternate dimension: Deepfakes

Published on 18 Mar 2021 | 6 minute read
The final instalment of our virtual series explores deepfakes and their negative impact on data privacy.

In the previous instalment of this series on virtual beings, we examined what virtual influencers are and how brands use them to promote their products. This final instalment explores what deepfakes are, how the growing use of deepfake technology is harming data privacy, and how intellectual property law may offer a solution.

 

What are deepfakes?

Deepfakes are fabricated digital representations made using artificial intelligence. Though deepfakes are most commonly encountered as images or videos, they can also take the form of audio. Deepfakes are usually created by collecting images of a person’s face or body and feeding them to an AI algorithm called an autoencoder. The autoencoder compresses each image down to the key facial features and body movements it shares with the original model in the picture or video. A decoder then reconstructs the image with the target person’s information superimposed over the original model, making the target person’s likeness appear in the model’s place.
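The shared-encoder, per-person-decoder arrangement described above can be sketched in a few lines of code. The sketch below is purely illustrative: the weights are random and untrained, the ‘faces’ are toy 64-value vectors rather than real images, and all of the names (`encode`, `decode`, `face_swap`) are our own. In a real pipeline, one encoder is trained on both people’s faces while each person gets a dedicated decoder; swapping decoders at inference time is what produces the face swap.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    # Small random weights for a single dense layer (untrained).
    return rng.normal(0.0, 0.1, (n_in, n_out))

# Toy "faces": flattened 8x8 grayscale patches, i.e. 64 values each.
FACE_DIM, LATENT_DIM = 64, 8

# One shared encoder compresses any face to its key features...
encoder = init_layer(FACE_DIM, LATENT_DIM)
# ...while each person gets their own decoder that redraws those
# features in that person's likeness.
decoder_a = init_layer(LATENT_DIM, FACE_DIM)  # decoder for person A
decoder_b = init_layer(LATENT_DIM, FACE_DIM)  # decoder for person B

def encode(face):
    # Compress a face to a low-dimensional feature vector.
    return np.tanh(face @ encoder)

def decode(latent, decoder):
    # Reconstruct a full face from the compressed features.
    return latent @ decoder

def face_swap(face_of_a):
    """Encode person A's face, then decode it with person B's decoder:
    A's pose and expression, re-rendered with B's likeness -- the core
    deepfake trick."""
    return decode(encode(face_of_a), decoder_b)

frame = rng.normal(size=FACE_DIM)   # a frame showing person A
swapped = face_swap(frame)          # same pose, rendered as person B
print(swapped.shape)                # (64,)
```

With trained weights, `decoder_a` would learn to reproduce person A and `decoder_b` person B from the same shared feature space, which is why feeding A’s encoded features into B’s decoder superimposes B’s likeness over A’s movements.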

The first digitally rendered deepfakes were of low quality, and the average viewer could easily distinguish the deepfake image or video from the original, as most of them fell into the uncanny valley: telltale flaws included poor lip synching and strange facial expressions. However, deepfake technology is rapidly improving to correct such abnormalities. For instance, researchers at Stanford University, the Max Planck Institute for Informatics, Princeton University, and Adobe Research have demonstrated new deepfake software that allows users to edit the transcript of a video and thereby alter the words coming out of the speaker’s mouth.1 Developments in generative adversarial network (GAN) technology, the primary technology behind deepfakes, have also made them more convincing to the human eye. Not only are deepfakes becoming more realistic as the technology advances, they are also becoming more accessible to the average person and are being put to a widening range of uses.

 

Application

Deepfakes are put to many different uses. Academic and technical research groups have used them as new approaches to teaching and as tools for experimental studies. Deepfakes have also been used to recreate historical moments and bring art to life. For example, Scottish company CereProc ‘resurrected’ the voice of John F. Kennedy, making it possible for listeners to hear the speech he was due to deliver had he not been assassinated. Samsung’s AI Lab and the Skolkovo Institute of Science and Technology released an animated video showing the Mona Lisa moving and speaking like a real person, while the Dalí Museum in Florida, United States offers an AI program that lets visitors interact with a lifelike Salvador Dalí through digital screens installed throughout the museum. Another common application is ‘de-ageing’ or ‘reviving’ older or deceased actors for film production. Visual effects studios motion-capture CGI doubles as they emulate the facial and body movements of the original actor, and the original actor’s likeness is then overlaid onto the CGI double.

Deepfakes, however, are increasingly being used for nefarious purposes as they become more accessible to the general population. In 2019, the CEO of a large energy firm fell victim to fraud when cybercriminals used deepfake technology to impersonate the voice of a chief executive and instruct him to transfer approximately US$243,000 into another bank account. Deepfake technology has also been applied in pornography, where the faces of female celebrities have been superimposed over existing pornographic videos. Deepfake videos have likewise been used to misrepresent well-known political figures such as Donald Trump and Barack Obama. The rising use of the technology to create political videos has spread a great deal of disinformation among the general population, often with adverse effects. In Gabon, an allegedly deepfaked video of President Ali Bongo delivering his first public address since falling ill in 2018 helped trigger an unsuccessful coup attempt against the government. A distorted video of United States politician Nancy Pelosi, slowed down to make it seem that she was slurring and stumbling over her words, damaged her public standing during the 2020 United States presidential campaign. With the rise of ‘fake news’ in the digital age, the increasing use of deepfakes should worry the general population.

Measures have been taken to counteract misleading deepfakes. In 2020, Facebook launched the Deepfake Detection Challenge, an ‘open, collaborative initiative to accelerate the development of new technologies that detect deepfakes and manipulated media’2. The challenge was an open competition in which participants developed AI algorithms to spot deepfake videos. While the Deepfake Detection Challenge was a success, such measures have limited effect in preventing the malicious use of deepfake technology, as advances in generating deepfakes continue to outpace advances in detecting them.

 

Dangers of deepfakes

The various applications of deepfakes open up a multitude of opportunities for creative expression and innovative research, and offer a new medium for people to interact and share information with others. However, deepfakes also pose dangerous legal and ethical risks and have contributed to the prevalence of fake news and misinformation on the Internet today. As deepfakes become more realistic and accessible, it is crucial to examine the legal implications of malicious deepfakes.

From a legal perspective, deepfakes threaten data privacy through the unauthorised use of personal data in creating such content. As mentioned earlier, making deepfake content requires extracting a person’s digital facial or body data, but many creators of malicious deepfakes simply obtain such data without the subject’s consent. The issue is particularly acute in deepfake pornography, which is created by taking a person’s data without consent and superimposing it onto pornographic content, leading to the humiliation and exploitation of individuals. Meanwhile, deepfakes involving political figures can have very negative consequences: they can damage the figure’s reputation, inflame civil discord between opposing groups, and even pose a threat to national security.

Difficulties arise when dealing with these violations of privacy. One approach is to reduce the number of personal images and videos shared on the Internet by publishing only to private accounts, and to stay informed about deepfake developments. While this may mitigate many problems caused by deepfake content, it is not ideal for public figures such as celebrities and politicians, whose occupations rely on their personal brand. A more appropriate method is to impose legal liability on the wrongdoers instead. Given the nature of deepfake content, however, practical legal issues arise. For example, it can be difficult to identify the wrongdoer because multiple parties may be involved: the person who created the deepfake and the person who uploaded the content online. Establishing who the actual wrongdoer is can be a major obstacle for parties who wish to impose legal liability.3

Assuming a wrongdoer has been identified, the next major problem is how the aggrieved party can enforce their rights. Many jurisdictions around the world have no laws pertaining specifically to deepfakes, so parties must find other ways to enforce their rights. One possible avenue is intellectual property law. Specifically, parties may be able to rely on copyright law by claiming that the modifications made to the original content are infringing. Meanwhile, public figures may also rely on publicity rights, since their physical appearance has been misappropriated and, depending on where the deepfake content is uploaded, likely used commercially.

 

Conclusion

The use of deepfake technology is gaining traction thanks to its many useful applications in entertainment and education. Unfortunately, the technology has also been exploited by users who distort original image or video content for pornographic or deceptive purposes. This malicious use of deepfakes has garnered widespread attention because of its wide-reaching consequences, which include violating personal privacy and inciting political discord. While many jurisdictions have no laws that specifically address the problems caused by deepfake content, intellectual property laws such as copyright and publicity rights may offer an interim solution.

 

Sources:
  1. https://www.ohadf.com/projects/text-based-editing/
  2. https://ai.facebook.com/blog/deepfake-detection-challenge-results-an-open-initiative-to-advance-ai/
  3. Edvinas Meskys, Aidas Liaudanskas, Julija Kalpokiene, Paulius Jurcys, Regulating deepfakes: legal and ethical considerations, Journal of Intellectual Property Law & Practice, Volume 15, Issue 1, January 2020, Pages 24–31, https://doi.org/10.1093/jiplp/jpz167
Asia Regional Director & Global Head of Litigation
+852 3412 4001