DEEPFAKES: ETHICAL AND LEGAL IMPLICATIONS

Author: KHUSHBOO BHARTI, INSTITUTE OF LAW, JIWAJI UNIVERSITY, GWALIOR

Edited By: Ritesh Singh Shekhawat, MJRPU, Jaipur

ABSTRACT

The first deepfake videos went viral in 2017, when clips uploaded to Reddit swapped the faces of well-known Hollywood actors onto the bodies of adult-film performers. In 2018, comedian Jordan Peele poked fun at the technology and issued a warning to his audience in a deepfake video featuring former President Barack Obama. Due to the increasing use of deepfakes, the U.S. House Intelligence Committee convened hearings in 2019 on the potential threats they pose to national security. Since then, unfortunately, deepfakes have evolved into more intricate and difficult-to-detect forms.

Over the past five years, their use has expanded significantly with the surge in deepfake applications. These manipulations are frequently used to spread misleading information and cast doubt on significant issues involving public and private entities, as well as to harass, threaten, and defame individuals. Moreover, deepfakes may infringe intellectual property rights through the unauthorized use of particular words, symbols, or trademarks. They may also seriously violate copyright, privacy, and data protection laws, in addition to infringing human rights.

Although some governments have implemented AI regulations, many hesitate out of concern for free speech. Online platforms such as YouTube have established frameworks for managing manipulated content, but enforcement remains complex and expensive. Deepfakes pose a serious problem because they can easily be mistaken for authentic recordings. As a result, both public and commercial organizations have been forced to create tools and policies for identifying and managing them. This article explores the obligations and the moral and legal ramifications of deepfake technology.

KEYWORDS

Deep learning, deepfakes, media manipulation, obstacles, deception, misappropriation, deceit, infringement.

INTRODUCTION[1]
“Deep learning” and “fake” are combined to form the word “deepfake.” It describes a category of synthetic media in which real information is mimicked through the manipulation of audio or video. Synthetic media involves the creation, editing, and modification of data using automated methods, primarily through artificial intelligence algorithms.
Media manipulation is not a new phenomenon and has been utilized across various historical periods with differing degrees of success.

It has been employed in political propaganda and blockbuster movies featuring special effects. While the advent of deepfake technology has opened up numerous avenues for marketers, it brings with it considerable ethical dilemmas that must be tackled to preserve consumer trust.

Corporations must explore the prospects of immersive content while managing concerns related to privacy, deceit, and manipulation.

OBSTACLES[2]
Although synthetic media isn’t a recent development, advancements in technology have introduced new challenges. Creating deepfakes has become remarkably easy. The democratization of the internet, coupled with significant advancements in AI algorithms, allows individuals without expertise to alter and edit media. They can then rapidly disseminate this media on a large scale via social media and the internet.

Current detection methods are inadequate to meet the challenges presented by deepfake technology. Deepfakes can be profitable and useful, but they can also be misused and have dangerous results. As of now, neither the EU nor the UK has a complete legal framework that would effectively govern deepfakes to safeguard people or businesses.

ETHICAL DILEMMAS SURROUNDING DEEPFAKE TECHNOLOGY AND ITS REPERCUSSIONS FOR ENTERPRISES[3]

As AI technology continues to evolve, deepfakes have emerged prominently in the realm of business promotion. Artificially generated visual and audio content has enhanced the way companies tailor their marketing strategies to align with consumer inclinations.

Despite deepfakes serving as effective promotional instruments for enterprises, they raise significant ethical issues tied to privacy breaches and the lack of consent. Some of these ethical dilemmas and their repercussions for businesses include:

Misinformation

Deepfake technology possesses the capability to fabricate deceptive marketing clips and audio about companies, which are incredibly persuasive but inherently misleading.

Such misinformation distorts reality, potentially steering people off course and compelling them to make decisions they might rue in the future. This phenomenon sparks serious apprehension about the degradation of trust between businesses and their clientele.

Navigating Legal and Regulatory Obstacles

The advent of deepfake technology in advertising has ushered in a myriad of regulatory and legal obstacles. There is a pressing need for the government to roll out fresh legislation specifically aimed at deepfake content. This urgency stems from the inadequacy of current laws and frameworks to effectively manage the spread and creation of deepfakes, resulting in significant loopholes in accountability and enforcement.

Infringement of Personal Privacy and Consent

The use of deepfakes raises significant ethical quandaries because it entails manipulating an individual’s voice or likeness without their permission, often for nefarious ends such as tarnishing reputations or spreading misleading information, thereby breaching their privacy.

Shaping Public Sentiment

Within the realm of marketing, fabricating or disseminating misleading content can steer a consumer’s views or sway their buying choices by circulating deceptive news, endorsements, or reviews.

Deception and Identity Misappropriation

Deepfake technology can be misused by malicious actors to impersonate others using their likenesses, voices, or identities. Fraudsters may exploit this personal information to carry out scams. Since most individuals and enterprises are not equipped to detect deepfakes, they may inadvertently fall victim to schemes that lead to reputational damage and financial loss.

HOW DOES DEEPFAKE TECHNOLOGY FUNCTION?

Deep learning algorithms are employed to create deepfakes. Deep learning, a subset of artificial intelligence, mimics the way the human brain processes data, enabling it to learn independently through examples rather than direct human instruction.
Specifically, synthetic media and deepfakes typically rely on Generative Adversarial Networks (GANs), which consist of two competing neural networks that together produce high-quality fake content. The setup comprises three components: real-world training data, a discriminator, and a generator.

The discriminator network is trained on real-world data to determine whether the generator is producing genuine or artificial content. The generator typically creates text, images, or video. It starts from random input and, as its name suggests, produces progressively higher-quality samples in an attempt to convince the discriminator that its output is authentic, real-life data. Initially, the generator may be far off the mark: its outputs might begin as random or unclear, akin to static or noise, but they improve with training. This is achieved by continuously refining both the discriminator and the generator, which compete to produce replicas that closely resemble authentic material.
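For readers who want a more concrete picture of this adversarial loop, the following is a minimal sketch in PyTorch. It is an illustration only, not a production deepfake pipeline: the tiny fully connected networks, the layer sizes, and the random placeholder “real” data are assumptions made for the example, whereas actual deepfake systems train far larger models on real image, video, or audio data.

```python
# Minimal GAN sketch (illustrative only): a generator learns to produce
# samples that a discriminator cannot distinguish from "real" data.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64            # assumed sizes for this example

generator = nn.Sequential(               # turns random noise into a fake sample
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(           # scores how "real" a sample looks (0..1)
    nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    # Stand-in for real-world training data (in practice: face images or audio).
    real = torch.randn(32, data_dim)
    fake = generator(torch.randn(32, latent_dim))

    # 1) Train the discriminator to tell real samples from generated ones.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the (just updated) discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

The key point the sketch illustrates is the alternation: the discriminator is updated to separate real from fake, then the generator is updated to defeat the improved discriminator, and repeating this loop gradually pushes the fakes closer to the real data.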

Where are deepfakes being observed? Their use is becoming more widespread. Sensity, a company specializing in visual threat intelligence, identified 14,678 deepfakes on the internet in July 2019. By June 2020, the number had surged to 49,081, more than tripling in under a year.

The volume of deepfakes found online is doubling approximately every six months, a clear pattern of exponential growth. As deepfake creation increases, people may begin to question the authenticity of genuine videos, since it becomes easier for someone featured in a compromising video to claim it was a deepfake. This phenomenon is known as the ‘liar’s dividend’: as awareness of deepfakes grows, people become more skeptical of video in general, making it easier to dismiss real footage as fake.
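To see how the figures quoted above square with the six-month doubling claim, here is a rough back-of-the-envelope check (an illustration only, assuming perfectly steady doubling and measuring time t in months from July 2019):

```latex
% Back-of-the-envelope check, assuming steady doubling every six months
% and using only the Sensity figures quoted above (July 2019 to June 2020
% is roughly eleven months):
\[
N(t) = N_0 \cdot 2^{t/6}, \qquad
N(11) \approx 14{,}678 \times 2^{11/6} \approx 14{,}678 \times 3.6 \approx 52{,}000,
\]
% which is in the same range as the 49,081 deepfakes actually reported,
% so the figures are broadly consistent with the doubling claim.
```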

The AI technology used to create deepfakes and synthetic media is still relatively new, but it has already progressed enough to produce highly convincing fake images, with comparably convincing video and audio manipulation close behind. Well-known deepfakes that have attracted widespread attention include a video of ‘President Obama’ delivering a public address and a clip that places Jim Carrey’s face on Jack Nicholson’s in The Shining.

THE LEGAL POSITION

The technology in this field has outpaced the law, creating a need to fill regulatory gaps. Nevertheless, deepfake technology also has positive practical applications. Its use is expected to bring beneficial changes to various business sectors, including banking, where AI chatbots can offer realistic customer service and reduce the need for human interaction. In the field of accessibility, the technology is expected to help disabled individuals enhance their capabilities and regain independence and autonomy. For instance, people with ALS can record their voices before losing the ability to speak and later use AI to digitally replicate them.

THE UNITED KINGDOM VIEWPOINT

Intellectual Property Rights

Currently, there is no comprehensive legal framework dedicated to addressing deepfakes in the UK. However, multiple legal recourses are available. For instance, someone harmed could attempt to get deepfakes removed from social media sites by obtaining a court order based on copyright infringement.

This may be challenging to prove due to the various rights holders involved, and it will vary depending on the specific content used in the deepfake and whether it amounts to copying a substantial part of the copyrighted work. Additionally, deepfakes could potentially qualify for exceptions under the Copyright, Designs and Patents Act 1988 (CDPA). It appears that the UK copyright system is not adequately equipped to handle deepfakes. However, regulators and legislators are working towards addressing this issue. For example, WIPO recently released ‘The Updated Paper on Intellectual Property Policy and Artificial Intelligence’.

The paper raised questions about whether the copyright system is suitable for regulating deepfakes or if a new audiovisual framework is needed. WIPO also expressed concerns about copyright ownership and fair compensation for individuals whose likenesses and performances are used in deepfakes.

DECEPTIVE IMITATION AND MISREPRESENTATION[4]

Although image rights are not formally recognized in the UK, English case law has developed to provide protection against the commercial misappropriation of an individual’s image.
In Fenty v Arcadia Group, the UK retailer Topshop sold a t-shirt bearing singer Rihanna’s likeness without her authorization. As a result, Rihanna brought a ‘passing-off’ claim in the UK High Court. The Court found that many consumers might buy the t-shirt under the false impression that Rihanna had endorsed it, harming her reputation.

The UK Court of Appeal unanimously dismissed the appeal, upholding the High Court’s decision. Despite Rihanna’s victory, the Court made clear that merely placing someone’s image on clothing is not automatically deceptive and that, under English law, celebrities do not have absolute control over how their image is used.

Given this information, public figures may not always succeed in relying on a passing-off claim, and the claim is unlikely to be feasible for individuals who are not well-known or whose image has not been used commercially before. These limitations could pose a challenge in situations involving deepfakes featuring non-celebrities or individuals not associated with endorsing or promoting a product or service.

DEFAMATION OF CHARACTER

A victim of a deepfake may be able to bring a claim under defamation law if it can be demonstrated that the deepfake has caused, or is likely to cause, serious harm to their reputation. The Defamation Act 2013 consolidated and simplified many of the existing laws and rulings in this area and, importantly, set a new threshold for bringing a defamation claim.

Under this threshold, a harmed person must demonstrate that a deepfake has caused, or is highly likely to cause, serious damage to their reputation for it to be deemed defamatory. In the 2019 case Lachaux v Independent Print Ltd & Ors, the Supreme Court confirmed that the Defamation Act 2013 raised the threshold of seriousness needed to bring a claim, and that meeting the ‘serious harm’ test depends on the actual impact of the perpetrator’s actions.

While the higher threshold was intended to deter frivolous claims, it may also unintentionally restrict the remedies available to victims of deepfake abuse. It is unclear what constitutes ‘serious reputational harm’ in the context of deepfakes, and this ambiguity could hinder individuals seeking legal redress under the Act.

EUROPEAN UNION VIEWPOINT

Currently, as in the UK, there are no specific European laws addressing deepfake-related issues. However, a broader initiative is underway to tackle misinformation across Europe, encompassing deepfakes and synthetic media.

In 2018, the European Commission introduced the ‘Code of Practice on Disinformation’ to curb false information online. The Code sets out commitments for its signatories, including transparency in political advertising, shutting down fake accounts, and not monetizing misinformation. Facebook, Google, and Twitter are among the companies that have signed the Code. The Commission has also proposed initiatives to improve media literacy among EU citizens and has urged the establishment of a European group of fact-checkers to promote quality journalism and to understand the sources and methods of misinformation.

The European Parliament has acknowledged the unique challenge posed by deepfakes and has suggested addressing it with AI, including a requirement, in line with the Commission’s ethical guidelines, that all deepfake content disclose its lack of authenticity.

POSSIBLE WAYS FORWARD[5]

Deepfakes present a particularly complex issue. Upcoming laws like the Online Harms Bill appear likely to be overly broad in addressing deepfakes and their intricate ethical dilemmas. Regulatory gaps related to online platforms are evident, and the presence of deepfakes highlights these gaps. In the absence of dedicated legislation addressing this particular issue, what alternative approaches can be pursued to mitigate the adverse effects of synthetic media?

One suggestion is to establish a new ‘Office for Digital Society’ to oversee online content, data, and privacy. A centralized regulator could unite current regulators and reduce existing regulatory gaps. For any effective regulation to be implemented, regulators must be equipped with authority and sufficient resources to make a difference.

In the future, laws focused on deepfake technology in the UK and the EU should specify the approved and prohibited uses of deepfakes. This will provide social media companies with clear guidelines for monitoring content on their platforms. Legislation should also enable internet platforms to share deepfake information among themselves. This will simplify the process of platforms alerting each other about harmful content and is expected to reduce the spread of synthetic media in mainstream media. Alongside potential legal reforms, governments need to allocate resources to develop forensic media technologies to facilitate the identification of deepfakes.

Both the UK and the EU must be prepared to legislate effectively and with purpose in this field. It is encouraging to see broader efforts to combat misinformation and safeguard online users. Nevertheless, with the increasing spread of false information showing no signs of abating, deepfakes and synthetic media are poised to become the predominant issue.

REFERENCES

[1] The Ethics of Deepfakes: Understanding the Impact of Generative AI on Society, Medium

https://medium.com/@rickspair/the-ethics-of-deepfakes-understanding-the-impact-of-generative-ai-on-society-genai-innovation-ceeaf2c95be6

[2] Regulating deep fakes: legal and ethical considerations

https://www.researchgate.net/publication/345383883_Regulating_deep_fakes_legal_and_ethical_considerations

[3] Are Deepfakes Illegal? Overview Of Deepfake Laws And Regulations

https://hyperverge.co/blog/are-deepfakes-illegal/

[4] Emerging Technologies and Law: Legal Status of Tackling Crimes Relating to Deepfakes in India…
https://www.scconline.com/blog/post/2023/03/17/emerging-technologies-and-law-legal-status-of-tackling-crimes-relating-to-deepfakes-in-india/

[5] Debating the Ethics of Deepfakes

https://www.orfonline.org/expert-speak/debating-the-ethics-of-deepfakes

 
