THE RISING MENACE OF DEEPFAKES: LEGAL IMPLICATIONS IN INDIA

WHAT ARE DEEPFAKES

The exponential growth of artificial intelligence and machine learning technologies has given rise to deepfakes – hyper-realistic synthetic media depicting events that never occurred. Deepfakes leverage powerful AI techniques such as generative adversarial networks (GANs) to manipulate or generate video and audio content, impersonating real people and making them appear to say or do things they never actually said or did [1]. From manipulating facial expressions and speech patterns to generating entirely fictional video footage, deepfakes represent an alarming new frontier of disinformation.
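At a technical level, the adversarial setup behind most deepfakes pits two neural networks against each other: a generator that synthesizes media and a discriminator that tries to tell the synthetic apart from the real. The minimal Python sketch below, which assumes the PyTorch library and uses placeholder network sizes and random stand-in data, is purely illustrative of this training dynamic; production face-swap and voice-cloning models are vastly larger and trained on real datasets.

```python
# Illustrative sketch of GAN training (assumes PyTorch); sizes and data are placeholders.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # toy dimensions, not those of real deepfake models

# Generator: maps random noise to a synthetic "image" vector.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: scores inputs as real (close to 1) or fake (close to 0).
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.rand(32, image_dim)   # stand-in for a batch of real training images
noise = torch.randn(32, latent_dim)

# Discriminator step: learn to score real images as 1 and generated ones as 0.
fake_batch = generator(noise).detach()
d_loss = (loss_fn(discriminator(real_batch), torch.ones(32, 1))
          + loss_fn(discriminator(fake_batch), torch.zeros(32, 1)))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# Generator step: learn to produce fakes that the discriminator scores as real.
g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
```

With each round of this contest, the discriminator becomes better at spotting fakes and the generator becomes better at fooling it, which is why mature deepfakes are so difficult to detect by eye.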

India witnessed its first political deepfake during the 2020 Delhi elections, where a fake video portrayed a BJP leader criticizing his opponent [2]. More disturbingly, deepfake pornography depicting celebrities is also on the rise. Such non-consensual use tramples privacy rights and causes reputational damage. Unregulated deepfakes allow malicious actors to spread misinformation at unprecedented scale – from manipulating stock prices to inciting violence through fake news [3]. The Delhi Police has even warned that deepfakes could be used for financial fraud, identity theft, and terrorism [4].

This proliferation of deepfakes in the absence of adequate legal safeguards poses grave threats to privacy, dignity, democratic processes and national security in India [5]. Our current legal framework under the IT Act and the IPC lacks sufficient mechanisms to regulate or detect deepfakes. While countries like the US and China have introduced specialized legislation, India is yet to enact a robust, holistic law to address a technological challenge that could upend truth and trust. This blog analyses the ethical, legal and policy issues surrounding deepfakes in India and offers recommendations for regulation. As deepfake technology grows more accessible, the need for comprehensive solutions becomes all the more urgent.

TYPES OF DEEPFAKE AND THEIR USES

Deepfakes leverage AI to fabricate or alter video and audio content. They are put to a range of purposes, both benign and malicious, including, inter alia, the following:

Pornography: The most common use of deepfakes is for non-consensual pornographic content. Faces of female celebrities or ordinary women are swapped onto porn stars’ bodies without consent. This violates privacy and causes reputational damage [7]. For instance, an Indian man was arrested in 2019 for making deepfake porn of his girlfriend [8].

Politics: Deepfakes can spread misinformation and propaganda during elections. The Delhi polls saw a BJP leader’s real video altered to show him criticizing opponents [9]. Such fakes can sabotage campaigns and malign candidates. If unchecked, political deepfakes could undermine free and fair elections.

Humor/Parody: Many satirical deepfake videos feature celebrities or politicians, whose faces and voices are placed in comedic or absurdist situations. While parodying public figures is lawful, consent issues arise when private individuals become subjects. Further, humorous fakes can numb audiences to more dangerous uses of this technology [10].

Fraud: AI voice cloning can imitate CEOs or officials to obtain sensitive data or authorize payments. In 2019, this tactic was used to trick a UK energy firm into transferring €220,000 [11]. Deepfakes may also manipulate stock prices by depicting false corporate announcements. Such financial fraud can destabilize markets and entities.

Disinformation: State and non-state actors can weaponize deepfakes to spread fake news at unprecedented scale. For instance, fabricated remarks attributed to Qatar's Emir, planted through a hacked state news agency, inflamed a Gulf diplomatic crisis in 2017 [12]. Viral deepfakes could similarly incite violence, public panic, and civil unrest in volatile contexts.

While some benign uses like humour exist, the malicious purposes pose grave threats. Unregulated deepfakes allow impersonation, revenge porn, electoral interference and social manipulation [13]. Limiting principles of legality, proportionality and consent can help distinguish acceptable uses from unethical, harmful ones. Comprehensive legal safeguards are needed to mitigate risks and harms.

ETHICAL ISSUES AND CHALLENGES POSED BY DEEPFAKE

Deepfake technology poses novel ethical dilemmas and challenges that demand urgent attention, including, inter alia, the following:

Violations of privacy are a foremost concern. Using someone’s likeness without consent to create fake intimate imagery or videos infringes on their right to privacy under Article 21 of the Indian Constitution [14]. Deepfakes often non-consensually expose people’s private lives by depicting them in compromising situations.

Identity theft enabled by hyper-realistic deepfakes can allow fraudsters to impersonate unsuspecting individuals. Voice cloning to mimic financial executives has already been used for cybercrime [15]. Such breaches of privacy must be addressed to protect citizens.

The viral spread of deepfakes on social media can ruin reputations and lives within minutes. Even if the content is later proven false, the stigma and trauma remain. Laws on defamation, obscenity and the like offer only limited redress, highlighting the need for specialized regulation [16].

Electoral deepfakes threaten to become potent disinformation tools. By manipulating candidates’ speeches and appearances, they can covertly sway voter opinions and undermine free choice [17]. Unchecked use in campaigns could destabilize democracies.

Psychological harms like trauma, anxiety and suicidal thoughts are common in victims of deepfake pornography [18]. Young women are disproportionately targeted in such abuse aimed at silencing and subjugating them. Deepfakes facilitate new forms of gender-based violence that evade existing legal remedies.

By making it difficult to know what is true, deepfakes breed distrust toward media, experts and fact-checkers. This compounds the crisis of misinformation, potentially spurring violence, unrest and chaos [19]. A post-truth dystopia may emerge in which public discourse lacks shared facts.

LEGAL ISSUES AND CHALLENGES POSED BY DEEPFAKES IN INDIA

Deepfakes exploit gaps in India's current laws, which do not directly regulate synthetic media. Victims therefore have limited avenues for legal recourse despite the extensive harm deepfakes cause, raising serious legal issues and challenges including, inter alia, the following:

The right to privacy under Article 21 is violated by the non-consensual use of people's images for deepfakes [20]. In K.S. Puttaswamy v. Union of India, MANU/SC/0911/2017, the Supreme Court held that privacy protects informational autonomy, human dignity, and control over personal data [21]. But without a dedicated law, victims struggle to seek remedies for this infringement.

The Information Technology Act, 2000 partially addresses related issues. Section 66E penalizes capturing or publishing images of a person's private areas without consent [22]. But deepfakes may use public, non-intimate images that fall outside this narrow provision. Section 67 criminalizes publishing obscene electronic content [23]. However, the ambiguity of what counts as obscene makes enforcement inconsistent for pornographic deepfakes [24].

Some relief is available under the Indian Penal Code. Sections 499-501 penalize defamation, relevant where deepfakes malign reputations [25]. Section 505 covers statements conducing to public mischief [26], applicable to socially volatile deepfakes. But criminal law places a high burden of proof on victims, often discouraging complaints. Section 354C penalizes voyeurism [27], but its limited scope fails to encompass the range of privacy violations deepfakes enable.

Police advisories have highlighted threats of identity theft, financial fraud and communal disharmony arising from deepfakes [28]. But without designated offences, law enforcement lacks a clear legal basis for action. Courts, too, lack guidelines for assessing visual evidence whose authenticity can be plausibly denied. Deepfakes thus exacerbate the challenge of attributing criminal liability.

GLOBAL PERSPECTIVE AND LEGISLATIONS ON DEEPFAKES

Several countries have pioneered legislation specifically targeting deepfakes while India deliberates a robust framework. The proposed US Malicious Deep Fake Prohibition Act of 2018 would impose fines and imprisonment for deepfakes created to defraud, extort, harass or harm reputation [29]. The DEEPFAKES Accountability Act of 2019 (the Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act) would require manipulated media to be labelled [30] and would make platforms liable for harmful deepfakes they fail to remove [32]. States such as Virginia and California have explicitly banned non-consensual deepfake pornography [31].

The European Union does not yet have a specialized deepfake law but regulates deepfakes under its data protection and AI accountability frameworks. The General Data Protection Regulation mandates consent-based processing of personal data and provides remedies [33]. The EU's Ethics Guidelines for Trustworthy AI emphasize transparency and human oversight for systems capable of producing deepfakes [34].

China requires deepfake producers to declare and label their creations as artificial. However, the lack of penalties for violations makes this requirement largely ineffective [35]. Truthfulness and consent are principles China aims to embed in its deepfake governance.

Singapore's landmark Protection from Online Falsehoods and Manipulation Act 2019 empowers authorities to act against inauthentic content, including deepfake videos and images [36]. Offenders face fines and prison terms.

India should adopt the constructive aspects of global deepfake laws, such as consent requirements, mandatory labelling, platform accountability and proportionate penalties, while crafting indigenous regulation aligned with its socio-legal realities.

CONCLUSION AND SUGGESTIONS FOR REGULATING DEEPFAKES IN INDIA

Comprehensive regulation of deepfakes in India requires systematic reforms to control this powerful, hazardous technology. A robust, holistic deepfake law must be urgently enacted, addressing consent, privacy, redressal, electoral integrity and criminal liability. Mandatory labelling, clear limitations and penalties for malicious deepfakes are crucial. Law enforcement, courts and the media need guidelines for authenticating video evidence to avoid miscarriages of justice.

Social media platforms must be obligated to swiftly detect and remove harmful deepfakes. They could deploy blockchain-based digital fingerprinting to verify media authenticity. However, excessive liabilities may incentivize over-censorship, requiring safeguards.
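One way such fingerprinting can work is sketched below in Python, using only the standard library. The file names, function names and the idea of a separate fingerprint registry are illustrative assumptions, not any platform's actual system; a blockchain-based scheme would essentially store such digests in a tamper-evident, distributed ledger.

```python
# Illustrative sketch of media fingerprinting using a cryptographic hash (standard library only).
import hashlib

def fingerprint(media_path: str) -> str:
    """Return a SHA-256 digest that identifies the file's exact byte content."""
    digest = hashlib.sha256()
    with open(media_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # read in chunks to handle large videos
            digest.update(chunk)
    return digest.hexdigest()

def verify(media_path: str, registered_fingerprint: str) -> bool:
    """A copy whose digest no longer matches the registered value has been altered."""
    return fingerprint(media_path) == registered_fingerprint

# Hypothetical usage: a publisher registers a fingerprint at release time, and a
# platform later checks a circulating copy against that registry entry.
# original = fingerprint("press_briefing.mp4")     # hypothetical file name
# verify("viral_copy.mp4", original)               # False if even one byte differs
```

A limitation worth noting is that an exact-match digest treats any re-encoding or compression as a mismatch, so practical provenance systems typically pair it with perceptual hashing or signed metadata.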

Public awareness campaigns by government and media are vital to inoculate citizens against deepfake risks. Media literacy education, especially for the young, can create informed, critical digital citizens. Fact-checking networks and cybersecurity capabilities also need strengthening to counter disinformation threats.

An agile, evolving regulatory approach is vital for this complex issue involving law, ethics, and technology. With conscious forethought and stakeholder collaboration, India can lead the way in responsibly optimizing deepfakes’ benefits while mitigating their perils.

REFERENCES / ENDNOTES

[1] Chesney, Robert and Citron, Danielle, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security (July 14, 2018). 107 California Law Review 1753 (2019), U of Texas Law, Public Law Research Paper No. 692, Available at SSRN: https://ssrn.com/abstract=3213954

[2] Dasgupta, Binayak, ‘BJP’s deepfake videos trigger new worry over AI use in political campaigns’ Hindustan Times (New Delhi, 20 February 2020) https://www.hindustantimes.com/india-news/bjp-s-deepfake-videos-trigger-new-worry-over-ai-use-in-political-campaigns/story-6WPlFtMAOaepkwdybm8b1O.html accessed 15 September 2020.

[3] Porup, JM, How and why deepfake videos work — and what is at risk, CSO Online, (April 10, 2019), https://www.csoonline.com/article/3293002/deepfake-videos-how-and-why-they-work.html

[4] PTI, Be aware of fake news, morphed videos on social media: Delhi police advisory, The Hindu, (Jan 27, 2020), https://www.thehindu.com/news/cities/Delhi/be-aware-of-fake-news-morphed-videos-on-social-media-delhi-police-advisory/article30663130.ece

[5] Jain, Simran and Jha, Piyush, Deepfakes in India: Regulation and Privacy, South Asia @ London School of Economics, (May 21, 2020). https://blogs.lse.ac.uk/southasia/2020/05/21/deepfakes-in-india-regulation-and-privacy

[7] Chesney, Robert and Citron, Danielle, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security (July 14, 2018). 107 California Law Review 1753 (2019), U of Texas Law, Public Law Research Paper No. 692, Available at SSRN: https://ssrn.com/abstract=3213954

[8] McGlynn, Clare; Rackley, Erika; Houghton, Ruth, Beyond ‘Revenge Porn’: Image-Based Sexual Abuse and the Continuum of Harms, Feminist Legal Studies, 25(1), 25-46 (2017).

[9] Tyagi, Parth and Bhatnagar, Achyutam, Deepfakes and the Indian legal landscape, Inforrm Blog (July 3, 2020), https://inforrm.org/2020/07/03/deepfakes-and-the-indian-legal-landscape-parth-tyagi-and-achyutam-bhatnagar/

[10] Paris, Britt and Donovan, Joan, Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence, Data and Society Research Institute (2019). 

[11] Stupp, Catherine, Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case, The Wall Street Journal (Aug 30, 2019), https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402

[12] Calamur, Krishnadev, Did Russian Hackers Target Qatar?, The Atlantic (June 7, 2017), https://www.theatlantic.com/news/archive/2017/06/qatar-russian-hacker-fake-news/529359/

[13] Chesney, Robert and Citron, Danielle, 21st Century-Style Truth Decay: Deep Fakes and the Challenge for Privacy, Free Expression, and National Security, 78 MD. L. REV. 882 (2018).

[14] K.S. Puttaswamy v. Union of India, MANU/SC/0911/2017.

[15] Stupp, Catherine, Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case, The Wall Street Journal (Aug 30, 2019), https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402

[16] Jain, Piyush and Jha, Simran, Deepfakes in India: Regulation and Privacy, South Asia @ LSE (May 21, 2020), https://blogs.lse.ac.uk/southasia/2020/05/21/deepfakes-in-india-regulation-and-privacy/

[17] Guera, David and Delp, Edward, Deepfake Video Detection Using Recurrent Neural Networks, 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) (2018).

[18] Bajaj, Vikas, ‘Revenge Porn’ Crime Is Escalating Dramatically in India, The New York Times (Nov 9, 2019), https://www.nytimes.com/2019/11/09/world/asia/india-revenge-porn.html

[19] Chadwick, Paul, The Liar’s Dividend, and Other Challenges of Deep-Fake News, The Guardian (July 22, 2018), https://www.theguardian.com/commentisfree/2018/jul/22/deep-fake-news-donald-trump-vladimir-putin

[20] K.S. Puttaswamy v. Union of India, MANU/SC/0911/2017.

[21] Ibid.

[22] The Information Technology Act, 2000, Section 66E. 

[23] The Information Technology Act, 2000, Section 67.

[24] Narrain, Siddharth, The Information Technology Act: A User’s Guide to India’s Digital Legislation, Point of View, Issue XXXIX (2020).

[25] The Indian Penal Code, 1860, Sections 499-501.

[26] The Indian Penal Code, 1860, Section 505.

[27] The Indian Penal Code, 1860, Section 354C.

[28] The Hindu, Be aware of fake news, morphed videos on social media: Delhi Police advisory (Jan 27, 2020), https://www.thehindu.com/news/cities/Delhi/be-aware-of-fake-news-morphed-videos-on-social-media-delhi-police-advisory/article30663130.ece 

[29] Malicious Deep Fake Prohibition Act, 2018.

[30] DEEPFAKES Accountability Act, 2019.

[31] Narayanan, Nilesh, How US states are tackling deepfakes, Analytics India (July 20, 2020), https://analyticsindiamag.com/how-us-states-are-tackling-deepfakes/

[32] DEEPFAKES Accountability Act, 2019.

[33] Regulation (EU) 2016/679 (General Data Protection Regulation).

[34] European Commission, Ethics guidelines for trustworthy AI (Apr 8, 2019), https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

[35] Liu, Min and Zhang, Xijin, Deepfake Technology and Current Legal Status of It, AHFE 2022 (July 2022), https://doi.org/10.2991/ahfe.2022.194

[36] Protection from Online Falsehoods and Manipulation Act, 2019.


[1] Author is a Fourth Year Law Student pursuing B.A.LL.B. (Hons.) at University Five Year Law College, University of Rajasthan, Jaipur and is a Student Editor at LawFoyer International Journal of Doctrinal Legal Research [ISSN: 2583-7753].
