Deep Fake Technology in India: Legal Framework for Misinformation and Image Rights

POSTED ON OCTOBER 06, 2025 BY DATA SECURE

Introduction

Deepfakes are a form of synthetic media created by digitally manipulating videos, images, or audio to replace one person’s likeness with another’s. The technology relies on deep learning, a branch of artificial intelligence, to produce highly realistic and convincing content. Deepfakes belong to a broader field known as deep synthesis, which uses techniques such as deep learning and augmented reality to generate text, visuals, audio, and video for creating virtual or simulated scenes. While deep synthesis can be applied to innovative and creative purposes, it is often misused: deepfakes have been exploited to spread misinformation, fabricate fake news, impersonate individuals for financial fraud, and even create deceptive political content. Unlike traditional photo or video editing methods, deepfakes take advantage of advanced machine learning algorithms to generate content that appears strikingly authentic. This ability to overlay digital composites onto existing videos, photos, or audio makes deepfakes a powerful tool both for innovation and for misuse.

The scale of the problem is growing rapidly. The 2023 State of Deepfakes report recorded 95,820 deepfake videos online, a 550% rise since 2019, with one-third of deepfake tools enabling deepfake pornography. The Identity Fraud Report 2025 revealed that in 2024 one deepfake attempt occurred every five minutes, accounting for 40% of all biometric fraud. Between 2022 and 2023, deepfakes surged by 3,000%, peaking in June 2023.

Image Rights and Privacy Gaps:


Image rights and privacy gaps in India related to deepfake technology highlight significant challenges in legal protection and enforcement, especially as synthetic media increasingly infringes on individuals’ personal likeness and dignity. India currently lacks a standalone statutory framework explicitly recognizing or protecting image rights, often referred to as publicity or personality rights. These rights encompass an individual’s control over the commercial and personal use of their name, likeness, voice, and other identifiable traits. While Indian courts have interpreted privacy rights under Article 21 of the Constitution and tort principles to cover misuse of images and identity, this protection remains piecemeal and largely reactive rather than preventive.

Deepfakes complicate this further by enabling realistic but fabricated videos, photos, or audio clips that can portray individuals in contexts they never consented to, often for malicious purposes such as defamation, cyberbullying, or non-consensual sexual content. Despite provisions like the Information Technology Act’s Section 66E (violation of privacy), these laws are not specifically tailored to address AI-generated or manipulated content, leaving victims with limited legal recourse and slow enforcement processes. Moreover, intellectual property laws under the Copyright Act provide limited protection since a person’s image or likeness is not generally subject to copyright. The absence of a clear legal regime on consent, ownership, and commercialization of personal attributes means deepfake creators exploit these gaps, particularly on social media platforms where policing is difficult.

Enforcement challenges also arise from lack of technical expertise in cyber forensic investigations to trace and attribute deepfake content to perpetrators. Victims face hurdles in timely takedown of harmful content and obtaining meaningful compensation or justice. The transient and viral nature of deepfake content further complicates containment efforts.

Growing judicial awareness has led to some injunctions and guidelines emphasizing privacy and dignity protections, but comprehensive and proactive legislation is still awaited. Experts and policymakers increasingly advocate for a dedicated legal framework to explicitly codify image rights in the digital age, bring AI-generated content within privacy laws, and mandate stricter liability for intermediaries hosting such content. In India, image rights and privacy thus face substantial gaps amidst the advance of deepfake technology, creating an urgent need for legislative reform, enhanced enforcement capacity, and platform accountability to safeguard individuals’ digital identities and personal dignity against synthetic media threats.

Tackling Misinformation via Deepfakes:


Tackling misinformation via deepfakes in India involves a multi-layered legal and regulatory approach centred on existing laws, platform accountability, and public awareness. The Government of India recognizes deepfakes as a serious threat that can damage individual dignity, reputation, and privacy, and undermine public trust by spreading false or misleading content. The core legal framework addressing deepfake misinformation includes the Information Technology Act, 2000, which covers offences such as identity theft (Section 66C), impersonation (Section 66D), privacy violations (Section 66E), and transmission of obscene content (Sections 67, 67A). The Act also grants powers to issue blocking and removal orders to intermediaries to restrict unlawful digital content (Sections 69A, 79).

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, along with their amendments in 2022 and 2023, impose due diligence obligations on platforms and intermediaries. They are mandated to identify and remove misinformation, including deepfake content, promptly: sexual or explicit deepfakes must be taken down within 24 hours of a complaint, while other deepfake misinformation must be removed within 36 hours. Platforms must also inform users about the consequences of sharing unlawful content and report violations to law enforcement as required. Failure to comply risks losing legal safe harbour protections. Beyond legal mandates, government advisories instruct intermediaries to actively monitor and disable misleading deepfake content, ensure such content is labelled appropriately, and raise user awareness that AI-generated content may be unreliable or deceptive. These measures aim to enhance platform accountability while balancing user privacy concerns.
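The 24-hour and 36-hour windows above are concrete enough to sketch in code. The following is a minimal illustrative Python sketch of how a platform's compliance tooling might compute and check these deadlines; the function names and content-category labels are hypothetical, not part of the Rules, and only the two statutory windows come from the text above.

```python
from datetime import datetime, timedelta

# Takedown windows under the IT (Intermediary Guidelines and Digital Media
# Ethics Code) Rules, 2021, as amended: 24 hours for sexual or explicit
# deepfakes, 36 hours for other deepfake misinformation.
# The category keys here are illustrative labels, not statutory terms.
TAKEDOWN_WINDOWS = {
    "explicit_deepfake": timedelta(hours=24),
    "deepfake_misinformation": timedelta(hours=36),
}

def takedown_deadline(complaint_time: datetime, category: str) -> datetime:
    """Latest time by which the content must be removed after a complaint."""
    return complaint_time + TAKEDOWN_WINDOWS[category]

def is_compliant(complaint_time: datetime, removal_time: datetime,
                 category: str) -> bool:
    """True if removal happened within the statutory window."""
    return removal_time <= takedown_deadline(complaint_time, category)
```

For example, a complaint about an explicit deepfake received at 10:00 on 1 January must be acted on by 10:00 the next day, whereas other deepfake misinformation reported at the same time may be removed up to 22:00 the following day.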

India’s cybercrime agencies and coordination centres, like CERT-In and the Indian Cyber Crime Coordination Centre (I4C), alongside reporting portals and grievance appellate committees, form an ecosystem that facilitates reporting, investigation, and removal of harmful deepfake misinformation. Public awareness campaigns are also regularly conducted to educate citizens about cyber threats including synthetic media risks. Collectively, these legal instruments, regulatory frameworks, platform responsibilities, and institutional mechanisms enable India to take a robust stance against misinformation spread by deepfakes, although calls for more specific legislation and technical capacity building continue.

Legal Framework for Deep Fakes:


In India, while existing laws cover areas such as cybercrime, defamation, and data protection, there is currently no dedicated legal framework to specifically address the challenges posed by deepfakes. The increasing misuse of AI-generated misinformation, identity theft, and non-consensual deepfake content underscores the urgent need for clear, comprehensive, and specialized legal measures to regulate and mitigate their harmful impact.

At present, India does not have specific laws or regulations that directly govern the use of deepfake technology. However, certain provisions under existing laws may be applied to address some of its harmful uses. The Information Technology Act, 2000 includes limited safeguards: Section 67 prohibits the publication or transmission of obscene material in electronic form, while Section 67A deals with sexually explicit content. These provisions could potentially be invoked in the context of non-consensual deepfake pornography. Similarly, Section 356 of the Bharatiya Nyaya Sanhita (BNS), 2023 prescribes punishment for defamation, which may be relevant when deepfakes are created to harm an individual’s reputation.

In addition, the Digital Personal Data Protection (DPDP) Act, 2023 provides certain safeguards against the misuse of personal data. However, the law has not yet been implemented, and more importantly, it does not explicitly address the challenges posed by deepfake technology. This leaves significant legal and regulatory gaps in comprehensively tackling the misuse and risks associated with deepfakes. Given the serious risks deepfakes pose, including violations of privacy, reputational damage, threats to social harmony, risks to national security, and challenges to democratic processes, India urgently needs to develop a comprehensive and dedicated legal framework. Such a framework should specifically target the misuse of deepfake technology while balancing innovation with accountability and protection of citizens’ rights.

Way Forward and Recommendations:

In addressing the challenges posed by deepfake technology, India must take a comprehensive, multi-stakeholder approach combining legal reform, enhanced enforcement, judicial clarity, and public awareness. First, India must enact dedicated legislation explicitly regulating the creation, distribution, and malicious use of deepfakes. Such a law should define and criminalise harmful acts involving AI-generated synthetic media, establish clear consent requirements for likeness usage, and set stringent penalties. This will close the gaps left by the existing fragmented framework of the IT Act, BNS, and copyright laws. The Delhi High Court’s injunctions in cases like TV Today Network vs. YouTube Deepfake of Anjana Om Kashyap (2025) and Anil Kapoor vs. Simply Life (2024) reflect judicial support for protecting image rights and controlling digital impersonation but highlight the need for statutory backing.

Enforcement mechanisms should be strengthened by improving cyber forensic infrastructure and training dedicated law enforcement units specialising in AI-driven crimes. Collaborations between government agencies, technology platforms, and AI researchers can aid in refining deepfake detection and rapid content takedown processes. The establishment of reporting and redressal frameworks, as evidenced through initiatives by CERT-In and the Indian Cyber Crime Coordination Centre, is positive but requires expansion. Platform liability must be enhanced to ensure intermediaries adhere to due diligence in monitoring, labelling, and promptly removing harmful synthetic content. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, have started this process but need continuous updating in line with AI advancements and court rulings like the Delhi High Court’s active stance on takedowns.

Furthermore, judicial precedents such as protections for public figures like Amitabh Bachchan and landmark principles from cases like Subramanian Swamy vs. Union of India (2016), affirming defamation laws in digital contexts, provide a foundation for courts to interpret and expand image and privacy protections related to deepfakes. These rulings emphasise the constitutional right to privacy and dignity under Article 21, which courts can invoke against synthetic media abuse. Additionally, public awareness and digital literacy campaigns are crucial in educating users about the risks of deepfakes, promoting vigilance, and mitigating the spread of misinformation. Learning from global best practices, including regulatory models like the UK Online Safety Act and the EU AI Act, can help India craft balanced policies that protect citizens without stifling innovation.

Through robust legislation, improved enforcement, clear judicial guidelines, and informed citizenry, India can build a resilient ecosystem to safeguard its digital space from the harms of deepfake technology while embracing AI’s positive potential.

Conclusion

Deepfake technology poses profound challenges to India’s legal, social, and democratic fabric by enabling the creation and spread of highly realistic synthetic media that can infringe privacy, defame individuals, and propagate misinformation. While existing laws such as the Information Technology Act, Bharatiya Nyaya Sanhita, and copyright statutes provide partial protection, they are fragmented and insufficient to fully address the complexities introduced by AI-driven content manipulation. Judicial interventions, such as the Delhi High Court’s injunctions in cases like TV Today Network vs. YouTube Deepfake of Anjana Om Kashyap and Anil Kapoor’s case protecting personality rights, demonstrate the courts’ willingness to adapt existing legal concepts to novel digital harms. However, these precedents, though important, underscore the urgent need for a comprehensive legislative framework specific to deepfakes.

We at Data Secure (Data Privacy Automation Solution) DATA SECURE - Data Privacy Automation Solution can help you to understand EU GDPR and its ramifications and design a solution to meet compliance and the regulatory framework of EU GDPR and avoid potentially costly fines.

We can design and implement RoPA, DPIA and PIA assessments for meeting compliance and mitigating risks as per the requirement of legal and regulatory frameworks on privacy regulations across the globe especially conforming to GDPR, UK DPA 2018, CCPA, India Digital Personal Data Protection Act 2023. For more details, kindly visit DPO India – Your outsourced DPO Partner in 2025 (dpo-india.com).

For any demo/presentation of solutions on Data Privacy and Privacy Management as per EU GDPR, CCPA, CPRA or India DPDP Act 2023 and Secure Email transmission, kindly write to us at info@datasecure.ind.in or dpo@dpo-india.com.

For downloading the various Global Privacy Laws, kindly visit the Resources page of DPO India - Your Outsourced DPO Partner in 2025

We serve as a comprehensive resource on the Digital Personal Data Protection Act, 2023 (Digital Personal Data Protection Act 2023 & Draft DPDP Rules 2025), India's landmark legislation on digital personal data protection, providing access to the full text of the Act, the Draft DPDP Rules 2025, and detailed breakdowns of each chapter, covering topics such as data fiduciary obligations, rights of data principals, and the establishment of the Data Protection Board of India. For more details, kindly visit DPDP Act 2023 – Digital Personal Data Protection Act 2023 & Draft DPDP Rules 2025

We provide in-depth solutions and content on AI Risk Assessment and compliance, privacy regulations, and emerging industry trends. Our goal is to establish a credible platform that keeps businesses and professionals informed while also paving the way for future services in AI and privacy assessments. To know more, kindly visit – AI Nexus Your Trusted Partner in AI Risk Assessment and Privacy Compliance | AI-Nexus