Deepfake technology – AI-generated synthetic audio, images or video – has made producing realistic but fabricated sexual content worryingly easy. When minors are depicted, this intersects with child sexual abuse material (CSAM), raising novel legal issues. In India, the Information Technology Act 2000 (IT Act) and related laws already prohibit obscene and sexually explicit content, but they were framed before "deepfakes" existed. This article examines how Indian law currently addresses CSAM and deepfakes, recent developments (including court rulings), and enforcement hurdles (anonymity, jurisdiction, evidence), and proposes reforms.
Current Legal Regime: IT Act and CSAM
Under the IT Act, Section 67B expressly targets online child sexual abuse material: it punishes anyone who "publishes or transmits material in electronic form which depicts children engaged in sexually explicit act or conduct"[1]. A first offence under §67B carries up to 5 years' imprisonment and a ₹10 lakh fine (7 years on repeat conviction)[2]. Notably, "children" is defined as any person under 18[3]; thus even a self-produced video depicting one's own abuse with others is criminalized[4][5]. Section 67A similarly punishes sexually explicit content (not limited to minors), and §67 penalizes "obscene" material generally. As media reports have noted, "Section 67B of the IT Act provides stringent punishment for publishing, transmitting or viewing child sexual abuse material online"[5]. (The IT Act's exception for bona fide literary, scientific or artistic use does not protect pornographic content.)
Child pornography is also addressed in other laws. Under the Protection of Children from Sexual Offences Act (POCSO) 2012, Section 15 punishes storage or possession of child pornography. Anyone who "stores or possesses" such material and fails to delete or report it faces a fine (not less than ₹5,000, or ₹10,000 on a repeat offence)[6]. A higher penalty (up to 3 years' imprisonment) applies if the intent was to "transmit, propagate, display or distribute" the images[7]. POCSO thus mirrors §§67A/67B: §15(1) covers passive possession, while §15(2) covers possession with intent to distribute. (POCSO also covers contact sexual offences against minors, but here we focus on depictions.)
Additionally, the general IPC obscenity provisions apply: IPC §292 bans the sale or publication of obscene material, and §293 forbids distributing obscene items to any person under 20[8]. The IT Act's §67 is itself derived from IPC §292 and punishes "obscene" electronic material. However, these IPC sections require showing that the material tends to corrupt its likely audience and are rarely invoked in cybercrime cases; in practice, CSAM prosecutions rely on the IT Act and POCSO.
Key Case – Criminalizing Passive Viewing: In Just Rights for Children Alliance v. S. Harish (SC, Sept 2024), the Supreme Court ruled that even "mere viewing" or possessing child pornography is punishable under POCSO and the IT Act[9]. A two-judge Bench (CJI D.Y. Chandrachud and Justice J.B. Pardiwala) held that viewing CSAM via a link (even without intent to share it) amounts to "constructive possession" under §15 POCSO[9], reversing a Madras High Court judgment that had held otherwise. The Court stressed that each clause of §15 is an independent offence, and that Section 67B (IT Act) covers not just publication but also the "creation, possession, propagation and consumption" of child pornography[10]. In short, Indian law now clearly treats passive consumption of CSAM as illegal. (Justice Pardiwala added that the terminology should shift to "child sexual exploitation and abuse material," recommending that Parliament amend POCSO accordingly[11].)
Regulating Deepfakes: Current Law and Gaps
Currently no Indian statute expressly defines or bans “deepfakes.” Instead, existing laws are applied by analogy. The government maintains that the IT Act and related laws are technology-neutral and cover deepfakes. For example, the Ministry of Electronics & IT notes that the IT Act already penalizes identity theft (§66C), impersonation (§66D), privacy violation (§66E) and obscene content (§67, §67A)[12]. A recent parliamentary reply confirmed that the IT Act (and the amended IT Rules 2021) address deepfake harms by covering these offences[12][13].
Platform Liability and Rules: The IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 impose due-diligence obligations on internet intermediaries (social media, messaging apps, etc.). Intermediaries must not host unlawful content, must remove illegal material expeditiously on notice, and must deploy technology-based measures. Notably, the IT Rules explicitly classify as "restricted" any content that "misleads or deceives, including through deepfakes"[13], bringing AI-manipulated fake news and imagery squarely within the content platforms must act against. The Rules also require significant social media intermediaries (those with large user bases) to enable tracing of the originators of serious content and to use automated detection of unlawful material[14]. In practice, the government has issued advisories urging platforms to tag or label synthetic media and remove non-consensual deepfake content[15][13]. For example, Union ministers met platforms in late 2023, reminding them that "safe harbour" cannot protect them if they fail to remove malicious deepfakes[16].
Data Protection Law: The Digital Personal Data Protection Act 2023 (DPDP Act) adds a privacy dimension. It mandates consent for processing personal data, which includes a person's likeness; deepfakes built from someone's face or voice without consent could therefore violate DPDP provisions[17], and violations can incur hefty fines. This offers a privacy-based remedy, though the DPDP Act carries general exceptions for law enforcement.
Key Legal Gaps: Despite these provisions, experts note that "Indian cyber laws…offer arguably partial cover" for deepfakes, but no law explicitly mentions them[18]. Courts and police must shoehorn analogies (hence the variety of sections invoked in the Mandanna case, below). The Supreme Court has cautioned that proactive reform is needed, and the lack of dedicated rules on AI poses challenges[19]. In short, while offences such as cheating by personation (IT Act §66D) or defamation may apply to harmful deepfake videos, the law is silent on the technology itself.
Recent Cases and Incidents in India
India has seen a few high-profile deepfake incidents (though so far mostly involving adults). In January 2024, Delhi Police arrested a man for creating a viral deepfake video of actress Rashmika Mandanna[20]. He had morphed a British-Indian influencer's video to make it appear that Mandanna was entering a lift in a revealing outfit. The FIR invoked the IPC forgery provisions (§§465, 469) and IT Act provisions. In the wake of the case, the Centre issued advisories to platforms, "stressing the legal provisions covering deepfakes and the potential penalties" for creating or sharing them. IT Minister Ashwini Vaishnaw told NDTV that notices had been sent to all major social media firms to "identify deepfakes" and remove them, warning that safe-harbour protection would be lost if companies did not act diligently. This suggests the government treats deepfake dissemination as punishable under existing cyber laws.
On child sexual abuse material, beyond the Supreme Court's landmark ruling above, India has seen disturbing online abuse trends. NCRB data (Crime in India 2020) show cybercrime cases against children jumping from 305 in 2019 to 1,102 in 2020, including grooming, CSAM and other exploitation. In some reported cases, predators have used video-chat apps to coerce children into producing nude images. Initiatives such as Project Arachnid and an MoU with the US National Center for Missing & Exploited Children (NCMEC) help India block access to known CSAM websites. Yet there is no published data on AI-generated CSAM specifically, likely because it remains new and underreported.
Enforcement Challenges
Anonymity and Attribution: The very nature of the Internet makes tracing deepfake creators and CSAM distributors difficult. Perpetrators often operate anonymously behind VPNs, fake accounts or the dark web. By the time police trace a viral video to its source, it may have been reshared repeatedly and be hosted on foreign servers. Jurisdictional issues loom large: the IT Act has extraterritorial reach for offences involving a computer or network in India (§75), but if the culprit is overseas on a foreign platform, Indian agencies must rely on mutual legal assistance treaties. As one analysis notes, "the original creator … is already lost in the abyss as the video has gone through countless repostings", often beyond India's reach[24]. Blocking and takedown powers exist on paper (IT Act §§69A, 79), but enforcing them across borders requires international cooperation, which can be slow.
Platform Safe-Harbour: Under IT Act §79, intermediaries enjoy immunity from liability for user content if they observe due diligence. This immunity came under scrutiny in the Mandanna case: Minister Vaishnaw pointed out that safe harbour does not apply if a platform fails to act against deepfakes. Still, proving a platform's "knowledge" of offending content before it spreads is hard. The law generally requires takedown only upon government or court orders or user complaints, not proactive filtering (except for CSAM, which the Intermediary Guidelines urge platforms to identify proactively). In practice, enforcement relies on tip-offs (e.g. from NCMEC) and intermediaries' own content-moderation policies.
Evidentiary Issues: Proving a deepfake or CSAM offence in court is complex. Investigators need forensic expertise to establish that an image or video was digitally fabricated or stored on a suspect's device, and to link that material to the accused. Courts have recognized the need for expert evidence: in Harish, forensic analysis of the accused's phone established that CSAM videos were stored on it. But as deepfakes grow more sophisticated, even experts can be fooled. There is also a risk that an accused claims a genuine video was faked to escape liability, or conversely that authentic material is dismissed as a deepfake. Ensuring the integrity of digital evidence (chain of custody, logs, metadata) is therefore crucial. While the IT Act provides for government Examiners of Electronic Evidence (§79A) and the Bharatiya Sakshya Adhiniyam 2023 (the new Evidence Act) carries forward the rules on electronic records, India's cyber-forensic capacity is still growing. As one analysis notes, "digital evidence integrity is crucial – ensuring logs, device data, etc., are collected and preserved properly to tie a suspect to the creation or dissemination"[28]. India needs to train more forensic experts and expand its cyber-forensic labs.
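To make the evidence-integrity point concrete, the following is a minimal sketch (in Python, with hypothetical file names and a deliberately simplified workflow) of hash-based custody logging: the evidence file is hashed at seizure, each handling event is recorded, and the hash is recomputed at trial to show the exhibit is unchanged. Real forensic practice adds write-blockers, standardized imaging formats and signed examiner reports, none of which are modelled here.

```python
import hashlib
import json
import datetime

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def record_custody_entry(evidence_path: str, handler: str,
                         log_path: str = "custody_log.json") -> dict:
    """Append a timestamped hash record so later tampering is detectable."""
    entry = {
        "file": evidence_path,
        "sha256": sha256_of_file(evidence_path),
        "handler": handler,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    try:
        with open(log_path) as f:
            log = json.load(f)
    except FileNotFoundError:
        log = []
    log.append(entry)
    with open(log_path, "w") as f:
        json.dump(log, f, indent=2)
    return entry

# At seizure: hash the device image and log it; at trial: re-hash and compare
# against the logged value ("device_image.dd" is a hypothetical exhibit name).
# entry = record_custody_entry("device_image.dd", handler="IO, Cyber Cell")
```

If the digest recomputed in court matches the one logged at seizure, the exhibit demonstrably has not been altered in the interim, which is precisely the chain-of-custody guarantee courts look for.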
Reforming Indian Law: Recommendations
Given these gaps, Indian cyber law needs targeted reform to address CSAM and deepfakes involving minors. Specific proposals include:
1. Explicitly define and ban synthetic CSAM: Amend Section 67B (or insert a new provision) to clearly prohibit sexual depictions of minors—whether real or computer-generated—placing deepfake child pornography on par with conventional CSAM. Adopt Justice Pardiwala’s suggestion from Harish to replace “child pornography” with “child sexual abuse material” (CSAM) to close loopholes where offenders claim “no real child was involved.” Similarly, amend Section 67 or add a new clause to outlaw non-consensual deepfake pornography of any person.
2. Create offences for non-consensual AI pornography: Introduce a specific offence for producing or distributing intimate deepfake content without consent, especially involving minors. Follow global examples: Australia’s Criminal Code criminalises sexual material “materially altered or created by technology” without consent; the UK’s Online Safety Act covers computer-generated sexual imagery.
3. Strengthen intermediary duties and liability: Amend IT Act §79 to mandate proactive detection and blocking of CSAM and deepfake abuse, with loss of safe-harbour protection for non-compliance. Require advanced hash-matching and AI tools (e.g., PhotoDNA) for rapid removal, ideally within hours, rather than relying solely on notice-and-takedown (a minimal sketch of hash-based screening appears after this list). Impose fines for non-removal and mandate disclosure to users of AI-generated media, in line with DPDP principles.
4. Enhance detection and tracking: Establish a national CSAM hash database, expanding participation in initiatives like Project Arachnid to cover deepfakes. Mandate upload checks, invest in watermarking and metadata tagging for AI content, and require labelling of AI-generated sexual media (a metadata-labelling sketch also appears after this list). Foster intelligence-sharing with NCMEC and global tech firms.
5. Broaden related law definitions: Update POCSO §15 to explicitly cover material created by any method, including AI. Add deepfake-specific references to BNS provisions, e.g., Section 77 (voyeurism and capturing a private act). Clarify the applicability of defamation (§356 BNS) and public-mischief provisions (§353 BNS) to deepfake abuse.
6. Improve cross-border enforcement: Amend the IT Act/rules for faster takedowns of foreign-hosted CSAM and deepfakes, expand blocking powers, and strengthen MLAT frameworks. Establish I4C liaison cells with Interpol and foreign cybercrime agencies.
7. Build capacity and awareness: Provide specialised AI and cyber evidence training for police, prosecutors, and judges. Expand I4C’s digital forensics capabilities (§79A IT Act). Promote expert witness use in court. Launch school campaigns on AI-manipulated media risks and CSAM reporting.
8. Introduce civil remedies and victim support: Enable injunctions and damages under personal data or defamation law for deepfake abuse. Amend DPDP to cover misuse of minors’ likenesses. Extend POCSO’s mandatory reporting to include CSAM link-sharing. Strengthen helplines and online complaint portals for deepfake exploitation cases.
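On point 3, the sketch below illustrates the screening flow only, under stated assumptions: the local hash list file is hypothetical, standing in for a vetted signature database such as NCMEC's hash sets, and plain SHA-256 matching is used because PhotoDNA itself is proprietary.

```python
import hashlib
from pathlib import Path

# Hypothetical local hash list; real deployments query curated databases
# of known-CSAM signatures, not a text file shipped with the service.
HASH_FILE = Path("known_csam_sha256.txt")
KNOWN_HASHES = set(HASH_FILE.read_text().split()) if HASH_FILE.exists() else set()

def screen_upload(data: bytes) -> bool:
    """Return True if the upload exactly matches a known-bad SHA-256 hash.

    Exact cryptographic matching misses re-encoded, resized or cropped
    copies; production systems use perceptual hashes (e.g. PhotoDNA, PDQ)
    that survive such transformations.
    """
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

# Example hook at upload time (block_and_report is a hypothetical handler):
# if screen_upload(file_bytes):
#     block_and_report(file_bytes)
```

On point 4, disclosure labelling of AI-generated media can start with a tag embedded in image metadata. This sketch uses Pillow's PNG text chunks; the field names are hypothetical, and such tags are trivially strippable, which is why robust proposals pair them with invisible watermarks or C2PA-style signed provenance manifests.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(src: str, dst: str, generator: str = "unknown") -> None:
    """Embed a disclosure tag in PNG metadata marking the image as AI-generated."""
    img = Image.open(src)
    meta = PngInfo()
    meta.add_text("ai-generated", "true")   # hypothetical field name
    meta.add_text("generator", generator)
    img.save(dst, pnginfo=meta)             # dst should be a .png path

def is_labelled_synthetic(path: str) -> bool:
    """Check for the disclosure tag in a PNG's text chunks."""
    return Image.open(path).text.get("ai-generated") == "true"
```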
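A platform subject to a labelling mandate could run such a check at upload time and either display a "synthetic media" notice or reject unlabelled AI-generated sexual content; the metadata approach is cheap but advisory, so any statutory scheme would need to anticipate stripping and re-encoding.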
These reforms should be grounded in existing statutes and tailored to India's context. For instance, Parliament might expand Section 69A of the IT Act (the blocking power) to swiftly ban sites hosting particularly egregious CSAM or coordinated deepfake-pornography rings. It could also clarify that encryption cannot shield CSAM distributors (akin to an "anti-encryption" clause for child sexual abuse material). Any amendments must safeguard legitimate speech by focusing narrowly on non-consensual or exploitative content.
In sum, while India's laws can prosecute many deepfake abuses under current provisions, the intersection of AI and CSAM demands explicit statutory clarity. As other countries update their laws (Australia's recent CCA Bill, the UK's Online Safety Act), India should follow suit. Thoughtful legal amendments – supported by technological measures and international cooperation – are essential to protect children from the evolving threats of deepfake pornography and cyber exploitation.
Author: Susmit Mukherjee is a fourth-year B.A. LL.B. (Hons.) student at NALSAR University of Law, Hyderabad, Telangana.