Supreme Court Declines PIL on Deepfake Regulation, Directs Petitioner to Approach Delhi High Court
- Chintan Shah
The Supreme Court of India on Monday dismissed a Public Interest Litigation (PIL) seeking urgent regulation of AI-generated deepfakes, advising the petitioner to take the matter to the Delhi High Court, which is already hearing related cases.
A Bench comprising Justices Surya Kant and N. Kotiswar Singh refused to entertain the petition filed by Advocate Narendra Kumar Goswami, who had urged the Court to constitute an expert committee under its supervision to draft a model law for AI regulation. The petitioner had raised concerns about the unregulated proliferation of deepfakes and the inaction of the authorities in curbing such content.
The Court remarked that it saw no need for parallel proceedings, given that the Delhi High Court has been examining the issue for the past few years and has issued interim orders. “We do not deem it necessary to entertain these parallel proceedings,” the Bench observed. “The petitioner is granted liberty to intervene in the pending matters before the Delhi High Court and offer assistance. We request the High Court to consider his suggestions,” the Court added.
During the hearing, Advocate Goswami cited the circulation of a deepfake video involving Colonel Sofiya Qureshi, the spokesperson for India’s 'Operation Sindoor', as an example of the growing threat. He also mentioned that the government had earlier promised to bring in legislation to regulate deepfakes.
In response, Justice Surya Kant commented, “You are a member of the Bar Council, aren't you? It seems like your concern is more about addressing the media than solving the issue.” However, he clarified that the Court was not dismissing the seriousness of the issue itself. Instead, it was suggesting that appropriate redressal may be sought in the High Court. “The High Court has already taken substantial steps. If you’re not satisfied with their direction or the government's stand, you can always come back to us,” he said.
Justice Kant also underscored the urgency of addressing cyber threats: “These cyber criminals are so quick—they could generate another deepfake before you leave this courtroom. Something serious must be done. Go to the High Court, maybe your suggestions will help.”
In his petition, Advocate Goswami had requested the Supreme Court to direct the Ministry of Electronics and Information Technology to frame rules under the Information Technology Act, 2000.
His proposed measures included:
- Mandatory Watermarking of AI Content: Requiring all AI-generated visuals, audio, and video to be tagged with metadata revealing origin, tools used, and creator information—similar to China’s Deep Synthesis Provisions.
- Takedown Mechanism: A 24-hour takedown process for deepfake content, modelled on Rule 3(1)(b)(vii) of the IT Rules, 2021, along with penalties under Section 45 of the IT Act for non-compliance.
- Algorithmic Audits: Quarterly audits of AI platforms by CERT-In approved auditors (CIAD-2024-0060).
- AI Regulatory Authority: A regulatory body under Section 88 of the IT Act, chaired by a retired Supreme Court judge, with representation from NITI Aayog, CERT-In, and academia.
The petitioner also sought action from the Election Commission of India (ECI), including:
- Setting up a Deepfake Monitoring Cell under Article 324 of the Constitution;
- Pre-certification of political advertisements using AI tools;
- Real-time takedown authority under Rule 16 of the Conduct of Elections Rules, 1961;
- Creating a public database of debunked deepfakes;
- Amending the Model Code of Conduct to prohibit undisclosed AI-generated political content during elections, with violations punishable under Section 171G of the IPC.
Further, the petition urged the Ministry of Home Affairs to:
- Develop a National Protocol on AI threats within 90 days, invoking Section 66F of the IT Act, which deals with cyberterrorism;
- Create a specialised cyber-forensics unit under the NIA to probe deepfakes originating from foreign entities;
- Enforce mandatory reporting of deepfake incidents by platforms to CERT-In under Section 70B(5) of the IT Act;
- Launch training modules for police forces in collaboration with NICFS to help detect and investigate deepfake-related offences under Sections 419 and 500 of the IPC.
The petitioner also proposed that the Ministry of Education initiate a National Deepfake Literacy Mission as part of the Samagra Shiksha Abhiyan. This would integrate AI and digital literacy education into the NCERT curriculum from classes VI to XII within a year, grounded in the right to education guaranteed by Article 21A.
Finally, Advocate Goswami sought judicial declarations that the inaction of authorities violated:
- The right to privacy and dignity under Article 21;
- The freedom of speech and the right to receive truthful information under Article 19(1)(a); and
- The right to equality under Article 14.
He further argued that the existing IT Act lacks sufficient safeguards against AI-driven threats, warranting judicial intervention akin to the Vishaka Guidelines issued by the Supreme Court in 1997 to address workplace harassment in the absence of specific legislation.
To address these wide-ranging concerns, the petitioner urged the Court to establish a five-member expert committee chaired by a retired Supreme Court judge and comprising senior officials from CERT-In, the ECI, NIA, and IIT Delhi, with a mandate to draft a model AI regulation law.
Case Title: Narendra Kumar Goswami v. Union of India & Ors., W.P.(C) No. 300/2025.