Supreme Court Calls for Autonomous Regulator of Digital Media Oversight Amid Self-Regulation Failures
- Chintan Shah

The Supreme Court of India recently signaled a potential tectonic shift in the governance of online content, observing that the current system of self-regulation by digital media platforms is “ineffective” in curbing harmful and perverse content. The apex court expressed a clear inclination toward establishing a “neutral, independent and autonomous body” to oversee the vast and rapidly expanding landscape of online news and user-generated content.
The observations were made by a Bench comprising Chief Justice of India Surya Kant and Justice Joymalya Bagchi during the hearing of petitions filed by content creators challenging First Information Reports (FIRs) registered against them over allegedly obscene content. While the immediate issue before the Court concerned specific content creators, the proceedings quickly expanded to address the broader regulatory lacunae plaguing the Indian digital ecosystem.
Context of Concern: The Rapid Virality Problem
The Supreme Court’s apprehension stems from the inherent nature of social media and online platforms, where content—whether defamatory, obscene, or misleading—can achieve virality in seconds, causing irreversible damage before any remedial action can be taken.
During the hearing, the Solicitor General of India, Tushar Mehta, informed the Bench that the Union government was in the process of proposing new guidelines. However, the Chief Justice expressed shock at the existing lack of accountability for individual content creators, particularly those operating personal channels on platforms like YouTube.
The Bench observed:
“So I create my own channel, I am not accountable to anyone… somebody has to be accountable!”
The Solicitor General added that the issue extended beyond obscenity to include “perversity” in user-generated content (UGC), noting that "Anyone can create a YouTube channel, say anything under the garb of free speech, and the law is helpless. That cannot continue."
This exchange highlights the judiciary’s focus on accountability and the fundamental challenge posed by user-generated content, which often bypasses the traditional editorial checks applied to print or broadcast media.
The Judicial Diagnosis: Failure of Self-Regulation
The core of the Supreme Court's critique was aimed squarely at the prevailing model of self-regulation adopted by digital platforms, including Over-The-Top (OTT) streaming services and news portals, often housed under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules, 2021).
Counsel for industry bodies, such as the Indian Broadcasting and Digital Foundation (IBDF), argued that a grievance redressal mechanism and voluntary content categorisation were already in place. The Court countered by pointing to the recurring instances of offensive content that these mechanisms had failed to prevent.
The Bench observed that "self-styled" bodies would not be effective because they are susceptible to the influence of vested interests, namely those who profit from the exploitation of digital content, or, conversely, to undue state pressure.
The Chief Justice stated that the regulatory body must be:
Neutral: Not biased towards creators, platforms, or the government.
Independent and Autonomous: Free from external interference.
This recommendation for an independent, neutral entity signals judicial acknowledgment that the current co-regulatory framework, while intended to promote ethics, has not been robust enough to safeguard public morality and prevent widespread harm.
Seeking Effective and Preventive Mechanisms
The Court laid stress on the need for “preventive mechanisms” rather than relying solely on post-publication penalties. The judges noted that current legal recourse—such as filing criminal cases or seeking damages after the content has gone viral and caused harm—amounts to a “post-occurrence penalty” which is often too late to prevent reputational or social damage.
Justice Bagchi observed that the primary difficulty is the limited response time available to intermediaries. “A takedown takes at least 24 hours. By the time it is effectuated, the harm is already done. Social media is mercurial, goes across borders and is transmitted in seconds.”
In addressing how preventive measures could be implemented without infringing upon the cherished right to free speech, the Bench offered specific suggestions, including the potential use of technology:
Age Verification: The Chief Justice of India suggested the use of identity documents, such as Aadhaar numbers, to reliably verify the age of users accessing adult or sensitive content online. This would move beyond the current self-declaration warnings, which minors can simply click past.
Technological Sieving: The Court urged the government to explore how Artificial Intelligence (AI) could be leveraged to “curate content” and serve as an "effective mechanism" to scrutinise and sieve user-generated material prior to uploading, ensuring it does not violate decency or national interest standards.
The Bench was careful to clarify that the intent was not to establish a system of pre-censorship or to throttle legitimate dissent. The Court reassured stakeholders that it would not put its “seal of approval on something which can gag somebody” or stifle criticism of the government, which is a precious democratic right. The goal is solely to address the regulatory vacuum that allows pernicious content to spread unchecked.
The Proposed Government Framework
In response to the Court's repeated concerns, the Ministry of Information and Broadcasting (I&B) submitted a note detailing its plan to amend the Code of Ethics within the IT Rules, 2021. This proposed framework aims to formalise and strengthen the regulation of digital content by including new stipulations:
Defining Obscenity: Introducing a clearer definition for “obscene digital content” based on the criteria of lascivious character, prurient interest, and potential to “deprave or corrupt” the audience—concepts historically used in Indian censorship law (like the Aveek Sarkar judgment).
Content Rating: Classifying online curated content into age-appropriate rating categories (e.g., U, U/A 7+, U/A 13+, U/A 16+, A).
New Regulatory Areas: Incorporating guidelines specifically addressing modern challenges like AI-generated content and deepfakes.
Bar on Harmful Content: Explicitly banning content that offends decency, promotes communal attitudes, is defamatory, false, or deemed "anti-national."
The Supreme Court directed the government to place the draft guidelines in the public domain and invite suggestions and objections from the general public and stakeholders, in line with the principles of pre-legislative consultation. The Court intends to examine the issue further once the government completes this consultation process.
Conclusion: Strengthening Internet Governance
The Supreme Court's strong advocacy for an independent, autonomous regulator for online media marks a significant moment in India's regulatory journey in the digital age. It reflects the judiciary’s deep concern that while the digital revolution has empowered free speech, the absence of robust, neutral oversight has simultaneously created a space for unchecked abuse, obscenity, and misinformation that often targets the vulnerable.
By questioning the efficacy of self-regulation and demanding a mechanism that is both preventive and accountable, the Court is pushing the Executive to create a framework that can effectively balance the fundamental right to free expression under Article 19(1)(a) with the reasonable restrictions necessary to protect public decency, morality, and social order under Article 19(2). The final shape of the new regulatory body, whether statutory or quasi-judicial, will define the future of internet governance and accountability for content creators in India.