India's New Rules for Online Content

Updated: Aug 13

In the sprawling, cacophonous arena of Indian digital discourse, a battle of profound constitutional significance is raging. It is a conflict that pits the sovereign’s asserted need to maintain order against the foundational democratic right to freedom of expression. The latest and most consequential flashpoint in this ongoing struggle involves the social media giant X (formerly Twitter) and the Government of India.


In mid-2025, tensions, which had been simmering for years, boiled over when the government reportedly ordered X to block over 2,300 accounts, including those of prominent international news agencies. This move, together with the legal challenge X had already filed in March 2025, has thrust India’s increasingly muscular content-takedown regime into the global spotlight. The issue is no longer a niche topic for policy experts; it is a trending hashtag, a subject of fierce parliamentary debate, and a critical test of the resilience of Indian democracy in the digital age.


This editorial argues that the escalating use of opaque and procedurally deficient takedown mechanisms, particularly through a new centralized portal, represents a dangerous expansion of executive power that threatens to erode the fundamental right to free speech. X’s legal battle is, therefore, not merely a corporation’s dispute with a regulator, but a crucial test case that will determine the future of online liberty and dissent in the world’s largest democracy. To understand the stakes, one must first demystify the legal architecture governing online speech in India, examine the government’s recent manoeuvres that have triggered this confrontation, and weigh the delicate constitutional balance between public order and free expression. 

 

The legal bedrock for the Indian government's power to block online content is Section 69A of the Information Technology Act, 2000 (IT Act). Enacted with the stated intent of protecting national interests, this provision empowers the Central Government to issue directions for blocking public access to any information on a computer resource. The grounds for such blocking are specific and weighty: "in the interest of sovereignty and integrity of India, defence of India, security of the State, friendly relations with foreign States or public order or for preventing incitement to the commission of any cognizable offence." 

 

When the Supreme Court of India examined the constitutionality of this section in the landmark 2015 case, Shreya Singhal v. Union of India, it chose to uphold it precisely because it was not a tool for arbitrary censorship. The Court found that Section 69A was a "narrowly drawn provision with several safeguards." These safeguards are enshrined in the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009, commonly known as the "Blocking Rules." These rules mandate a formal, structured process: a designated officer must forward a request to a committee of senior inter-ministerial government officials for review. The rules require providing an opportunity for a pre-decisional hearing to the originator of the content and the intermediary platform. Crucially, any blocking order must be a reasoned, written order, and its contents are to be kept confidential to prevent the spread of the very information being blocked. This framework, at least on paper, was designed to be a scalpel, not a sledgehammer. 
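The sequence of safeguards described above can be sketched as a simple gating function. This is purely an illustration of the procedural logic of the 2009 Blocking Rules, not a statement of how any government system is implemented; the field names and grounds list are paraphrases of the statutory text, and `BlockingRequest` is a hypothetical model.

```python
from dataclasses import dataclass

@dataclass
class BlockingRequest:
    """Hypothetical model of a Section 69A blocking request."""
    content_id: str
    ground: str                               # statutory ground invoked
    forwarded_by_designated_officer: bool = False
    hearing_offered: bool = False             # pre-decisional hearing
    committee_approved: bool = False          # inter-ministerial review
    written_reasons: str = ""                 # reasoned, written order

# Grounds paraphrased from Section 69A of the IT Act, 2000.
VALID_GROUNDS = {
    "sovereignty and integrity of India",
    "defence of India",
    "security of the State",
    "friendly relations with foreign States",
    "public order",
    "preventing incitement to a cognizable offence",
}

def issue_blocking_order(req: BlockingRequest) -> str:
    """Each check mirrors one safeguard of the Blocking Rules."""
    if req.ground not in VALID_GROUNDS:
        return "rejected: ground outside Section 69A"
    if not req.forwarded_by_designated_officer:
        return "rejected: not routed via the designated officer"
    if not req.hearing_offered:
        return "rejected: no opportunity for a pre-decisional hearing"
    if not req.committee_approved:
        return "rejected: committee review not completed"
    if not req.written_reasons:
        return "rejected: order must be reasoned and in writing"
    # Contents of the order are kept confidential under the Rules.
    return f"blocking order issued for {req.content_id}"
```

The point of the sketch is that every branch must be satisfied before an order issues; the editorial's complaint about the newer regime, discussed below, is that most of these gates are simply absent.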

 

However, the landscape has shifted dramatically. Beginning in 2023 and accelerating through 2024, the government embarked on a strategy that, according to critics and X’s legal filings, effectively bypasses the stringent safeguards of Section 69A. The Ministry of Electronics and Information Technology (MeitY) reportedly issued directives that expanded takedown authority to thousands of federal and state officials. The lynchpin of this new strategy is a centralized platform launched by the Ministry of Home Affairs (MHA) in October 2024: the "Sahyog" portal. While its name means "collaboration," X has labelled it a "censorship portal." 

 

The "Sahyog" portal operationalizes a different section of the IT Act—Section 79(3)(b). Section 79 provides a "safe harbour" to intermediaries like X, protecting them from liability for user-generated content. However, Section 79(3)(b) states that this immunity is lost if, upon receiving "actual knowledge" from the "appropriate Government or its agency" that information is being used to commit an unlawful act, the intermediary fails to "expeditiously remove or disable access" to that material. X’s lawsuit argues that the government is now using this provision, via the Sahyog portal, as a direct censorship tool.


Instead of routing requests through the formal, review-based Section 69A process, thousands of officials from various agencies, including the Indian Cybercrime Coordination Centre (I4C), can now directly issue takedown notices through Sahyog, citing Section 79(3)(b). This transforms a provision meant to deal with clearly unlawful content into a bulk censorship mechanism, devoid of the procedural checks and balances—like a review committee, reasoned orders, or pre-decisional hearings—that made Section 69A constitutionally palatable to the Supreme Court. 

 

The practical implications of this new regime became starkly clear in July 2025. Reports emerged that the government had ordered X, under Section 69A, to block access within India to over 2,300 accounts. The controversy exploded when it was revealed that this list included the feeds of prominent international news agencies such as Reuters, Turkey’s TRT World, and China’s Global Times. The move sparked immediate public outcry and accusations of blatant press censorship. A confusing back-and-forth ensued, with the IT Ministry denying it had issued any "fresh orders" to block news agencies, while X maintained it was compelled to act on a legal demand that required immediate compliance. 

 

This incident was not isolated. The government’s takedown demands have reportedly targeted a wide spectrum of content. While these lists include genuinely harmful material related to misinformation, financial hoaxes, and child sexual abuse, they have also allegedly swept up content that is squarely in the realm of legitimate political speech. Examples cited by digital rights activists and in legal challenges include accounts critical of government policies, news reports on matters of public interest like stampedes at railway stations, satirical political cartoons mocking officials, and even fabricated images of VIPs that, while false, could be considered political commentary. The sheer breadth and opacity of these orders have fueled concerns that the primary objective is not just public safety but also the suppression of dissent and the management of the government's public image. 

 

In response to this escalating pressure, X filed a writ petition in the Karnataka High Court in March 2025. This was not its first legal tussle with the Indian government, but it was its most direct and fundamental challenge to the state's censorship powers. X is not merely contesting individual blocking orders; it is challenging the legality of the entire expanded censorship mechanism. The lawsuit directly targets the legal basis of the Sahyog portal and the 2023 directives that delegated takedown authority so broadly. 

 

X's core arguments are rooted in constitutional law. The company contends that the new mechanism, by circumventing the safeguards of Section 69A, is an illegal and unconstitutional infringement on the free speech rights of both the platform and its users. It alleges a pattern of abuse of authority designed to silence dissent and critical voices, noting that many takedown demands are issued without justification and demand immediate action, making meaningful review impossible. For platforms like X, the stakes are existential. Failure to comply with these orders, however arbitrary they may seem, could lead to the loss of their "safe harbour" immunity under Section 79. This would expose them to a flood of potential criminal and civil liability for the billions of posts shared by users on their platforms, making it untenable to operate in India. 

 

At the heart of this conflict lies a classic democratic dilemma: how to balance the state's duty to maintain public order with the citizens' right to freedom of expression. The Indian Constitution provides a sophisticated, if contentious, framework for this balance. Article 19(1)(a) guarantees to all citizens the fundamental right to freedom of speech and expression, which the Supreme Court has interpreted to include the freedom of the press and the right to receive and impart information. 

 

However, this right is not absolute. Article 19(2) allows the state to impose "reasonable restrictions" on this freedom in the interests of a number of objectives, including the "sovereignty and integrity of India," "security of the State," and, most frequently invoked in this context, "public order." The judiciary's role has been to interpret what constitutes a "reasonable restriction" and to define the scope of "public order." In the seminal case of Dr. Ram Manohar Lohia v. State of Bihar (1966), the Supreme Court made a crucial distinction between "law and order" and "public order." The Court held that all breaches of public order are breaches of law and order, but not all breaches of law and order are breaches of public order. For speech to be restricted in the interest of public order, it must have a tendency to cause serious disturbances of public tranquility, not just minor infractions. This established a "proximity test": the link between the speech and the anticipated public disorder must be direct and proximate, not "remote, fanciful or far-fetched." This test was further sharpened in S. Rangarajan v. P. Jagjivan Ram (1989), where the Court declared the connection must be as close as that of a "spark in a powder keg." 

 

The government's argument for its expansive takedown powers rests on the necessity of acting swiftly to combat a deluge of unlawful and harmful content. Officials argue that in a country as diverse and socially volatile as India, misinformation, hate speech, and incitement can rapidly escalate into real-world violence. They point to the need to tackle organized disinformation campaigns, cybercrime, and the proliferation of child sexual abuse material as justification for a more agile and broad-based response mechanism. The government defends its approach by noting that other major tech companies are largely compliant, suggesting that its actions are both necessary and reasonable to ensure platform accountability and maintain national security. 

 

Conversely, free speech advocates, civil society organizations, and platforms like X argue that the current system is dangerously overbroad and ripe for abuse. They contend that the lack of transparency, the absence of reasoned orders, and the unavailability of a meaningful appeal process within the Sahyog portal create a chilling effect on all online discourse. Journalists, activists, and ordinary citizens may self-censor out of fear that their critical speech could be flagged by an anonymous official, leading to their account being blocked without recourse. This executive overreach, where the government acts as the complainant, prosecutor, and judge, is seen as a direct assault on the principles of natural justice and due process. The core of their argument is that the government is interpreting "public order" so broadly that it encompasses any criticism or reporting that it finds inconvenient, thereby weaponizing the exception to swallow the rule of free expression. 

 

For social media platforms and their compliance officers, the current environment presents a minefield of legal and ethical challenges. The operational burden has increased exponentially. Content moderation and legal teams are inundated with thousands of takedown orders emanating from a multitude of new sources, each demanding immediate action. They face the difficult task of navigating opaque orders that often lack specific legal justification, forcing them to choose between complying with a potentially unlawful demand or risking the loss of legal immunity and facing potential criminal proceedings against their in-country employees. This situation necessitates the development of robust internal review processes, constant engagement with legal experts, and a high-risk calculus on when to comply and when to challenge. 
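The "high-risk calculus" described above can be made concrete with a toy triage rule for incoming notices. This is a hypothetical sketch of how a compliance queue might prioritize decisions, assuming only the distinctions drawn in this editorial (a reasoned Section 69A order versus an opaque Section 79(3)(b) demand); none of the thresholds or labels reflect any platform's actual policy.

```python
def triage_notice(statute: str, has_written_reasons: bool,
                  deadline_hours: int) -> str:
    """Toy decision rule for routing a takedown notice (illustrative)."""
    if statute == "69A":
        # A 69A order has, in principle, passed committee review and
        # carries written reasons; refusal risks direct penalties.
        return "escalate to legal; comply unless a stay is obtained"
    if statute == "79(3)(b)":
        if not has_written_reasons or deadline_hours < 24:
            # Opaque, immediate-compliance demands are the pattern X's
            # writ petition challenges: comply under protest, but
            # preserve the record for litigation.
            return "comply under protest; document for legal challenge"
        return "review content against the stated unlawful act"
    return "request a legal basis before acting"
```

Even this crude rule surfaces the bind the editorial describes: the 79(3)(b) branch forces action before any meaningful review, because the cost of refusal is the loss of safe-harbour immunity for the entire platform.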

 

Placing India's approach in a global context reveals a complex picture. Many democracies are grappling with how to regulate harmful online content. The European Union’s Digital Services Act (DSA), for example, also imposes stringent obligations on platforms to tackle illegal content. However, it differs from India’s current model in crucial ways. The DSA emphasizes transparency, requiring platforms to provide clear statements of reasons for content removal and establishing robust, independent out-of-court dispute settlement bodies. It places a heavy focus on systemic risk assessments for Very Large Online Platforms (VLOPs) and is underpinned by the oversight of both national regulators and the European Commission, with ultimate recourse to the Court of Justice of the European Union. Similarly, the UK's Online Safety Act, while controversial, involves an independent regulator, Ofcom, to enforce its provisions. In contrast, the US continues to debate the future of Section 230 of the Communications Decency Act, which provides broad immunity to platforms. While the global trend is towards greater platform accountability, India's model appears to be diverging from the democratic norm by concentrating power within the executive branch, with minimal transparency and judicial oversight. This approach risks being perceived not as a robust regulatory framework, but as a state-controlled censorship apparatus. 

 

The confrontation between the Indian government and X encapsulates a fundamental tension at the heart of modern governance: the undeniable need to curb the harms propagated by digital technologies versus the imperative to safeguard the democratic freedoms that these same technologies can foster. The government's duty to protect its citizens from online threats is legitimate and vital. However, the method it employs to achieve this end cannot be one that hollows out the very constitutional rights it is sworn to protect. The current takedown regime, characterized by its opacity and lack of due process, risks creating an environment where the state's power to censor is virtually limitless. 

 

X’s legal challenge is therefore a watershed moment. It is more than a corporate lawsuit; it is a critical defense of the procedural safeguards that prevent executive power from becoming absolute. The outcome of this case will set a lasting precedent for the future of online freedom, platform liability, and government oversight in India. It compels us, as citizens of a digital world, to think critically about the path forward. What is the appropriate scope of executive power in moderating our online conversations? What mechanisms for transparency and recourse must exist to protect users and platforms from arbitrary censorship? And ultimately, what long-term impact will our present choices have on the health of India’s democratic discourse and its ambition to be a leading digital economy? The future of free speech in India may well be written in the court's judgment, but its fate will be determined by the vigilance of its citizens in demanding a system that is both safe and free. 

