
India’s War on Deepfakes: How Far Should the State Go to Regulate AI?

On 22 October 2025, the Ministry of Electronics and Information Technology (MeitY) released a draft amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, a move that may well reshape how we understand “what’s real and what’s not” on the internet. The government seeks to regulate what it terms “synthetically generated information”: audio, video, images or other media created or modified by computer resources so as to appear authentic. The stated aim is to curb deepfakes, misinformation campaigns, impersonation fraud and other lies spun at the speed of algorithms. On the face of it, this is a welcome step: in an era when a convincing video of a public figure saying something they never said can go viral within hours, the stakes are high. But as with many technology-law intersections, the promise brings its own complications. If not carefully crafted, regulation meant to protect can end up constraining creativity, chilling free expression and undermining privacy. In short, the challenge is to balance safeguarding society with preserving freedoms. My thesis is this: while the draft amendment is timely and necessary, its design risks tilting the balance too far toward control, posing serious concerns for innovation, speech and privacy; unless sensibly amended, it may blunt the threat it targets while also blunting the freedoms it claims to protect.

To understand the significance, one must begin with context. The IT Rules, 2021 originally set the parameters for how intermediaries (social media platforms, web hosts, messaging services) must act to retain safe-harbour protection under the Information Technology Act, 2000. They laid down “due diligence” obligations: appoint grievance officers, respond to user complaints, act on court orders or governmental notices. What they did not demand was proactive screening of all content. That changed with the new draft. The amendment adds a new definition: “synthetically generated information” is defined as “information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true”. The draft further requires that any synthetic content displayed publicly must carry a visible or audible label, and that any intermediary enabling the creation or modification of such content must embed a permanent unique metadata or identifier, with the label covering at least ten per cent of the visual display (or the initial ten per cent of an audio clip’s duration) so as to alert viewers to its nature. Moreover, platforms classified as “Significant Social Media Intermediaries” (SSMIs) must obtain user declarations stating whether an upload is synthetic, and then deploy “reasonable and appropriate” technical tools to verify those declarations. The government frames all this as essential to building a “Safe, Trusted and Accountable Internet” for India’s 700-million-plus internet users.
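To make those mechanics concrete, here is a minimal sketch of how a platform might satisfy the two obligations described above for a still image: overlaying a visible notice covering roughly ten per cent of the frame, and embedding a unique identifier in the file’s metadata. The draft prescribes an outcome, not an implementation, so the banner placement, wording and choice of EXIF tag below are illustrative assumptions rather than anything the rules specify.

```python
# Illustrative sketch only: the draft rules describe an outcome (a prominent
# label covering at least 10% of the visual display, plus an embedded unique
# identifier), not a particular implementation. The banner style, wording and
# EXIF tag used here are assumptions for demonstration.
from PIL import Image, ImageDraw, ImageFont  # pip install Pillow
import uuid

def label_synthetic_image(src_path: str, dst_path: str) -> str:
    img = Image.open(src_path).convert("RGB")
    width, height = img.size

    # A full-width banner whose height is 10% of the image height covers
    # 10% of the total surface area.
    banner_height = max(1, int(height * 0.10))
    draw = ImageDraw.Draw(img)
    draw.rectangle([(0, height - banner_height), (width, height)], fill="black")
    draw.text(
        (10, height - banner_height + banner_height // 4),
        "AI-GENERATED / SYNTHETIC CONTENT",
        fill="white",
        font=ImageFont.load_default(),
    )

    # Embed a unique identifier in the EXIF ImageDescription tag (0x010E) as a
    # stand-in for the "permanent unique metadata or identifier".
    identifier = f"synthetic:{uuid.uuid4()}"
    exif = img.getexif()
    exif[0x010E] = identifier
    img.save(dst_path, exif=exif)
    return identifier
```

A video or audio pipeline would differ in detail, but the underlying idea is the same: a human-perceivable notice plus a machine-readable identifier that travels with the file.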

Consider the reasons. In recent years we have seen manipulated videos of public figures, non-consensual intimate content created via deepfake apps, AI-generated voice fraud (e.g., impersonating someone’s voice to extract money), and social media posts that are algorithmically generated and deployed to sway public opinion. All of this falls under the broad umbrella of “synthetic media”. The explanatory note to the draft points out: “Such content can be weaponised to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.” (MeitY) So yes — the danger is real. A regulatory response is justified. 

Yet the devil is in the details. The draft raises key questions: how do we ensure the labelling requirement is practical and not stifling? How do we ensure that creative content, parodies, harmless filters and everyday editing don’t end up being over-regulated? How do we protect user privacy when metadata embedding is mandated? And how do we preserve the underlying principle of free expression, especially when intermediaries are being asked to assume a proactive role in verification and labelling? 

Let us first examine the positives. For businesses, platforms and compliance officers, the draft offers clarity: a definition of synthetic content, an obligation to label, metadata traceability, and verification pathways. This gives legal teams something on which to base policies and procedures. For end-users, it promises a mark of authenticity: the visual or audible identifier should make it easier to spot deepfakes and other AI-generated content. For society at large, the hope is increased trust in what is seen or heard online, a key requirement as digital content becomes the dominant form of communication.

Moreover, the amendment sends an important signal: the government recognises the growing threat of unchecked synthetic media and is trying to bring it within the regulatory ambit rather than relying solely on post-facto takedowns. That represents a shift from reactive to preventive regulation. It is consistent with global trends: the EU Artificial Intelligence Act focuses on high-risk AI systems, and the US has various initiatives around AI accountability. India’s draft places the locus of responsibility on intermediaries and platforms. In a sense, India is catching up to the challenge.

But now for the concerns. One major issue is scope. The definition is broad: “any information … artificially or algorithmically created … in a manner that … reasonably appears to be authentic or true”. Critics point out that this could capture everything from cartoon filters to augmented-reality effects, beauty-edit apps, minor photo touch-ups, or even AI writing assistance. As one commentary put it: “If a content creator uses ChatGPT to refine his caption for an Instagram post, he will be required to declare the same before it gets uploaded.” That means the burden may fall on large swathes of otherwise harmless content. The labelling requirement, which demands an identifier taking up “no less than 10% of visual display”, may also be disruptive to the user experience. “What will a tiny corporate logo look like if it must take 10% of a LinkedIn profile picture?” asks MediaNama. At scale, one can imagine platforms becoming over-cautious, pre-emptively censoring or labelling huge amounts of user-generated content to avoid regulatory risk. That raises a chilling-effect concern: the very free expression and creativity the internet promotes could suffer.

Second, the draft changes intermediary liability in a significant way. Under the IT Act’s Section 79 regime, intermediaries were protected so long as they acted swiftly upon actual knowledge (a notification or court order). The well-known judgment in Shreya Singhal v Union of India (2015) held that intermediaries could not be required to constantly monitor or proactively remove content. The draft amendment, however, flips the script: it effectively expects intermediaries to verify, label and monitor synthetic content proactively. As one commentary explains: “The duty to ‘verify’ user declarations in Rule 4(1A) effectively means requiring constant scanning of user uploads with still-imperfect tools.” The implication is that failure to detect an unlabelled deepfake could mean loss of safe harbour, giving platforms a strong incentive to over-monitor and over-remove. This is a structural change that merits serious scrutiny.

Third, the metadata requirement poses privacy risks. Embedding a “permanent unique metadata or identifier” in synthetic content introduces traceability, which may be beneficial for accountability but becomes problematic when abused for surveillance. One analysis warns that “metadata may also consist of such items [that] may seem to be insignificant but may have severe privacy implications.” In the Indian context, where privacy law is still evolving and state authorities enjoy broad exemptions, mandating embedded traceability may erode anonymity and hamper platforms that host protected speech (whistle-blowers, protest voices, marginalised communities). Once metadata is embedded at scale, origins, usage patterns and user profiles can be traced. The requirement must therefore be assessed against the constitutional guarantees of privacy and free speech (Articles 19(1)(a) and 21).

Fourth, the implementation challenges are substantial. Detecting whether media is synthetic remains an open problem in computer science. False negatives (missed synthetic media) expose platforms to regulatory risk; false positives (flagging genuine media) expose users to suppression. The draft demands that SSMIs deploy “reasonable and appropriate technical measures” but gives little detail on standards, auditability, error rates or recourse for wrongful takedowns. The practical reality is that deepfakes and AI-generated content evolve fast; regulation must reflect that dynamic, and the draft’s static 10% label requirement may rapidly become outdated. Additionally, smaller platforms and startups may struggle with the cost and compliance burden, raising concerns that innovation will be stifled. The trade-off between the two error types can be illustrated with a toy calculation, shown below.
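Here is a small, self-contained sketch of that trade-off. A hypothetical detector emits a synthetic-likelihood score for each upload, and the threshold a platform chooses determines how many deepfakes slip through versus how much genuine content is wrongly flagged; the scores and labels below are invented purely for illustration.

```python
# Toy illustration of the detection trade-off described above. Scores and
# ground-truth labels are invented; real detectors and error rates will differ.

# (score emitted by a hypothetical detector, whether the item truly is synthetic)
samples = [
    (0.95, True), (0.80, True), (0.55, True), (0.30, True),    # synthetic uploads
    (0.70, False), (0.40, False), (0.20, False), (0.05, False)  # genuine uploads
]

def error_rates(threshold: float):
    false_neg = sum(1 for s, synth in samples if synth and s < threshold)
    false_pos = sum(1 for s, synth in samples if not synth and s >= threshold)
    total_synth = sum(1 for _, synth in samples if synth)
    total_real = len(samples) - total_synth
    return false_neg / total_synth, false_pos / total_real

for t in (0.25, 0.50, 0.75):
    fn, fp = error_rates(t)
    # A low threshold catches more deepfakes but flags more genuine content;
    # a high threshold does the reverse. Threshold choice alone cannot drive
    # both error rates to zero.
    print(f"threshold={t:.2f}  missed synthetic={fn:.0%}  genuine wrongly flagged={fp:.0%}")
```

Whatever the real numbers turn out to be, the structural point stands: no threshold eliminates both kinds of error at once, which is why standards, audits and appeal mechanisms matter.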

Of course, one must ask: what about artistic expression, satire and parody? The draft offers little clarity. A humorous video using synthetic voices or a short filter-driven TikTok clip might well fall under the labelling mandate. Do we really want a regime that forces every filter to carry a “synthetic” badge? Critics say this may blunt creativity, impair the user experience, and push platforms toward conservative content moderation. That is the trade-off: genuine creativity might get caught in compliance nets meant for deceptive deepfakes. A wise regulatory design would include safe harbours, exemptions for satire, and low-burden modes for benign synthetic content, but the draft, as it stands, is rigid.

Another viewpoint: proponents will argue that in an age of “liquid truth”, where video and images can no longer be trusted by default, the label is essential. A remark often attributed to former US Secretary of State Madeleine Albright, to the effect that we will live in a world where no photograph is reliable, captures the spirit: our default assumption of authenticity is under threat in the AI era. The requirement to label synthetic content may restore trust. For businesses and brand owners, it means fewer reputational hazards from manipulated media. For consumers, it means better clarity. For law enforcement, it increases the traceability of harmful content. All valid aims.

But to accomplish that, the law must be calibrated carefully so that it is not over-inclusive, does not stifle the interplay of culture and technology, and remains technically practicable. The label requirement illustrates this calibration challenge: a 10% visual frame may be too heavy for mobile-first formats or immersive AR/VR experiences, and a more flexible standard tied to content type, platform or risk level might serve better. The same goes for verification tools: a “reasonable and appropriate” threshold is vague and places the burden of proof on platforms. If detection tools fail (which they may), platforms could lose safe harbour for reasons beyond their control.

Another major consideration is privacy. Embedding traceable metadata into every synthetic upload may create a permanent digital fingerprint of users and their content-creation habits. In a society where anonymity or pseudonymity can matter greatly (whistle-blowers, marginalised voices, political dissidents), the risk is not only regulatory over-reach but the chilling of dissident or minority speech. In India, where the right to privacy was affirmed in K. S. Puttaswamy (Retd.) v. Union of India, policy must pass the twin tests of necessity and proportionality. The editorial commentary warns: “Under Section 17 of the Digital Personal Data Protection Act, 2023, the government can exempt state instrumentalities … hence even if metadata is considered to be personal data … entities and state authorities can still be exempted from compliance with privacy safeguards.” Forced metadata embedding could therefore run head-first into privacy rights unless carefully circumscribed.

Let me bring in a real-world illustration. Suppose a regional news platform allows users to submit short video clips. A user applies an AI filter to swap their face with that of a famous personality in jest (pure parody). Under the draft rules, the platform must obtain a declaration that the clip is synthetic and embed the identifier covering 10% of the frame. If it fails to do so, it risks regulatory exposure. The platform may therefore decide to pre-moderate and refuse uploads unless they carry the label. The result: fewer user submissions, less engagement, and potentially fewer grassroots voices. Meanwhile, a deepfake video of a politician making inflammatory remarks may still slip through if the detection filter is not triggered, causing reputational and social harm. The law risks chasing the wrong target.

What about comparative frameworks internationally? The EU AI Act reserves its heaviest obligations for “high-risk” AI systems (for example, biometric identification in public spaces) and imposes lighter transparency duties on synthetic content, rather than mandating prominent labels on every synthetic image or video. In the US, though regulation is patchy, many proposals emphasise transparency: users must know when content is AI-generated, but the obligations placed on platforms are lighter. India’s draft is relatively bold: it imposes obligations on creation tools and hosting intermediaries alike, mandating metadata embedding, verification and labelling. Indian regulators thus expect more from intermediaries than many of their global counterparts. That can be a strength, with India choosing the regulatory high road, but it is also a risk: burdening platforms that are just beginning to scale, stifling innovation, and creating compliance costs disproportionate to the harm addressed. The editorial lens here suggests that India might benefit from a tiered regime: lighter controls for benign synthetic content, stronger controls for high-harm use cases such as political deepfakes and impersonation fraud. That way regulation remains proportionate.

For business owners and legal advisors, the immediate takeaway is: act now. Even though the draft still invites comments and is not yet finalised, the structure is clear enough to begin compliance planning. Review internal workflows, system architecture, upload policies, metadata strategies, labelling methods, user declarations, verification tools, and audit trails. Educate your teams about what constitutes synthetic content. If your service lets users generate or modify content (for example, a photo-editing app, filter service, AI writing tool, voice-modulation app, or game engine with avatars), you are in scope. Work with vendors who provide AI-detection or watermarking capabilities. Assess how your UI can conspicuously display the identifier. And monitor parliamentary and regulatory developments, since rule-making around fast-moving technology evolves quickly. A simplified sketch of what an upload-intake record might look like follows.
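The sketch below is purely hypothetical: the draft does not prescribe any data model or decision logic, and the field names, detector stub and threshold are assumptions. It simply shows the kind of structured record (user declaration, detector output, decision, timestamp) that would let a platform demonstrate due diligence later, whatever verification standard the final rules adopt.

```python
# Hypothetical upload-intake sketch for compliance planning only. The draft
# rules do not prescribe this structure; field names, the detector stub and
# the threshold are assumptions made for illustration.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class UploadRecord:
    upload_id: str
    user_declared_synthetic: bool   # declaration collected at upload time
    detector_score: float           # output of whatever detection tool is used
    label_applied: bool
    decided_at: str                 # timestamp for the audit trail

def detector_score(content: bytes) -> float:
    """Placeholder for a third-party or in-house synthetic-media detector."""
    return 0.0  # plug in a real detection service here

def process_upload(upload_id: str, content: bytes,
                   user_declared_synthetic: bool,
                   threshold: float = 0.8) -> UploadRecord:
    score = detector_score(content)
    # Label if the user declares the content synthetic, or if the detector's
    # confidence crosses the platform's chosen threshold.
    label = user_declared_synthetic or score >= threshold
    return UploadRecord(
        upload_id=upload_id,
        user_declared_synthetic=user_declared_synthetic,
        detector_score=score,
        label_applied=label,
        decided_at=datetime.now(timezone.utc).isoformat(),
    )
```

Keeping the declaration and the detector output as separate fields also preserves the evidence needed to contest a wrongful-takedown complaint or a safe-harbour challenge later.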

Meanwhile, the agenda for law schools and legal researchers is rich: how do we treat synthetic media under Article 19(1)(a)? If the law forces labelling, does that constitute compelled speech (being made to say “this is synthetic”)? Could courts treat labelling requirements as an encroachment on expressive freedom? How does the safe-harbour architecture evolve? Are platforms passive conduits or active monitors? What is the right balance of innovation, trust and regulation in an AI-driven information ecosystem? These questions matter to our jurisprudential future.

In conclusion, the draft amendment to regulate synthetic AI content signals that India takes seriously the disruptive power of generative AI. The promise of a safer, more trusted digital society is laudable. But if regulation is heavy-handed, not every voice will remain free to speak, creativity may be shunted aside, and platforms may curtail uploads to avoid liability. The ten-per-cent label requirement, the broad definition of synthetic content, the verification and metadata mandates: each is well intended, but together they may tilt the regulatory balance away from freedom and toward control. My recommendation: refine the draft so that it is proportionate, technology-sensitive, innovation-friendly and rights-respecting. Carve out exemptions for parody and user-generated creativity, adopt a risk-based labelling regime (lighter for harmless filters, heavier for high-harm deepfakes), provide safe-harbour clarity for startups, and ensure metadata embedding is limited to what is necessary and subject to oversight. As one prominent digital rights scholar put it, “The duty to ‘verify’ user declarations … effectively means requiring constant scanning … a step beyond the logic of Shreya Singhal.” If we get this right, we may see a regulatory model that fosters trust without throttling speech or innovation. If we get it wrong, we may inadvertently build a digital society where authenticity matters less than compliance, and suspicion replaces spontaneity. The choice is ours.
