India Deepfake Regulation: MeitY Proposes Draft IT Rules to Mandate Labelling for AI-Generated Deepfakes
- Chintan Shah

India’s First Step Toward AI Accountability
The Government of India has proposed the country’s first formal regulatory framework to address the fast-rising threat of deepfakes and synthetic media. On October 22, the Ministry of Electronics and Information Technology (MeitY) released draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, introducing new obligations for intermediaries and content creators dealing with AI-generated or “synthetically produced” content.
The proposed amendments mark a pivotal step in India’s effort to govern artificial intelligence (AI)-driven misinformation and impersonation, mandating that any AI-generated images, videos, or text carry a visible label and embedded metadata disclosing their synthetic nature and origin.
According to MeitY, these rules are designed to “ensure transparency in the use of generative technologies and prevent harm arising from misuse of AI tools for deception or misinformation.” The draft also seeks to establish a legal basis for penalizing individuals or platforms that alter or remove such disclosures, signaling a clear intent to place responsibility for content authenticity within the broader ecosystem of digital intermediaries.
The Rise of Deepfakes: Why Regulation Can’t Wait
In recent years, India—like the rest of the world—has witnessed a sharp increase in the misuse of AI-generated “deepfakes”: hyper-realistic synthetic media created using machine learning that can manipulate faces, voices, and contexts with alarming accuracy.
Deepfakes have been weaponized for:
Political misinformation, including fabricated speeches or doctored videos of public figures.
Gender-based violence, particularly non-consensual explicit imagery.
Financial and reputational fraud, where AI-generated impersonations mislead individuals and institutions.
While India’s existing laws under the Information Technology Act, 2000 and the Indian Penal Code offer general provisions against defamation, forgery, and obscenity, they do not directly address the creation or dissemination of synthetic content. The October 22 draft amendments aim to close that gap by establishing traceability and disclosure requirements specifically tailored to AI-generated content.
What the Draft Rules Propose
The proposed amendments introduce a new sub-rule within Rule 3(1)(b) and Rule 3(1)(d) of the IT Rules, which define the obligations of intermediaries in managing user-generated content.
Key highlights include:
Mandatory Labelling of AI Content:
All synthetically generated information—defined as content created wholly or partly by AI tools or algorithms—must bear a visible label or watermark stating it is “AI-generated.”
The label must remain visible across all forms of media distribution and must not be obscured or removed when reposted.
Embedded Metadata for Traceability:
AI-generated content must include embedded metadata containing details such as the platform or model of origin, the date of creation, and identifiers to trace the source.
Intermediaries must ensure metadata integrity is preserved and refrain from altering or stripping it.
Intermediary Obligations:
Platforms hosting such content—whether social media networks, video-sharing sites, or generative AI tools—must put in place technical measures to identify, tag, and retain such content.
They are required to act swiftly against users who distribute synthetic content without disclosure or who deliberately remove AI-origin labels.
Penalties for Misuse:
The draft provides the legal foundation for penalties under the IT Act against users or intermediaries who violate disclosure obligations, alter metadata, or generate content that deceives users into believing it is authentic.
Together, these provisions mark India’s first regulatory attempt to codify AI accountability, focusing on the “duty to disclose” as a central pillar of trustworthy AI governance.
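The draft does not prescribe a technical format for the label or the embedded metadata. As a purely illustrative sketch, one way an intermediary could implement the disclosure-plus-integrity requirement is a signed provenance manifest: the field names, schema, and signing key below are assumptions, not anything specified in the draft rules.

```python
import hashlib
import hmac
import json

# Hypothetical disclosure manifest for a piece of synthetic media.
# Field names ("label", "generator", "created") are illustrative only;
# the draft rules do not fix a schema.
def build_manifest(content: bytes, generator: str, created: str) -> dict:
    return {
        "label": "AI-generated",          # the visible-label requirement
        "generator": generator,           # platform/model of origin
        "created": created,               # date of creation
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def sign_manifest(manifest: dict, key: bytes) -> str:
    # Canonical JSON so the signature is stable across serializations.
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(manifest: dict, signature: str, key: bytes) -> bool:
    return hmac.compare_digest(sign_manifest(manifest, key), signature)

key = b"platform-signing-key"             # placeholder secret
media = b"synthetic image bytes"          # stand-in for real media content
m = build_manifest(media, "example-model-v1", "2025-10-22")
sig = sign_manifest(m, key)

assert verify(m, sig, key)                # intact metadata verifies
m["label"] = "authentic"                  # altering or stripping the label...
assert not verify(m, sig, key)            # ...breaks verification
```

Under this kind of scheme, the obligation to "preserve metadata integrity" becomes mechanically checkable: any edit to the manifest, including removal of the AI-origin label, invalidates the signature.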
“User Right to Know”: A New Principle in Digital Regulation
Perhaps the most striking feature of the draft is the introduction of a “user right to know”—the idea that individuals are entitled to know when they are viewing or interacting with AI-generated material.
This principle shifts the legal focus from content censorship to transparency and informed consent. Rather than prohibiting AI-generated material altogether, the government seeks to empower users with knowledge about the authenticity of the information they consume.
MeitY, in its accompanying note, stated:
“The objective is not to curb innovation in artificial intelligence but to ensure that AI applications operate within a framework of transparency, traceability, and accountability.”
This mirrors international trends, such as:
The EU’s Artificial Intelligence Act, which mandates disclosure when users interact with AI-generated media or chatbots.
The US Federal Communications Commission’s proposed guidelines requiring political ads using synthetic media to carry disclaimers.
By embedding this “right to know” principle into the IT Rules, India positions itself among the early adopters of disclosure-based AI regulation, especially in the developing world.
Balancing Innovation and Regulation
The government’s move comes at a critical juncture when India is seeking to balance AI innovation and governance. With the country rapidly emerging as a global AI hub—home to a growing ecosystem of generative AI startups, research labs, and enterprise applications—the concern is not just about misuse but also about over-regulation stifling growth.
To address these concerns, MeitY’s approach avoids outright prohibitions and instead adopts a risk-based, disclosure-driven framework. The draft rules stop short of imposing prior approval requirements for AI models or content generation tools. Instead, they emphasize:
Platform responsibility for detection and labelling.
Transparency to users rather than pre-emptive censorship.
Targeted penalties for willful deception or label manipulation.
Industry observers note that this model reflects a “light-touch regulation” philosophy similar to India’s Digital India Act (under consultation), which envisions a graded compliance framework rather than blanket restrictions.
Challenges Ahead: Implementation and Enforcement
Despite its intent, the draft raises practical questions about implementation and enforceability.
Technical Complexity:
Detecting AI-generated content and ensuring labels persist across formats (e.g., screenshots, re-uploads, or derivative edits) may require robust watermarking and cryptographic provenance-verification tools.
Smaller intermediaries may lack the resources to deploy such technologies effectively.
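One family of techniques platforms could draw on for recognizing re-uploads of known synthetic media is perceptual hashing, which tolerates minor re-encoding where exact checksums fail. The sketch below is a deliberately tiny average-hash over a grayscale grid; the 2x2 "images" and the thresholding scheme are illustrative assumptions, and real deployments use image libraries and far more robust hashes.

```python
# Toy "average hash": bit i is 1 if pixel i is brighter than the grid's mean.
# A simplified member of the perceptual-hashing family, shown only to
# illustrate why similar media can hash identically after re-encoding.

def average_hash(pixels: list[list[int]]) -> int:
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two hashes.
    return bin(a ^ b).count("1")

original  = [[10, 200], [220, 30]]   # known AI-generated image (toy 2x2)
reupload  = [[12, 198], [221, 28]]   # slightly re-encoded copy
unrelated = [[200, 10], [30, 220]]   # different content

h0, h1, h2 = map(average_hash, (original, reupload, unrelated))
assert hamming(h0, h1) == 0          # re-encoding survives the hash
assert hamming(h0, h2) > 0           # distinct content differs
```

Even so, determined adversaries can defeat simple hashes with crops or heavier edits, which is why the draft's persistence requirement is technically demanding, particularly for smaller intermediaries.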
Jurisdictional Ambiguity:
Many AI tools and content-hosting platforms operate from outside India. Ensuring compliance with local labelling requirements could require cross-border cooperation or mutual enforcement agreements.
Risk of Overreach:
Civil liberties groups warn that overbroad definitions of “synthetic content” could encompass satire, artistic expression, or harmless use of generative tools, potentially chilling legitimate speech.
Data Privacy and Metadata Concerns:
Embedding traceable metadata may inadvertently expose user identities or platform data, raising questions under India’s Digital Personal Data Protection Act, 2023.
A senior Delhi-based technology lawyer commented that “while the principle of transparency is sound, implementation will require both technical standards and procedural safeguards to ensure proportionality.”
Towards a Comprehensive AI Regulatory Framework
The deepfake draft amendments are part of a broader national strategy to govern AI through layered regulation. MeitY has already established several working groups focused on AI ethics, risk classification, and regulatory design.
Earlier this year, the ministry signaled that the upcoming Digital India Act—set to replace the two-decade-old IT Act—would incorporate specific provisions addressing algorithmic accountability, AI safety, and content provenance.
The current draft amendments, therefore, serve as a transitional step, embedding immediate guardrails within the existing IT Rules while paving the way for a more comprehensive AI-specific law.
Public Consultation and Next Steps
The draft is open for public consultation until mid-November, with MeitY inviting comments from industry stakeholders, civil society, and legal experts. After review, the final rules will likely be notified under the IT Act, 2000, giving them statutory force.
Stakeholders across the digital ecosystem—especially AI startups, platforms, and rights organizations—are expected to engage actively in the consultation process, given the far-reaching implications of the rules for innovation, compliance, and free expression.
If enacted, India would join a growing list of jurisdictions, including the EU, China, and Singapore, that have implemented or proposed regulations mandating AI-content labelling and provenance tracking.
A Defining Moment for India’s Digital Future
The release of the deepfake regulations underscores a larger shift in India’s technology governance philosophy—from reactive enforcement to proactive rulemaking in the AI era.
By anchoring AI governance within the existing IT Rules, the government has opted for regulatory pragmatism over legislative delay—addressing immediate risks while signaling a longer-term vision for a structured AI policy regime.
Ultimately, the success of this framework will depend on execution and trust:
Execution, in the sense of robust, technically sound systems for detecting and labelling AI content; and
Trust, by ensuring the rules protect users without curbing innovation or legitimate expression.
In the digital age, where truth itself is being algorithmically rewritten, India’s proposed framework offers a simple but powerful corrective — the right to know what is real, and what is not.


