Regulation of Deepfake Bill Introduced to Criminalise Non-Consensual AI-Generated Content

During the Winter Session of the Parliament of India in December 2025, a Private Member’s Bill aimed at tackling the misuse of artificial intelligence quietly entered legislative records. The Regulation of Deepfake Bill, 2024, was introduced by Shrikant Shinde, drawing attention to the growing misuse of AI tools that can fabricate hyper-realistic images, videos, and audio of individuals without their consent.

The Bill seeks to criminalise the creation and circulation of non-consensual deepfake content, prescribe punishment of up to three years’ imprisonment, and empower authorities to mandate watermarking of AI-generated media. It also creates specific offences for extortion carried out through fabricated digital content.

In introducing the Bill, the sponsor highlighted the rapid rise of deepfake incidents and their impact on privacy, dignity, and personal security. The proposal reflects increasing concern within legislative circles that existing laws are ill-equipped to respond to the scale and speed of harm caused by generative AI technologies.

Understanding what the Regulation of Deepfake Bill covers

The Regulation of Deepfake Bill is narrowly framed to address a specific form of digital harm. It focuses on content generated or manipulated using artificial intelligence or machine learning techniques to falsely represent a real person.

Deepfakes, in this context, refer to synthetic or altered audio-visual material that convincingly depicts individuals saying or doing things they never did. Such content, when created or shared without consent, has increasingly been used for harassment, impersonation, blackmail, and fraud.

Rather than treating deepfakes as a general cyber offence, the Bill recognises them as a distinct category requiring targeted legal intervention. The Regulation of Deepfake Bill places individual consent at the centre of its framework, drawing a clear line between authorised creative or commercial uses of AI-generated content and deceptive representations imposed on individuals without their knowledge.

Criminal penalties for non-consensual deepfake creation and distribution

A central provision of the Regulation of Deepfake Bill is the introduction of criminal liability for the creation or dissemination of deepfakes without consent.

Under the proposed law, any person who creates, publishes, or circulates a deepfake image, video, or audio of another individual without explicit permission may face imprisonment of up to three years, along with financial penalties. The offence applies regardless of whether the content is shared for profit or out of personal motives.

By explicitly criminalising non-consensual deepfakes, the Bill moves beyond existing remedies under defamation law, obscenity provisions, or identity theft clauses. These existing legal tools often address the consequences of harmful content rather than the deceptive act of synthetic fabrication itself.

The Regulation of Deepfake Bill seeks to bridge this gap by recognising the act of creating a false digital likeness as a standalone harm, even before reputational or financial damage can be quantified.

Addressing extortion and coercion through fabricated content

Another significant feature of the Regulation of Deepfake Bill is its treatment of extortion involving deepfakes. The Bill creates a specific offence where fabricated digital content is used to threaten or coerce individuals.

This includes situations where victims are blackmailed with threats of public release of deepfake content, often of a sexual or compromising nature, unless money or other benefits are provided. Such incidents have increasingly been reported, with victims facing immediate psychological distress and social stigma.

By recognising deepfake-based extortion as a distinct offence, the Regulation of Deepfake Bill acknowledges the real-world patterns of abuse associated with synthetic media. The provision signals that such conduct will be treated as serious criminal wrongdoing rather than a peripheral cyber offence.

Watermarking AI-generated content as a regulatory safeguard

Beyond criminal sanctions, the Regulation of Deepfake Bill introduces a preventive regulatory measure by empowering authorities to require watermarking of AI-generated digital content.

Watermarking involves embedding identifiable markers within digital media to indicate that it has been generated or altered using artificial intelligence. Under the Bill, designated authorities may direct platforms or developers to implement such measures.
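The Bill does not prescribe any particular technical standard for watermarking, so the mechanics are left open. As a purely illustrative sketch, one common approach is metadata-based provenance labelling (the idea behind industry efforts such as C2PA): a disclosure that content is AI-generated is bound to the media by a cryptographic hash, so the label can later be verified or shown to have been stripped. All function and field names below are hypothetical, not drawn from the Bill:

```python
import hashlib
import json

def label_ai_content(media_bytes: bytes, generator: str) -> dict:
    """Build a hypothetical provenance manifest for AI-generated media.

    Binds the disclosure ("this is AI-generated") to the content via a
    SHA-256 hash, so the label can be checked against the media later.
    """
    return {
        "ai_generated": True,
        "generator": generator,  # tool claimed to have produced the media
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # ties label to content
    }

def verify_label(media_bytes: bytes, manifest: dict) -> bool:
    """Check that a manifest still matches the media it claims to describe."""
    return manifest.get("sha256") == hashlib.sha256(media_bytes).hexdigest()

# Example: label a stand-in media payload, then detect alteration.
media = b"...synthetic image bytes..."
manifest = label_ai_content(media, generator="example-model-v1")
print(json.dumps(manifest, indent=2))
print(verify_label(media, manifest))            # unchanged content verifies
print(verify_label(media + b"edit", manifest))  # altered content fails
```

A scheme like this only detects tampering after the fact; robust invisible watermarks embedded in the pixels or audio samples themselves are a separate, harder problem, and which approach (if any) regulators would mandate remains open.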

This provision reflects a broader shift toward structural safeguards in digital governance. Instead of relying solely on post-harm prosecution, the Regulation of Deepfake Bill envisages a system where AI-generated content can be more easily identified and traced.

Watermarking is increasingly discussed globally as a tool to combat misinformation, impersonation, and deceptive content. The Bill marks the first focused attempt to bring this concept into India's legislative discourse.

Why existing legal frameworks are seen as inadequate

The introduction of the Regulation of Deepfake Bill underscores growing concern that existing laws do not adequately address harms caused by generative AI.

Current provisions under information technology and criminal law were drafted at a time when synthetic media capable of near-perfect impersonation was not widely accessible. Deepfakes can now be produced quickly, at scale, and with minimal technical expertise.

Such content often spreads rapidly across digital platforms before takedown mechanisms can respond, leaving victims exposed during the most critical period. The Regulation of Deepfake Bill reflects the view that addressing these challenges requires legal provisions designed specifically for AI-generated deception.

Privacy and dignity in the age of artificial intelligence

At its core, the Regulation of Deepfake Bill is framed as a response to emerging privacy and dignity concerns in the digital age. Deepfakes represent a form of harm that goes beyond financial loss or data misuse.

For many victims, the damage lies in the loss of control over their own identity and public image. Fabricated content can undermine personal autonomy, cause emotional distress, and have long-lasting social consequences even after it is proven false.

By centring consent and accountability, the Regulation of Deepfake Bill places deepfakes within a broader conversation on digital rights and individual protection as AI technologies become more pervasive.

Significance of a Private Member’s Bill in this context

While Private Member’s Bills rarely become law, their introduction often serves as an important indicator of emerging policy priorities. The Regulation of Deepfake Bill brings the issue of synthetic media regulation into formal parliamentary debate.

The Bill reflects growing recognition that generative AI technologies require sector-specific legal responses rather than reliance on general cyber laws. It also signals that concerns around AI misuse are no longer confined to advisory discussions or platform guidelines.

The proposal has been reported and analysed by legal news platforms such as Bar and Bench, highlighting its relevance within India’s evolving legislative and policy landscape.

A step toward future AI governance discussions

The Regulation of Deepfake Bill arrives at a time when artificial intelligence is rapidly transforming digital communication, creativity, and information dissemination. While limited in scope, the Bill represents one of the earliest legislative attempts in India to grapple directly with AI-generated deception.

Its focus on consent, criminal accountability, and technical safeguards such as watermarking may influence future discussions on broader AI regulation and digital governance frameworks.

Even if the Bill does not progress beyond introduction, it places deepfake regulation firmly on the parliamentary agenda and reflects growing awareness of the risks posed by unchecked generative AI.

Conclusion

The Regulation of Deepfake Bill marks a notable moment in India’s legislative engagement with artificial intelligence. By proposing criminal penalties for non-consensual deepfakes, recognising extortion through fabricated content, and introducing watermarking as a preventive tool, the Bill acknowledges the complex harms associated with synthetic media.

As deepfake technology becomes increasingly accessible and sophisticated, the concerns underlying the Regulation of Deepfake Bill are likely to intensify. Whether through this proposal or future legislative initiatives, the regulation of deepfakes has clearly entered India’s policy conversation.

