India’s Tight New Deepfake Rules: Social Platforms Now Must Act Within Hours

India has moved to sharply limit the spread of harmful AI-generated content, mandating that major social platforms remove deepfakes and other sensitive synthetic media within extremely tight timeframes, in some cases as short as two hours.

The Ministry of Electronics and Information Technology (MeitY) on February 10 notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which take effect on February 20, 2026. The updated regulations introduce a two-hour takedown requirement for certain categories of deepfakes and non-consensual imagery, and a three-hour window for other unlawful material once platforms receive an official order or complaint.

The move is designed to curb the rapid spread of deceptive AI-generated content, from fake celebrity videos to manipulated political messaging, that can circulate widely before current removal mechanisms take effect. Platforms including Meta’s Facebook and Instagram, Google’s YouTube, X (formerly Twitter), Telegram and others will now face tighter compliance deadlines, with significant legal consequences for delays.

Hard Deadlines Aim to Curb Rapid Harm

Under the amended rules, platforms must:

  • Take down non-consensual intimate imagery and deepfakes within two hours of receiving a valid complaint.
  • Remove other unlawful synthetic or AI-generated content within three hours of being notified by a court or competent authority.
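
As an illustrative sketch only, the two compliance clocks above amount to simple deadline arithmetic from the moment a complaint or order is received; the category labels below are invented for illustration, not statutory terms.

```python
from datetime import datetime, timedelta, timezone

# Removal windows as described in the amended rules (illustrative mapping;
# the dictionary keys are hypothetical labels, not legal categories).
REMOVAL_WINDOWS = {
    "deepfake_or_ncii": timedelta(hours=2),  # valid user complaint
    "other_unlawful": timedelta(hours=3),    # court / competent-authority order
}

def removal_deadline(received_at: datetime, category: str) -> datetime:
    """Return the latest time by which the content must be taken down."""
    return received_at + REMOVAL_WINDOWS[category]

# Example: a deepfake complaint logged at 09:00 IST must be actioned by 11:00 IST.
ist = timezone(timedelta(hours=5, minutes=30))
complaint = datetime(2026, 2, 21, 9, 0, tzinfo=ist)
print(removal_deadline(complaint, "deepfake_or_ncii"))  # 2026-02-21 11:00:00+05:30
```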

These represent dramatic reductions from the previous timelines: deepfake and sensitive content once had a 24-hour removal window, while general takedown orders could take up to 36 hours.

The rules also require platforms to prominently label AI-generated and synthetically modified media, embed metadata for traceability, and deploy automated tools to detect and prevent unlawful content from being shared in the first place.
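
The rules do not prescribe a specific labeling or metadata format. As a minimal sketch, assuming PNG stills and the Pillow imaging library, a provenance flag could be embedded in an image’s text chunks; the key names and file paths here are hypothetical, chosen purely for illustration.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(src_path: str, dst_path: str) -> None:
    """Embed a provenance flag in a PNG's metadata (tEXt chunks)."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("synthetic", "true")              # hypothetical key name
    meta.add_text("generator", "example-model-v1")  # illustrative value only
    img.save(dst_path, pnginfo=meta)

label_as_synthetic("clip_frame.png", "clip_frame_labeled.png")
```

In practice, platforms would more likely adopt a standardized provenance scheme such as C2PA content credentials, which is designed to carry tamper-evident provenance across formats rather than relying on ad-hoc metadata keys.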

Definitions, Compliance and Accountability

“Synthetically generated information” now has a statutory definition, covering audio, visual or audiovisual material created or altered via AI to appear authentic. Routine edits or accessibility enhancements are excluded, but deepfakes and manipulated media that misrepresent real individuals or facts fall squarely within the new regime.

Platforms must also require users to disclose whether uploaded content is AI-generated. Where no disclosure is received, the platform is obliged either to label the material as synthetic or, in the case of deepfakes, remove it under the accelerated timelines.
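
As one reading of that obligation, the decision flow could be sketched as below; the function, flag names, and the assumption that platforms run their own synthetic-media detection are all hypothetical, not drawn from the rules’ text.

```python
from enum import Enum, auto

class Action(Enum):
    NONE = auto()
    LABEL_AS_SYNTHETIC = auto()
    REMOVE_ACCELERATED = auto()

def moderation_action(user_disclosed_ai: bool | None,
                      detected_synthetic: bool,
                      is_deepfake: bool) -> Action:
    """Hypothetical decision flow mirroring the disclosure rule described above."""
    if is_deepfake:
        # Deepfakes fall under the accelerated two-hour takedown clock.
        return Action.REMOVE_ACCELERATED
    if user_disclosed_ai or (user_disclosed_ai is None and detected_synthetic):
        # Disclosed as AI, or undisclosed but flagged by detection: label prominently.
        return Action.LABEL_AS_SYNTHETIC
    return Action.NONE
```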

If companies fail to act within the prescribed windows, they risk losing safe-harbor protections under Indian law, exposing them to greater legal liability, akin to that of traditional publishers rather than intermediary hosts.

Industry and Policy Reaction

Experts say the rapid compliance requirement marks a significant shift in how digital platforms will need to govern AI-driven content. While aiming to reduce harm, the compressed timelines have raised concerns among some legal and policy analysts that platforms may resort to over-removal or excessive automation, potentially impacting lawful speech.

Smaller platforms and startups, in particular, are expected to face challenges implementing detection and moderation systems capable of meeting the new deadlines. Given India’s vast internet user base, the rules could also reshape the global approach to AI content regulation.