# Government Introduces Strict AI and Deepfake Regulations with 3-Hour Content Removal Mandate

The Ministry of Electronics and Information Technology has notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, introducing stringent regulations targeting Artificial Intelligence, deepfakes, and synthetically generated content.
The rules, which come into effect on February 20, 2026, dramatically tighten timelines for content removal and impose specific obligations on platforms enabling AI-generated content.
## New Legal Definition: Synthetically Generated Information
The amendment introduces a legal definition for AI-created content:
- **What it covers:** Audio, visual, or audio-visual information that is algorithmically created or altered and appears real or authentic, making it effectively indistinguishable from genuine content.
- **Exemptions:** Routine editing such as color correction and noise reduction, standard document creation (PDFs, presentations), and accessibility improvements are exempt, provided they do not materially distort meaning or create false documents.
## Mandatory Labeling and Metadata Requirements
Platforms allowing users to create AI content face strict new obligations:
- **Visible Labeling:** All AI-generated content must carry prominent visible markers or audio prefixes identifying it as synthetic
- **Permanent Metadata:** Content must embed permanent, irremovable metadata with unique identifiers tracking the source of creation
- **Automated Blocking:** Platforms must deploy automated tools to prevent generation of CSAM, non-consensual intimate imagery, false documents, and impersonation content
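The rules do not prescribe a specific metadata format, so the following is only an illustrative sketch of how a platform might attach a visible label and a provenance record to generated media. All names here (`SyntheticContentRecord`, `label_synthetic_content`, the field layout) are hypothetical, not part of the rules.

```python
import hashlib
import json
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SyntheticContentRecord:
    """Hypothetical provenance record embedded alongside AI-generated media."""
    creator_platform: str
    generation_tool: str
    content_hash: str
    # Unique identifier tracking the source of creation, as the rules require.
    content_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def label_synthetic_content(media_bytes: bytes, platform: str, tool: str):
    """Return a visible label string and a metadata payload for generated media."""
    record = SyntheticContentRecord(
        creator_platform=platform,
        generation_tool=tool,
        content_hash=hashlib.sha256(media_bytes).hexdigest(),
    )
    visible_label = f"AI-GENERATED | {platform} | id:{record.content_id[:8]}"
    metadata_payload = json.dumps(record.__dict__, sort_keys=True)
    return visible_label, metadata_payload

label, payload = label_synthetic_content(b"<media bytes>", "ExamplePlatform", "gen-model-v1")
```

In practice, embedding the payload "irremovably" would mean writing it into the media container or a watermark rather than a sidecar JSON, which is beyond this sketch.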
## Drastically Reduced Response Timelines
| Action Required | New Timeline | Previous Timeline |
|---|---|---|
| Removal upon government/court order | 3 hours | 36 hours |
| General grievance resolution | 7 days | 15 days |
| Non-consensual sexual content complaints | 2 hours | 24 hours |
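The windows in the table above can be expressed as a simple deadline calculation; the mapping keys below are hypothetical labels chosen for illustration, not terms from the rules.

```python
from datetime import datetime, timedelta, timezone

# New response windows under the amended rules (illustrative labels).
RESPONSE_WINDOWS = {
    "government_or_court_removal_order": timedelta(hours=3),
    "general_grievance": timedelta(days=7),
    "non_consensual_sexual_content": timedelta(hours=2),
}

def compliance_deadline(received_at: datetime, complaint_type: str) -> datetime:
    """Latest time by which the platform must act on a complaint."""
    return received_at + RESPONSE_WINDOWS[complaint_type]

received = datetime(2026, 2, 20, 9, 0, tzinfo=timezone.utc)
deadline = compliance_deadline(received, "government_or_court_removal_order")
# deadline is 2026-02-20 12:00 UTC, three hours after receipt
```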
## User Declaration and Identity Disclosure
Significant Social Media Intermediaries must require users to declare if uploaded content is synthetically generated. Platforms must verify these declarations using technical measures.
Crucially, if users violate deepfake rules, platforms are authorized to disclose their identity to victims or complainants under applicable laws.
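One way a platform might reconcile a user's declaration with its own technical check is sketched below. The rules do not specify how verification must work; the classifier score, threshold, and outcome labels here are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    user_id: str
    declared_synthetic: bool   # the user's self-declaration at upload time
    detector_score: float      # output of a hypothetical synthetic-media classifier, 0.0-1.0

def check_declaration(upload: Upload, threshold: float = 0.9) -> str:
    """Compare the user's declaration against an automated synthetic-media check."""
    detected_synthetic = upload.detector_score >= threshold
    if detected_synthetic and not upload.declared_synthetic:
        # Undeclared synthetic content: escalate for review and possible enforcement.
        return "flag_undeclared_synthetic"
    if upload.declared_synthetic:
        # Declared synthetic: apply the mandatory visible label before publishing.
        return "apply_synthetic_label"
    return "accept"
```

For example, `check_declaration(Upload("u1", declared_synthetic=False, detector_score=0.95))` would route the upload to review rather than publish it unlabeled.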
## Mandatory User Warnings
Platforms must update their terms of service every three months, explicitly warning users that creating or sharing illegal AI content leads to immediate account termination and penalties under the Bharatiya Nyaya Sanhita, the POCSO Act, and other applicable laws.
