Checklist for Compliance with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026

To meet the February 20, 2026 compliance deadline under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, digital platforms can work through the following checklist.

I. Content Moderation & Takedown Timelines

Platforms must establish 24/7 rapid response teams to meet drastically shortened legal windows.

  • 3-Hour Takedown: Remove or disable access to content within 3 hours of receiving a court order or an order from an authorized government officer.
  • 2-Hour Emergency Removal: Act within 2 hours for sensitive violations, specifically non-consensual deepfake nudity or intimate imagery.
  • 7-Day Grievance Resolution: Resolve user grievances within 7 days, reduced from the previous 15-day requirement.
  • 36-Hour Grievance Resolution: Resolve specific urgent grievances (like identity theft or privacy violations) within 36 hours.

II. Synthetically Generated Information (SGI) Management

If your platform enables the creation, modification, or hosting of AI-generated content, you must implement these technical safeguards:

  • Automated Prohibitions: Deploy tools to actively block the creation or dissemination of SGI involving child sexual abuse material (CSAM), non-consensual intimate imagery, false electronic records, or instructions for explosives/arms.
  • Mandatory Labelling:
    • Visuals: Apply clear, prominent labels on all synthetic images and videos.
    • Audio: Include a prominently prefixed audio disclosure for synthetic sound.
  • Metadata & Traceability: Embed permanent metadata and a unique identifier into every SGI file to trace its origin to your computer resource.
  • Anti-Tampering Measures: Ensure that labels and metadata cannot be removed, modified, or suppressed by users.
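As an illustration only, the metadata-and-traceability duty can be sketched as a provenance record attached to every generated file. The Rules do not prescribe a file format or API; the function names, the JSON sidecar layout, and the use of a content hash below are assumptions for the sketch, and a real deployment would embed the record in the media container itself (for example, via C2PA-style manifests) so it travels with the content.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

def attach_provenance(file_path: str, platform_id: str) -> dict:
    """Attach a provenance record (unique ID + origin metadata) to an SGI file.

    Illustrative sketch: writes a JSON sidecar next to the file rather than
    embedding the record in the media container itself.
    """
    data = Path(file_path).read_bytes()
    record = {
        "sgi": True,                                 # content is synthetically generated
        "unique_id": str(uuid.uuid4()),              # per-file identifier
        "origin_platform": platform_id,              # traces origin to the computer resource
        "sha256": hashlib.sha256(data).hexdigest(),  # ties the record to these exact bytes
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    Path(file_path + ".provenance.json").write_text(json.dumps(record, indent=2))
    return record

def verify_provenance(file_path: str) -> bool:
    """Detect tampering: the record no longer matches if the file bytes changed."""
    record = json.loads(Path(file_path + ".provenance.json").read_text())
    digest = hashlib.sha256(Path(file_path).read_bytes()).hexdigest()
    return digest == record["sha256"]
```

The hash check illustrates the anti-tampering requirement: any modification of the file after generation makes the provenance record verifiably stale.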

III. Additional Duties for Significant Social Media Intermediaries (SSMIs)

Large platforms (over 5 million users) must update their upload workflows:

  • User Declaration: Integrate a mandatory prompt requiring users to declare if their content is synthetically generated before uploading.
  • Technical Verification: Deploy automated tools to verify the accuracy of user declarations, ensuring content isn’t published without proper labels.
  • Safe Harbour Maintenance: Note that “willful blindness” or failure to act on known SGI can lead to the loss of Section 79 Safe Harbour protection.
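The declaration-plus-verification workflow for SSMIs can be sketched as a simple gate applied before publication. This is a minimal illustration, not a prescribed implementation: the `detector_score` field, the 0.8 threshold, and the three outcomes are assumed values chosen for the sketch, and a real system would rely on a trained SGI detector rather than a single score.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    declared_synthetic: bool  # user's mandatory declaration at upload time
    detector_score: float     # 0.0-1.0 confidence from an automated SGI detector (assumed)

def moderation_decision(upload: Upload, threshold: float = 0.8) -> str:
    """Gate an upload per the SSMI duties: declaration first, then verification.

    Returns one of: "publish", "publish_with_label", "hold_for_review".
    """
    if upload.declared_synthetic:
        # User declared SGI: it may only go out prominently labelled.
        return "publish_with_label"
    if upload.detector_score >= threshold:
        # Declaration says "not synthetic" but the detector disagrees:
        # publishing unlabelled here risks a due-diligence failure.
        return "hold_for_review"
    return "publish"
```

The design point the sketch captures is that the declaration alone is not enough: the verification step is what prevents unlabelled synthetic content from slipping through and jeopardising safe harbour.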

IV. Legal & User Awareness Updates

  • Quarterly Notifications: Inform users at least once every three months about your privacy policy and the consequences of violating rules (e.g., account termination).
  • Language Requirements: Provide these notifications in English or any language specified in the Eighth Schedule of the Indian Constitution.
  • Updated Legal References: Ensure all terms of service and internal manuals replace references to the “Indian Penal Code” with the “Bharatiya Nyaya Sanhita (BNS), 2023”.
  • Mandatory Reporting: Establish protocols to report SGI-related offences (especially POCSO violations) to the appropriate authorities.

Authored By
Vijay Pal Dalmia, Advocate
Supreme Court of India & Delhi High Court

Email id: [email protected]
Mobile No.: +91 9810081079

Linkedin: https://www.linkedin.com/in/vpdalmia/
Facebook: https://www.facebook.com/vpdalmia
X (Twitter): @vpdalmia

Regulation of AI-Generated/Deepfake Content and Synthetically Generated Information (SGI) in India – New Rules

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (“2026 Amendment Rules”) mark a significant regulatory shift in India’s digital governance framework. Notified on 10 February 2026 and effective from 20 February 2026, these amendments primarily address the regulation of synthetically generated information (SGI)—commonly referred to as AI-generated or deepfake content.

The amendments strengthen due diligence obligations for intermediaries, especially significant social media intermediaries (SSMIs), and introduce strict timelines for compliance.

The 2026 amendments aim to:

  • Regulate AI-generated/deepfake content.
  • Prevent misuse of synthetic media for fraud, impersonation, obscenity, misinformation, and criminal activity.
  • Mandate labelling and traceability of synthetic content.
  • Tighten intermediary compliance timelines.
  • Align references from IPC to Bharatiya Nyaya Sanhita, 2023.

This is India’s first comprehensive regulatory framework specifically targeting synthetic digital manipulation at scale.

“Synthetically Generated Information”

A major addition is the formal definition of two terms:

(a) Audio, Visual or Audio-Visual Information

Expanded to include any content created, generated, modified, or altered using computer resources.

(b) “Synthetically Generated Information” (SGI)

Defined as AI-created or algorithmically altered content that:

  • Appears real or authentic,
  • Depicts individuals or events,
  • Is likely to be perceived as indistinguishable from real-world events.

Exclusions

Routine editing, formatting, accessibility improvements, color correction, or legitimate document preparation are excluded—provided they do not materially distort the underlying content.

Implication: The law clearly separates deepfake manipulation from legitimate digital enhancement.

Expansion of “Information” to Include Synthetic Content

The amendment clarifies that any reference to “information” under unlawful activity provisions shall include synthetically generated information.

This ensures:

  • Deepfakes are treated on par with real content under IT Act liability provisions.
  • Intermediaries cannot plead a regulatory gap to escape liability.

Mandatory User Awareness Requirements

Intermediaries must now:

  • Inform users every three months about legal consequences of misuse.
  • Warn about penalties under:
    • Bharatiya Nyaya Sanhita, 2023
    • POCSO Act
    • Representation of the People Act
    • Indecent Representation of Women Act
    • Immoral Traffic Prevention Act.

Users must be informed that violations may result in:

  • Immediate content removal.
  • Account suspension.
  • Identity disclosure to victims.
  • Mandatory reporting to authorities.

Due Diligence Obligations for Synthetic Content Platforms

Platforms that enable AI content creation must:

(A) Prevent Illegal SGI

Deploy automated tools and reasonable technical measures to prevent the creation of synthetic content that:

  • Contains child sexual abuse material.
  • Is obscene, pornographic, or invasive of privacy.
  • Creates false documents or electronic records.
  • Aids in explosives or arms procurement.
  • Falsely depicts individuals or events to deceive.
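The prevention duty operates at the generation step: a request is screened against the prohibited SGI categories before any content is produced. The sketch below is illustrative only; the category labels and the keyword-based `classify` stub are assumptions for the example, and real systems use trained policy classifiers, not phrase matching.

```python
# Prohibited SGI categories drawn from the due diligence obligations above.
PROHIBITED_CATEGORIES = {
    "csam",
    "non_consensual_intimate_imagery",
    "false_electronic_record",
    "explosives_or_arms_instructions",
    "deceptive_impersonation",
}

def classify(prompt: str) -> set:
    """Stand-in for an ML policy classifier; returns categories the prompt hits.

    Hypothetical phrase rules for illustration only.
    """
    rules = {
        "fake aadhaar card": "false_electronic_record",
        "build a bomb": "explosives_or_arms_instructions",
    }
    return {cat for phrase, cat in rules.items() if phrase in prompt.lower()}

def gate_generation(prompt: str) -> bool:
    """Return True if generation may proceed, False if it must be blocked."""
    return not (classify(prompt) & PROHIBITED_CATEGORIES)
```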

(B) Mandatory Labeling

All lawful SGI must:

  • Be prominently labelled.
  • Contain visible disclosure (for visual content).
  • Include prefixed disclosure (for audio).
  • Embed permanent metadata or provenance markers.
  • Include a unique identifier tied to the creating platform.

Platforms cannot allow removal or suppression of such labels. This introduces a technical traceability regime for AI content.

Obligations of Significant Social Media Intermediaries (SSMIs)

Before publishing user content, SSMIs must:

  • Require users to declare whether content is synthetic.
  • Deploy verification tools to validate declarations.
  • Ensure labelling if the content is confirmed synthetic.

If the platform knowingly permits unlabeled synthetic content, it will be deemed to have failed due diligence. This shifts liability exposure significantly upward.

Tightened Compliance Timelines

The amendment reduces key timelines:

Provision                              | Earlier  | Now
Content removal after government order | 36 hours | 3 hours
Grievance resolution                   | 15 days  | 7 days
Certain urgent removals                | 24 hours | 2 hours
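In operational terms, each window translates into a hard clock deadline running from receipt of the order or grievance. A minimal sketch of that calculation follows; the mapping is taken from the table above, while treating every window as a plain wall-clock addition is an assumption of the sketch (the Rules themselves govern how the period is actually computed).

```python
from datetime import datetime, timedelta

# Compliance windows under the 2026 Amendment Rules (per the table above).
WINDOWS = {
    "government_order_removal": timedelta(hours=3),
    "urgent_removal": timedelta(hours=2),
    "grievance_resolution": timedelta(days=7),
}

def compliance_deadline(received_at: datetime, kind: str) -> datetime:
    """Deadline for acting on an order or grievance received at `received_at`."""
    return received_at + WINDOWS[kind]
```

For example, a government order received at 09:00 must be actioned by 12:00 the same day, which is why round-the-clock response teams are a practical necessity.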

Safe Harbour Clarification

The amendment clarifies that:

  • Removing or disabling access using automated tools,
  • Acting upon awareness of violations,

will not amount to breach of safe harbour protections under Section 79 of the IT Act. This legally protects proactive moderation.

Replacement of IPC Reference

All references to the Indian Penal Code are replaced with the Bharatiya Nyaya Sanhita, 2023.

This harmonizes the IT Rules with the new criminal code framework.

Legal and Regulatory Impact

(A) On Social Media Platforms

  • Mandatory AI detection systems.
  • High compliance burden.
  • Increased liability risk.
  • Faster takedown obligations.

(B) On AI Tools and Generative Platforms

  • Must embed watermarking or metadata.
  • Cannot allow deepfake misuse.
  • Must prevent illegal outputs at the generation level.

(C) On Users

  • Criminal exposure for malicious deepfakes.
  • Reduced anonymity if violations occur.
  • Increased traceability.

Policy Significance

The 2026 Amendment Rules:

  • Represent India’s most stringent deepfake regulation.
  • Introduce technical compliance standards for AI.
  • Combine content moderation with metadata traceability.
  • Shift from reactive moderation to preventive architecture.

India now formally regulates not just harmful content, but the mechanism of creation itself.

The IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 fundamentally reshape digital compliance in India.

By:

  • Defining synthetic information,
  • Mandating labelling and metadata,
  • Tightening removal timelines,
  • Expanding intermediary liability,
  • And aligning with the Bharatiya Nyaya Sanhita,

the government has moved decisively toward regulating AI-generated content ecosystems.

For platforms operating in India, compliance will require:

  • AI moderation infrastructure,
  • Provenance tagging systems,
  • Real-time content risk detection,
  • Stronger legal and compliance governance.

Authored by
Vijay Pal Dalmia, Advocate
Supreme Court of India & Delhi High Court


Madras High Court Rejects P&G’s Challenge to “VAPORIN” Trademarks; Holds “Vapo” to Be Publici Juris

In The Procter and Gamble Company v. IPI India Private Limited (O.P.(TM) Nos. 48, 49 and 50 of 2024), the Madras High Court dismissed trademark rectification petitions filed by P&G, holding that the term “Vapo” is descriptive, common to the trade, and publici juris, and therefore incapable of exclusive appropriation.

The Court found that “Vapo”, derived from “vapour”, is widely used in relation to vapour-based medicinal products and has seen extensive third-party adoption. It further held that “VICKS VAPORUB” and “VAPORIN” are phonetically, visually, and structurally distinct, with differing trade dress and overall commercial impression, making the likelihood of consumer confusion remote.

For any clarification, please write to [email protected]

SEBI Mandates Digital Accessibility – Inclusive Markets, Clear Compliance Framework

SEBI has issued a series of circulars requiring regulated entities and market infrastructure institutions to ensure full digital accessibility across websites, mobile applications, KYC processes, and investor-facing platforms, in line with statutory obligations and the Supreme Court’s recognition of digital access as a fundamental right.

Through defined accessibility standards, extended compliance timelines, structured audit and reporting mechanisms, and investor grievance redressal via SCORES, SEBI reinforces accountability, transparency, and inclusive participation in India’s securities markets.

To read the July 31, 2025 circular click here

To read the August 29, 2025 circular click here

To read the September 25, 2025 circular click here

To read the December 08, 2025 circular click here

For any clarification, please write to [email protected]

Delhi High Court Declines Interim Relief in Alkem’s “A TO Z” Trademark Dispute

In Alkem Laboratories Limited v. Prevego Healthcare and Research Pvt. Ltd. (2026:DHC:411), the Delhi High Court refused to grant an interim injunction restraining the use of “MULTIVEIN AZ”, rejecting Alkem’s claim of exclusivity over the letters “A” and “Z”.

The Court held that individual alphabet letters and generic expressions such as “A TO Z”, particularly in the context of multivitamin products, cannot be monopolised and that registration of a device mark protects only its distinctive visual representation, not its constituent elements. It further noted that material non-disclosure of earlier trademark applications disentitled the plaintiff from equitable relief.

For any clarification, please write to [email protected]

RBI Tightens Related Party Lending Norms – Stronger Governance, Clearer Guardrails

The Reserve Bank of India has notified comprehensive amendment directions on lending to related parties by regulated entities, effective 1 April 2026, significantly strengthening governance, oversight, and disclosure standards.

By expanding definitions, revising materiality thresholds, extending board-level oversight mechanisms, and strengthening monitoring and reporting requirements, RBI reinforces transparency, accountability, and prudential discipline in related party transactions.

For any clarification, please write to [email protected]