India IT Rules Before and After Amendment & AI-Generated Content
February 24, 2026
Published in: Articles
The Information Technology framework in India regulates how digital platforms, social media, online gaming services, and intermediaries function. With rapid growth of social media, AI-generated content, and online gaming, earlier rules became insufficient to handle new digital challenges.
Information Technology Rules (Before Amendment)
Before the February 2026 amendments, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021[1] mainly aimed to make the internet safer and more accountable by placing responsibilities on social media platforms and other online intermediaries. These rules required platforms to clearly inform users about acceptable online behaviour and prohibited content. Platforms were also required to remove unlawful or harmful content after receiving orders from courts or government authorities. Special protection measures were introduced for women and children, and large social media companies had to appoint officers in India to handle complaints and ensure compliance. Users were also given a formal system to file complaints if harmful content affected them. Overall, the rules focused mainly on removing illegal content after complaints were received rather than preventing harmful content beforehand.
Key Features Before Amendment:
- Platforms had to clearly inform users about acceptable online behaviour and prohibited content.
- Unlawful or harmful content had to be removed within 36 hours of receiving orders from courts or government authorities.
- Special protection measures were introduced for women and children.
- Large social media companies had to appoint officers in India to handle complaints and ensure compliance.
- Users were given a formal grievance redressal system for complaints about harmful content.
Challenges Faced by IT Rules Before the February 2026 Amendment
Despite creating a framework for regulating online content, the IT Rules, 2021 faced several practical and legal challenges due to rapid technological changes, especially with the rise of artificial intelligence and deepfake technologies. One major concern was that harmful or misleading content often spreads within minutes on social media, making the earlier 36-hour content removal timeline insufficient to prevent damage. The rules also lacked clear mechanisms to regulate AI-generated or synthetically created content such as deepfake videos and voice cloning, creating regulatory gaps in tackling modern digital threats.
Another significant issue was the absence of mandatory labelling standards for AI-generated content, making it difficult for ordinary users to distinguish between genuine and manipulated media. Additionally, many users found the grievance redressal system slow or ineffective, often forcing them to approach courts for relief when platform responses were unsatisfactory. The rules were also legally challenged in several courts on the grounds that certain provisions might restrict freedom of speech or go beyond the authority granted under the parent IT Act.
Further, the requirement for messaging platforms to identify the first originator of certain messages raised privacy concerns, as critics argued it could weaken end-to-end encryption protections. Finally, platforms themselves faced operational difficulties in moderating the enormous volume of online content, often struggling to differentiate harmful material from legitimate satire, parody, or creative expression without advanced technological tools.
The major challenges faced were:
- The 36-hour takedown timeline was too slow for content that can go viral within minutes.
- There was no clear mechanism to regulate AI-generated or synthetic content such as deepfakes and voice cloning.
- No mandatory labelling standards existed for AI-generated content, leaving users unable to distinguish genuine from manipulated media.
- The grievance redressal system was often slow or ineffective, forcing users to approach courts.
- Several provisions faced legal challenges for allegedly restricting free speech or exceeding the parent IT Act.
- The "first originator" traceability requirement raised privacy and end-to-end encryption concerns.
- Platforms struggled to moderate content at scale and to distinguish harmful material from satire, parody, or creative expression.
Overall, these challenges highlighted the need for stronger and updated regulations, eventually leading to the February 2026 amendments that aimed to address these growing digital risks more effectively.
Need for Amendment (Leading to the February 2026 Changes)
Although the IT Rules, 2021 created an important framework to regulate online platforms and social media, rapid technological developments soon exposed weaknesses in the system. The rise of artificial intelligence, deepfakes, voice cloning, and synthetic media made it easier to spread misinformation, commit online fraud, and misuse personal images or identities. The earlier rules were mainly designed to deal with traditional harmful content and were not strong enough to handle these new digital risks.
One major issue was the speed at which harmful content spreads online. Fake videos or manipulated media could go viral within minutes, influencing public opinion, damaging reputations, or causing panic. However, platforms were allowed up to 36 hours to remove unlawful content after receiving government or court orders. By that time, the damage was often already irreversible.
Another serious gap was the absence of specific laws dealing with AI-generated or synthetically created content. There were no clear obligations on platforms to detect or regulate deepfakes or voice cloning technologies. At the same time, users had no reliable way to identify whether a video, image, or audio clip was real or artificially generated, because platforms were not required to label AI-created content or attach digital identification markers.
Users also faced problems with the grievance redressal system, which was often slow or ineffective. Many people had to approach courts directly when their complaints were not resolved properly by platforms. In addition, certain provisions of the rules were legally challenged in courts for possibly restricting freedom of speech or exceeding powers granted under the parent IT Act.
Privacy concerns also emerged due to the rule requiring messaging platforms to identify the first originator of certain messages in serious cases. Critics argued that such traceability requirements could weaken end-to-end encryption and threaten user privacy.
Another practical difficulty was the massive scale of online content, making it difficult for platforms to accurately differentiate between harmful material and legitimate satire, parody, journalism, or creative expression without advanced technology.
The February 2026 amendment[2] was therefore introduced to address what many described as a period of poorly regulated AI and digital manipulation. Deepfakes were increasingly being used for financial fraud, political misinformation, identity misuse, and non-consensual intimate imagery, creating an urgent need for stricter regulation.
To prevent harmful content from going viral, the amendment introduced much faster takedown timelines,[5] including a three-hour removal requirement for government or court-declared unlawful content and even faster action for highly sensitive deepfake material.
The amendment also aimed to increase transparency and accountability by requiring platforms to introduce labelling and digital fingerprinting mechanisms so users could distinguish between real and AI-generated content. Concerns about manipulation during elections through AI-cloned voices and fake videos further strengthened the need for regulation to protect democratic processes.
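The labelling-and-fingerprinting idea can be illustrated with a minimal sketch. The label format and field names below are hypothetical, not part of the Rules; a real deployment would follow an industry provenance standard rather than a bare hash.

```python
import hashlib
import json

def make_sgi_label(media_bytes: bytes, generator: str) -> dict:
    """Build a hypothetical disclosure label for synthetically generated
    information (SGI): a user-visible flag plus a SHA-256 fingerprint
    of the exact media bytes, enabling later traceability checks."""
    return {
        "synthetically_generated": True,   # the disclosure shown to users
        "generator": generator,            # tool that produced the media
        "fingerprint_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }

# Example: label a stand-in AI-generated clip before upload.
clip = b"...synthetic video bytes..."
label = make_sgi_label(clip, generator="example-voice-clone-tool")
sidecar = json.dumps(label, indent=2)  # metadata shipped alongside the media
```

In practice the fingerprint would be embedded in the media container or a signed manifest rather than a loose JSON sidecar; the point is only that a disclosure flag and a content-bound identifier travel with the file.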
Additionally, the amendment clarified the legal responsibilities of platforms by stating that failure to comply with the new due diligence obligations could result in the loss of safe harbour protection,[6] making platforms legally responsible for harmful content hosted on their services. The update also aligned references from the old Indian Penal Code to the newer Bharatiya Nyaya Sanhita, 2023,[3] ensuring consistency with India's modern criminal law framework.
In simple terms, the amendment became necessary because the earlier rules were not strong enough to deal with modern digital threats. The new changes aim to create a safer online environment while balancing freedom of expression, accountability of platforms, and protection of users from digital harm.
IT Rules After Amendment (Post–February 2026 Framework)
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified on 10 February 2026 and brought into force from 20 February 2026 by the Ministry of Electronics and Information Technology, mark a significant shift in India’s digital regulation policy. Unlike the earlier framework that mainly reacted after harmful content appeared online, the amended rules move toward proactive regulation, especially addressing risks created by artificial intelligence and deepfake technologies.
The amendment recognizes that modern digital threats spread extremely quickly and therefore introduces stricter obligations, faster enforcement timelines, and clearer accountability mechanisms for social media platforms and other intermediaries.
For the first time in India, the law formally recognizes Synthetically Generated Information (SGI)[4] – content such as videos, images, or audio that is created or altered using artificial intelligence but appears real.
To prevent misuse of such technology, the amendment introduces:
- Mandatory labelling of synthetically generated content so that it is clearly identifiable as AI-created.
- Digital fingerprinting mechanisms that embed identification markers in such content, enabling traceability.
In simple terms, users should now be able to tell whether content is real or AI-generated.
One of the biggest changes addresses how fast harmful content spreads online. To prevent viral misinformation or abuse, platforms must now act much faster:
- Content declared unlawful by a government or court order must be removed within three hours, down from 36 hours.
- Highly sensitive deepfake material, such as non-consensual intimate imagery, must be acted on within two hours.
- User complaints must generally be resolved within seven days, with urgent cases addressed within 36 hours.
This ensures that harmful content is controlled before it spreads widely.
Platforms with more than five million users, classified as Significant Social Media Intermediaries (SSMIs), now face stricter duties even before content becomes public.
New obligations include:
- Pre-publication checks and verification of AI-generated or synthetic content before it becomes public.
- Stronger monitoring and detection tools to identify manipulated media proactively.
This change makes platforms more accountable rather than allowing them to simply react after damage occurs.
Platforms must now remind users every three months, instead of once a year, about platform rules and the legal consequences of posting unlawful content, including possible criminal liability under the updated criminal laws. This step aims to increase public awareness and encourage responsible online behaviour.
Overall Impact in Simple Terms
In simple language, the new amendments mean:
- Harmful or unlawful content must come down much faster.
- AI-generated content must be clearly labelled so users know what is real.
- User complaints are resolved more quickly.
- Platforms face real legal consequences, including loss of safe harbour protection, for non-compliance.
Overall, the amendment moves India’s digital regulation from a reactive system to a preventive and accountability-driven framework, aiming to protect users while maintaining transparency and trust in online spaces.
Comparative Analysis
| Aspect | Before Amendment | After Amendment |
|---|---|---|
| Approach | Reactive removal of content | Proactive regulation of AI content |
| Takedown timeline | 36 hours | 3 hours (2 hours for urgent cases) |
| AI/deepfake regulation | Not clearly regulated | Mandatory labelling & traceability |
| User complaints | 15-day resolution | 7 days (36 hours for urgent cases) |
| Platform liability | Limited | Higher liability for non-compliance |
| Platform duties | Basic compliance | Pre-publication checks & verification |
The February 2026 amendments significantly increase compliance responsibilities for online platforms by requiring faster removal of unlawful content and proactive regulation of AI-generated media. Platforms must now respond quickly to complaints, verify AI-generated content, and maintain greater transparency to avoid legal liability. For businesses, this means strengthening internal compliance and monitoring systems. Overall, the amendments aim to create a safer and more trustworthy digital environment while ensuring clearer accountability for online intermediaries.
Practical Impact:
The 2026 amendments mark a clear shift from the earlier “wait and act later” model to a system where platforms must actively monitor and respond to harmful content, particularly AI-generated material. The changes affect not only large technology companies but also content creators, users, and regulators. The practical consequences can be understood as follows:
The amendments significantly increase compliance pressure on digital platforms operating in India. The earlier 36-hour response window has been replaced with a much stricter three-hour timeline for removal of unlawful content upon official direction. Failure to comply can result in loss of safe harbour protection, potentially exposing platforms to direct legal liability for user content.
As a result, many platforms are strengthening automated detection tools to identify AI-generated or manipulated content in real time and are expanding compliance teams to ensure immediate response to government or court orders. Several companies are also establishing round-the-clock response mechanisms in India to handle urgent takedown requests and regulatory communications.
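The shortened windows are essentially deadline arithmetic. The sketch below, with hypothetical category names, shows how a compliance team might compute the latest permissible action time from a notification timestamp, using the timelines described above (three hours for ordinary unlawful-content orders, two hours for highly sensitive deepfake material, 36 hours under the pre-amendment rules).

```python
from datetime import datetime, timedelta, timezone

# Hypothetical mapping of notice categories to removal windows,
# reflecting the timelines discussed in this article.
TAKEDOWN_WINDOWS = {
    "unlawful_content_order": timedelta(hours=3),   # post-amendment default
    "sensitive_deepfake": timedelta(hours=2),       # urgent deepfake material
    "legacy_2021_rules": timedelta(hours=36),       # pre-amendment window
}

def takedown_deadline(notified_at: datetime, category: str) -> datetime:
    """Latest time by which the content must be removed."""
    return notified_at + TAKEDOWN_WINDOWS[category]

# Example: a government order received at 10:00 UTC.
notice = datetime(2026, 2, 21, 10, 0, tzinfo=timezone.utc)
deadline = takedown_deadline(notice, "unlawful_content_order")
# deadline falls at 13:00 UTC the same day, three hours after the notice
```

A real compliance system would of course track acknowledgement, escalation, and proof of removal as well; this only illustrates how sharply the amendment compresses the window.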
Content creators and influencers must now exercise greater caution when using AI tools such as voice cloning, face-swapping, or synthetic video generation. AI-generated or altered content is required to carry proper disclosure, and failure to provide such disclosure may lead to content removal or account penalties.
At the same time, creators working in satire or parody sometimes face practical challenges, as even humorous or artistic content may require labelling if AI tools are used, potentially affecting creative presentation.
For ordinary users, the amendments aim to provide faster protection against harmful online content. Victims of non-consensual deepfake imagery or similar misuse can now expect quicker removal of such content, reducing the period during which harmful material circulates online.
Users are also likely to see clearer indicators or labels identifying AI-generated or manipulated content, helping them distinguish authentic information from synthetic media and reducing the risk of falling victim to online fraud or misinformation. However, ongoing debates continue regarding privacy implications, particularly where traceability mechanisms may interact with encrypted communication services.
From an enforcement perspective, the amendments provide authorities with faster mechanisms to address harmful or unlawful online content, especially during sensitive situations such as elections or public emergencies. The shortened timelines allow quicker intervention to prevent misinformation or harmful material from spreading widely.
Compliance Framework Under the IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026
The 2026 Amendments significantly strengthen the compliance obligations of digital platforms, especially in relation to AI-generated and harmful online content. The focus has shifted from reactive moderation to proactive responsibility, requiring platforms to adopt faster response systems, stronger monitoring tools, and clearer accountability mechanisms. An overview of the compliance framework is set out below:
Platforms must now act within clearly defined timelines:
- Removal of government- or court-declared unlawful content within three hours.
- Removal of highly sensitive deepfake material, such as non-consensual intimate imagery, within two hours.
- Resolution of user complaints within seven days, and within 36 hours for urgent cases.
These timelines aim to reduce the viral spread of harmful content and provide quicker relief to victims.
Where platforms allow AI-generated content:
- Such content must carry a clear label or disclosure identifying it as synthetically generated.
- Platforms must embed metadata or digital fingerprints so the content can be traced.
- Creators must disclose the use of AI tools; failure to do so may lead to content removal or account penalties.
Additionally, platforms must automatically block or prevent uploads involving:
- Non-consensual intimate imagery created or altered using AI.
- Synthetic content used for identity misuse or financial fraud.
Platforms with over 5 million users in India face stricter responsibilities:
- Appointing India-based officers to handle complaints and ensure compliance, continuing the 2021 obligations.
- Conducting pre-publication checks and verification of synthetic content before it becomes public.
To ensure user awareness and regulatory accountability:
- Platforms must remind users every three months about platform rules and the legal consequences of posting unlawful content.
- AI-generated content must be clearly labelled so users can identify synthetic media.
The amendments make it clear that platforms are no longer mere intermediaries but are expected to actively safeguard the digital ecosystem. Compliance now requires:
- Faster response systems to meet the shortened takedown and grievance timelines.
- Stronger monitoring and detection tools for AI-generated content.
- Clear internal accountability mechanisms to retain safe harbour protection.
The Way Forward
Looking ahead, several developments are likely to shape how these rules operate in practice:
Platforms will need to move beyond basic filters and invest in advanced AI-detection systems capable of identifying synthetic content automatically. Industry adoption of global content authenticity standards and metadata verification tools will become essential rather than optional.
As disputes arise, courts will need to clarify where legitimate creative expression ends and harmful synthetic manipulation begins. Clear judicial guidance will be necessary to protect satire, parody, and artistic freedom while preventing malicious misuse.
Since AI-generated content easily crosses borders, India’s regulatory push may encourage international cooperation on traceability and authenticity standards so that safeguards work consistently across jurisdictions.
Rules alone cannot eliminate misinformation. Public awareness campaigns will be necessary so users learn to recognize AI labels and authenticity indicators just as easily as they recognize verified accounts today.
These amendments are widely seen as an interim step toward the forthcoming Digital India Act, which is expected to create a broader and more permanent framework governing online platforms, digital rights, and emerging technologies.
Authored by
Vijay Pal Dalmia, Advocate
Supreme Court of India & Delhi High Court
Email id: [email protected]
Mobile No.: +91 9810081079
Linkedin: https://www.linkedin.com/in/vpdalmia/
Facebook: https://www.facebook.com/vpdalmia
X (Twitter): @vpdalmia
[1] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
[2] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified by MeitY on 10 February 2026.
[3] Bharatiya Nyaya Sanhita (BNS), 2023, Government of India – replaces IPC references in digital content regulation.
[4] IT Rules Amendment, 2026 – provisions relating to Synthetically Generated Information (SGI)
[5] IT Rules Amendment, 2026 – amended due diligence obligations regarding expedited takedown timelines.
[6] Section 79, Information Technology Act, 2000 – Safe Harbour protection for intermediaries.