Legalaxy – Monthly Newsletter Series – Vol XXXIV – March, 2026

In the March edition of our monthly newsletter “Legalaxy”, our team analyses some of the key developments in the securities market, banking and finance, labour, information technology and aviation sectors.

Below are the key highlights of the newsletter:

SEBI UPDATES

  • Framework for reporting of value of AIF units to depositories
  • Disclosure by SEBI regulated entities and their agents on social media platforms

RBI & IFSC UPDATES

  • IFSCA issues directions for obtaining ISIN from a recognised depository in IFSC
  • RBI introduces amendment to the Borrowing and Lending Regulations, 2018
  • IFSCA specifies the format of the net worth certificate and checklist for conducting audit of GAPs
  • IFSCA specifies the process for unified registration for multiple capital market activities under the CMI Regulations

LABOUR UPDATES

  • Government clarifies functions of tribunals and authorities under the Industrial Relations Code

OTHER UPDATES

  • DPIIT amends the definition of startups
  • Accessibility of content enhanced for persons with hearing and visual impairment
  • Government tightens regulatory framework on AI-generated content
  • Ministry of Civil Aviation notifies the Aircraft (Carriage of Dangerous Goods) Rules, 2026

We hope you like our publication. We look forward to your suggestions.

Please feel free to contact us at [email protected]

RBI Notifies Consolidated Export–Import Framework under FEMA – Tighter Monitoring, Simplified Compliance

The Reserve Bank of India has notified the Foreign Exchange Management (Export and Import of Goods and Services) Regulations, 2026, effective 1 October 2026, introducing a unified framework governing cross-border trade in goods and services.

The Regulations integrate export and import provisions into a single regime, streamline declaration and digital reporting requirements through systems such as EDPMS/IDPMS, and clarify timelines for export realisation, repatriation and foreign exchange settlement. They also formalise monitoring requirements, including merchanting trade transactions, and set out clear responsibilities for Authorised Dealer banks in reporting, compliance procedures and internal governance.

At the same time, the framework introduces procedural flexibility and scaled compliance relief for certain low-value transactions, permits structured set-offs and third-party payments subject to appropriate safeguards, and provides clarity on the conditions for reduction or non-realisation of export proceeds.

Overall, the revised regime seeks to balance trade facilitation with stronger regulatory supervision in India’s foreign exchange ecosystem.

For any clarification, please write to [email protected]

RBI Rationalises External Commercial Borrowing Framework – Wider Access, Greater Flexibility, Market-Linked Pricing

The Reserve Bank of India has notified the Foreign Exchange Management (Borrowing and Lending) (First Amendment) Regulations, 2026, significantly liberalising the External Commercial Borrowing (ECB) regime. The amendments broaden the pool of eligible borrowers and recognised lenders, replace the all-in-cost cap with a market-linked benchmark, standardise minimum maturity requirements, enhance borrowing limits, and introduce procedural relaxations including streamlined reporting.

With clearer end-use norms, recognition of arm’s length principles in related-party ECBs, and a structured framework for compliance and borrower traceability, the revised regulations aim to facilitate responsible capital inflows while strengthening regulatory oversight.

For any clarification, please write to [email protected]

India’s IT Rules Before and After the 2026 Amendment & AI-Generated Content

The Information Technology framework in India regulates how digital platforms, social media, online gaming services, and intermediaries function. With rapid growth of social media, AI-generated content, and online gaming, earlier rules became insufficient to handle new digital challenges.

Information Technology Rules (Before Amendment)

Before the February 2026 amendments, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021[1] mainly aimed to make the internet safer and more accountable by placing responsibilities on social media platforms and other online intermediaries. These rules required platforms to clearly inform users about acceptable online behaviour and prohibited content. Platforms were also required to remove unlawful or harmful content after receiving orders from courts or government authorities. Special protection measures were introduced for women and children, and large social media companies had to appoint officers in India to handle complaints and ensure compliance. Users were also given a formal system to file complaints if harmful content affected them. Overall, the rules focused mainly on removing illegal content after complaints were received rather than preventing harmful content beforehand.

Key Features Before Amendment:

  • Due diligence by platforms: Social media platforms had to publish rules and inform users not to post illegal or harmful content.
  • Content removal rules: Platforms had to remove unlawful content within 36 hours after receiving government or court directions.
  • Protection for women & children: Content involving private images or non-consensual intimate content had to be removed within 24 hours.
  • Extra rules for large platforms: Big platforms (with more than 50 lakh, i.e. 5 million, users) had to appoint compliance officers and publish monthly reports showing action taken on complaints.
  • Traceability requirement: Messaging platforms could be asked to identify the first sender of harmful messages in serious cases like national security threats or sexual crimes.
  • User complaint system: Users could complain to a Grievance Officer, and if not satisfied, appeal before a government-appointed appellate body.
  • Reactive approach: The system mainly worked after harmful content appeared, rather than preventing it beforehand.

Challenges Faced by IT Rules Before the February 2026 Amendment

Despite creating a framework for regulating online content, the IT Rules, 2021 faced several practical and legal challenges due to rapid technological changes, especially with the rise of artificial intelligence and deepfake technologies. One major concern was that harmful or misleading content often spreads within minutes on social media, making the earlier 36-hour content removal timeline insufficient to prevent damage. The rules also lacked clear mechanisms to regulate AI-generated or synthetically created content such as deepfake videos and voice cloning, creating regulatory gaps in tackling modern digital threats.

Another significant issue was the absence of mandatory labelling standards for AI-generated content, making it difficult for ordinary users to distinguish between genuine and manipulated media. Additionally, many users found the grievance redressal system slow or ineffective, often forcing them to approach courts for relief when platform responses were unsatisfactory. The rules were also legally challenged in several courts on the grounds that certain provisions might restrict freedom of speech or go beyond the authority granted under the parent IT Act.

Further, the requirement for messaging platforms to identify the first originator of certain messages raised privacy concerns, as critics argued it could weaken end-to-end encryption protections. Finally, platforms themselves faced operational difficulties in moderating the enormous volume of online content, often struggling to differentiate harmful material from legitimate satire, parody, or creative expression without advanced technological tools.

The key challenges were:

  • Harmful content spreads faster than platforms could remove it.
  • No strong rules existed to regulate AI-generated or deepfake content.
  • Users could not easily identify fake or AI-generated media.
  • Complaint systems were slow and often ineffective.
  • Rules faced court challenges over free speech concerns.
  • Traceability rules raised privacy and encryption concerns.
  • Platforms struggled to moderate massive amounts of online content accurately.

Overall, these challenges highlighted the need for stronger and updated regulations, eventually leading to the February 2026 amendments that aimed to address these growing digital risks more effectively.

Need for Amendment (Leading to the February 2026 Changes)

Although the IT Rules, 2021 created an important framework to regulate online platforms and social media, rapid technological developments soon exposed weaknesses in the system. The rise of artificial intelligence, deepfakes, voice cloning, and synthetic media made it easier to spread misinformation, commit online fraud, and misuse personal images or identities. The earlier rules were mainly designed to deal with traditional harmful content and were not strong enough to handle these new digital risks.

One major issue was the speed at which harmful content spreads online. Fake videos or manipulated media could go viral within minutes, influencing public opinion, damaging reputations, or causing panic. However, platforms were allowed up to 36 hours to remove unlawful content after receiving government or court orders. By that time, the damage was often already irreversible.

Another serious gap was the absence of specific laws dealing with AI-generated or synthetically created content. There were no clear obligations on platforms to detect or regulate deepfakes or voice cloning technologies. At the same time, users had no reliable way to identify whether a video, image, or audio clip was real or artificially generated, because platforms were not required to label AI-created content or attach digital identification markers.

Users also faced problems with the grievance redressal system, which was often slow or ineffective. Many people had to approach courts directly when their complaints were not resolved properly by platforms. In addition, certain provisions of the rules were legally challenged in courts for possibly restricting freedom of speech or exceeding powers granted under the parent IT Act.

Privacy concerns also emerged due to the rule requiring messaging platforms to identify the first originator of certain messages in serious cases. Critics argued that such traceability requirements could weaken end-to-end encryption and threaten user privacy.

Another practical difficulty was the massive scale of online content, making it difficult for platforms to accurately differentiate between harmful material and legitimate satire, parody, journalism, or creative expression without advanced technology.

The February 2026 amendment[2] was therefore introduced to address what many described as a period of poorly regulated AI and digital manipulation. Deepfakes were increasingly being used for financial fraud, political misinformation, identity misuse, and non-consensual intimate imagery, creating an urgent need for stricter regulation.

To prevent harmful content from going viral, the amendment introduced much faster takedown timelines, including a three-hour removal requirement for government or court-declared unlawful content and even faster action for highly sensitive deepfake material.

The amendment also aimed to increase transparency and accountability by requiring platforms to introduce labelling and digital fingerprinting mechanisms so users could distinguish between real and AI-generated content. Concerns about manipulation during elections through AI-cloned voices and fake videos further strengthened the need for regulation to protect democratic processes.

Additionally, the amendment clarified legal responsibilities of platforms by stating that failure to comply with new due diligence obligations could result in the loss of safe harbour protection, making platforms legally responsible for harmful content hosted on their services. The update also aligned references from the old Indian Penal Code to the newer Bharatiya Nyaya Sanhita, 2023,[3] ensuring consistency with India’s modern criminal law framework.

In simple terms, the amendment became necessary because the earlier rules were not strong enough to deal with modern digital threats. The new changes aim to create a safer online environment while balancing freedom of expression, accountability of platforms, and protection of users from digital harm.

IT Rules After Amendment (Post–February 2026 Framework)

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified on 10 February 2026 and brought into force from 20 February 2026 by the Ministry of Electronics and Information Technology (MeitY), mark a significant shift in India’s digital regulation policy. Unlike the earlier framework, which mainly reacted after harmful content appeared online, the amended rules move toward proactive regulation, especially addressing risks created by artificial intelligence and deepfake technologies.

The amendment recognizes that modern digital threats spread extremely quickly and therefore introduces stricter obligations, faster enforcement timelines, and clearer accountability mechanisms for social media platforms and other intermediaries.

  1. Regulation of Synthetically Generated Information (SGI)

For the first time in India, the law formally recognizes Synthetically Generated Information (SGI)[4] – content such as videos, images, or audio that is created or altered using artificial intelligence but appears real.

To prevent misuse of such technology, the amendment introduces:

  • Mandatory labelling: AI-generated content must clearly display visible labels so users know that the content is artificial.
  • Audio disclosures: AI-generated audio must include clear voice disclosures so listeners understand it is synthetic.
  • Traceability measures: Platforms must embed permanent digital identifiers (metadata or fingerprints) into AI-generated files to help trace their origin if misuse occurs.
  • Tampering prohibited: Removing or disabling these identifiers is strictly prohibited.
  • Reasonable exemptions: Normal editing practices like colour correction, noise reduction, accessibility tools, or academic training material that do not create false impressions are excluded from SGI regulation.

In simple terms, users should now be able to tell whether content is real or AI-generated.
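As an illustration of the labelling and traceability duties listed above: the Rules mandate a visible label and a permanent identifier but do not prescribe any technical standard, so the Python sketch below is purely hypothetical as to field names and format. It attaches a visible-label string and a tamper-evident SHA-256 fingerprint to a metadata record for a synthetic media file:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_sgi_metadata(content: bytes, generator: str) -> dict:
    """Build an illustrative provenance record for a synthetic media file.

    All field names here are hypothetical; the Rules require a visible label
    and a permanent identifier but prescribe no specific format.
    """
    return {
        "sgi": True,                              # marks the file as synthetically generated
        "label": "AI-GENERATED CONTENT",          # text for the mandatory visible label
        "generator": generator,                   # tool that produced the content
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "fingerprint": hashlib.sha256(content).hexdigest(),  # tamper-evident content hash
    }

record = make_sgi_metadata(b"<raw media bytes>", generator="example-model")
print(json.dumps(record, indent=2))
```

Any alteration of the underlying file changes the fingerprint, which is what makes such an identifier useful for the traceability and anti-tampering requirements.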

  2. Faster Takedown and Complaint Resolution Timelines

One of the biggest changes addresses how fast harmful content spreads online. To prevent viral misinformation or abuse, platforms must now act much faster:

  • 3-hour takedown rule: Platforms must remove or block unlawful content within three hours of receiving a government or court order (earlier allowed 36 hours).
  • 2-hour emergency removal: Highly sensitive content such as deepfake nudity or intimate imagery must be removed within two hours.[5]
  • Faster complaint handling: User complaints must now be acknowledged and resolved within seven days instead of fifteen.
  • Urgent cases: Complaints related to identity theft or serious harm must be resolved within 36 hours.

This ensures that harmful content is controlled before it spreads widely.
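The timelines above reduce to a simple deadline calculation from the moment an order or complaint is received. The Python sketch below is illustrative only; the category keys are our own shorthand, not terms defined in the Rules:

```python
from datetime import datetime, timedelta

# Response windows under the amended Rules (illustrative mapping; the
# category names are shorthand, not defined terms).
DEADLINES = {
    "court_or_government_order": timedelta(hours=3),
    "intimate_imagery_deepfake": timedelta(hours=2),
    "identity_theft_complaint": timedelta(hours=36),
    "ordinary_user_complaint": timedelta(days=7),
}

def compliance_deadline(received_at: datetime, category: str) -> datetime:
    """Return the latest time by which the platform must act."""
    return received_at + DEADLINES[category]

received = datetime(2026, 3, 1, 9, 0)
print(compliance_deadline(received, "court_or_government_order"))  # 2026-03-01 12:00:00
```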

  3. Stronger Responsibilities for Large Platforms (SSMIs)

Platforms with more than five million users, classified as Significant Social Media Intermediaries (SSMIs), now face stricter duties even before content becomes public.

New obligations include:

  • User disclosure: Users uploading content must declare whether it is AI-generated.
  • Technical verification: Platforms must use automated tools to verify such declarations before allowing publication.
  • Risk of liability: If platforms fail to label AI-generated content or miss the strict takedown timelines, they lose safe harbour protection under Section 79 of the IT Act.[6] This means they can become legally responsible for harmful user content.

This change makes platforms more accountable rather than allowing them to simply react after damage occurs.

  4. Regular User Awareness Notices

Platforms must now remind users every three months, instead of once a year, about platform rules and the legal consequences of posting unlawful content, including possible criminal liability under the updated criminal laws. This step aims to increase public awareness and encourage responsible online behaviour.

Overall Impact in Simple Terms

In simple language, the new amendments mean:

  • AI-generated fake videos and audio are now directly regulated.
  • Users must be informed when content is artificial.
  • Harmful content must be removed much faster.
  • Complaint systems must respond quicker.
  • Large platforms must check content more carefully.
  • Platforms can be held legally responsible if they fail to follow the rules.

Overall, the amendment moves India’s digital regulation from a reactive system to a preventive and accountability-driven framework, aiming to protect users while maintaining transparency and trust in online spaces.

Comparative Analysis

Aspect                  | Before Amendment              | After Amendment
Approach                | Reactive removal of content   | Proactive regulation of AI content
Takedown timeline       | 36 hours                      | 3 hours / 2 hours for urgent cases
AI/deepfake regulation  | Not clearly regulated         | Mandatory labelling & traceability
User complaints         | 15-day resolution             | 7 days; urgent cases within 36 hours
Platform liability      | Limited                       | Higher liability for non-compliance
Platform duties         | Basic compliance              | Pre-publication checks & verification

 

The February 2026 amendments significantly increase compliance responsibilities for online platforms by requiring faster removal of unlawful content and proactive regulation of AI-generated media. Platforms must now respond quickly to complaints, verify AI-generated content, and maintain greater transparency to avoid legal liability. For businesses, this means strengthening internal compliance and monitoring systems. Overall, the amendments aim to create a safer and more trustworthy digital environment while ensuring clearer accountability for online intermediaries.

Practical Impact:

The 2026 amendments mark a clear shift from the earlier “wait and act later” model to a system where platforms must actively monitor and respond to harmful content, particularly AI-generated material. The changes affect not only large technology companies but also content creators, users, and regulators. The practical consequences can be understood as follows:

  1. Impact on Social Media Platforms and Intermediaries

The amendments significantly increase compliance pressure on digital platforms operating in India. The earlier 36-hour response window has been replaced with a much stricter three-hour timeline for removal of unlawful content upon official direction. Failure to comply can result in loss of safe harbour protection, potentially exposing platforms to direct legal liability for user content.

As a result, many platforms are strengthening automated detection tools to identify AI-generated or manipulated content in real time and are expanding compliance teams to ensure immediate response to government or court orders. Several companies are also establishing round-the-clock response mechanisms in India to handle urgent takedown requests and regulatory communications.

  2. Impact on Content Creators and Influencers

Content creators and influencers must now exercise greater caution when using AI tools such as voice cloning, face-swapping, or synthetic video generation. AI-generated or altered content is required to carry proper disclosure, and failure to provide such disclosure may lead to content removal or account penalties.

At the same time, creators working in satire or parody sometimes face practical challenges, as even humorous or artistic content may require labelling if AI tools are used, potentially affecting creative presentation.

  3. Impact on General Users

For ordinary users, the amendments aim to provide faster protection against harmful online content. Victims of non-consensual deepfake imagery or similar misuse can now expect quicker removal of such content, reducing the period during which harmful material circulates online.

Users are also likely to see clearer indicators or labels identifying AI-generated or manipulated content, helping them distinguish authentic information from synthetic media and reducing the risk of falling victim to online fraud or misinformation. However, ongoing debates continue regarding privacy implications, particularly where traceability mechanisms may interact with encrypted communication services.

  4. Impact on Government Enforcement

From an enforcement perspective, the amendments provide authorities with faster mechanisms to address harmful or unlawful online content, especially during sensitive situations such as elections or public emergencies. The shortened timelines allow quicker intervention to prevent misinformation or harmful material from spreading widely.

Compliance Framework Under the IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026

The 2026 Amendments significantly strengthen the compliance obligations of digital platforms, especially in relation to AI-generated and harmful online content. The focus has shifted from reactive moderation to proactive responsibility, requiring platforms to adopt faster response systems, stronger monitoring tools, and clearer accountability mechanisms. An overview of the compliance framework is set out below:

  1. Strict Content Removal Timelines

Platforms must now act within clearly defined timelines:

  • Content must be removed within 3 hours when directed by a court order or government authority.
  • In urgent situations, such as non-consensual deepfake nudity or intimate imagery, action must be taken within 2 hours.
  • User complaints must be acknowledged within 7 days.
  • Serious complaints such as identity theft or impersonation must be resolved within 36 hours.

These timelines aim to reduce the viral spread of harmful content and provide quicker relief to victims.

  2. Mandatory Labelling and Traceability of AI Content

Where platforms allow AI-generated content:

  • Synthetic images and videos must carry clear labels informing viewers that the content is AI-generated.
  • Synthetic audio must include audio disclosures or visible warnings.
  • Platforms must embed metadata or unique identifiers to help trace the origin of synthetic content.
  • Such identifiers must be protected so users cannot remove or alter them.

Additionally, platforms must automatically block or prevent uploads involving:

  • Child sexual abuse material (CSAM), and
  • Non-consensual intimate imagery.

  3. Additional Duties for Large Platforms

Platforms with over 5 million users in India face stricter responsibilities:

  • Users must declare whether uploaded content is AI-generated.
  • Platforms must deploy automated tools to verify such declarations.
  • Failure to comply risks losing Section 79 Safe Harbour protection, exposing the platform to direct legal liability.

  4. Transparency and Reporting Obligations

To ensure user awareness and regulatory accountability:

  • Platforms must send periodic user notifications explaining platform policies and legal consequences of misuse.
  • Legal references must be updated to reflect new criminal law provisions, including replacing references to the IPC with the Bharatiya Nyaya Sanhita, 2023.
  • Serious offences involving synthetic content must be reported immediately to authorities.
  • Detailed records of content takedowns must be maintained for at least 180 days.
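The 180-day record-keeping duty above can be modelled as a minimum-retention check. The Python sketch below is illustrative; the record structure is hypothetical, since the Rules fix the minimum period but not any format:

```python
from dataclasses import dataclass
from datetime import date, timedelta

RETENTION = timedelta(days=180)  # minimum record-keeping period for takedown records

@dataclass
class TakedownRecord:
    content_id: str   # hypothetical internal identifier for the removed content
    removed_on: date  # date the takedown was carried out

    def may_purge(self, today: date) -> bool:
        """A record may only be deleted once the 180-day minimum has passed."""
        return today >= self.removed_on + RETENTION

rec = TakedownRecord("post-123", date(2026, 2, 20))
print(rec.may_purge(date(2026, 8, 19)))  # 180 days later -> True
```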

  5. Overall Compliance Expectation

The amendments make it clear that platforms are no longer mere intermediaries but are expected to actively safeguard the digital ecosystem. Compliance now requires:

  • Real-time moderation capabilities,
  • Robust user grievance systems,
  • Clear AI-content identification mechanisms, and
  • Continuous monitoring and reporting practices.

 

The Way Forward

Looking ahead, several developments are likely to shape how these rules operate in practice:

  1. Technology Will Drive Compliance

Platforms will need to move beyond basic filters and invest in advanced AI-detection systems capable of identifying synthetic content automatically. Industry adoption of global content authenticity standards and metadata verification tools will become essential rather than optional.

  2. Courts Will Play a Crucial Role

As disputes arise, courts will need to clarify where legitimate creative expression ends and harmful synthetic manipulation begins. Clear judicial guidance will be necessary to protect satire, parody, and artistic freedom while preventing malicious misuse.

  3. Need for Global Coordination

Since AI-generated content easily crosses borders, India’s regulatory push may encourage international cooperation on traceability and authenticity standards so that safeguards work consistently across jurisdictions.

  4. Public Awareness Must Complement Regulation

Rules alone cannot eliminate misinformation. Public awareness campaigns will be necessary so users learn to recognize AI labels and authenticity indicators just as easily as they recognize verified accounts today.

  5. Transition Toward a Comprehensive Digital Law

These amendments are widely seen as an interim step toward the forthcoming Digital India Act, which is expected to create a broader and more permanent framework governing online platforms, digital rights, and emerging technologies.

 

Authored by

Vijay Pal Dalmia, Advocate

Supreme Court of India & Delhi High Court

Email id: [email protected]
Mobile No.: +91 9810081079

Linkedin: https://www.linkedin.com/in/vpdalmia/
Facebook: https://www.facebook.com/vpdalmia
X (Twitter): @vpdalmia

[1] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021

[2] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified by MeitY on 10 February 2026.

[3] Bharatiya Nyaya Sanhita (BNS), 2023, Government of India – replaces IPC references in digital content regulation.

[4] IT Rules Amendment, 2026 – provisions relating to Synthetically Generated Information (SGI)

[5] IT Rules Amendment, 2026 – amended due diligence obligations regarding expedited takedown timelines.

[6] Section 79, Information Technology Act, 2000 – Safe Harbour protection for intermediaries.

Legalaxy – Monthly Newsletter Series – Vol XXXIII – February, 2026

In the February edition of our monthly newsletter “Legalaxy”, our team analyses some of the key developments in the securities market, banking and finance, and environment sectors.

Below are the key highlights of the newsletter:

SEBI UPDATES

  • SEBI simplifies accreditation norms for accredited investors under the AIF framework

RBI & IFSC UPDATES

  • RBI notifies the Foreign Exchange Management (Guarantees) Regulations, 2026
  • RBI notifies the Foreign Exchange Management (Export and Import of Goods and Services) Regulations, 2026
  • IFSCA clarifies on filing of scheme application under third-party fund management arrangement
  • IFSCA notifies the Fund Management Amendment Regulations, 2026

ENVIRONMENTAL UPDATES

  • MoEFCC notifies the Environmental (Protection) Fund Rules, 2026
  • The Water Pollution (Grant, Refusal or Cancellation of Consent) Guidelines, 2025 and the Air Pollution (Grant, Refusal or Cancellation of Consent) Guidelines, 2025 – Amended
  • Common effluent treatment plants exempted from obtaining prior environmental clearance
  • MoEFCC notifies the Solid Waste Management Rules, 2026


Communication and Furnishing Written Grounds of Arrest in India

The landmark judgment in Pankaj Bansal v. Union of India (2023 INSC 866) by the Supreme Court of India firmly establishes that the communication of grounds of arrest to an arrested person is not merely a procedural formality but a substantive constitutional and statutory safeguard that must be meaningfully discharged. The Court held that henceforth, written grounds of arrest must be furnished to the arrested person as a matter of course and without exception.

Article 22(1) of the Constitution provides the foundational protection against arbitrary arrest and detention. It states:

“No person who is arrested shall be detained in custody without being informed, as soon as may be, of the grounds for such arrest, nor shall he be denied the right to consult, and to be defended by, a legal practitioner of his choice.”

The Supreme Court emphasized that this constitutional provision guarantees a fundamental right to every arrested person. The mode of conveying information of the grounds of arrest must necessarily be meaningful so as to serve the intended purpose.

Reference may also be made to Section 50 Cr.P.C. (now Section 47 BNSS), which provides that every police officer or other person arresting any person without a warrant shall forthwith communicate to him the full particulars of the offence or other grounds for such arrest, as well as Section 50A Cr.P.C. (now Section 48 BNSS), which imposes an obligation on the arresting officer to inform a nominated person about the arrest and mandates that such information must be recorded in a register maintained at the police station.

Section 19(1) of the Prevention of Money Laundering Act, 2002 provides:

“If the Director, Deputy Director, Assistant Director or any other officer authorised in this behalf by the Central Government by general or special order, has on the basis of material in his possession, reason to believe (the reason for such belief to be recorded in writing) that any person has been guilty of an offence punishable under this Act, he may arrest such person and shall, as soon as may be, inform him of the grounds for such arrest.”

The Court noted that Section 19(1) contains two critical components: (i) the requirement of recording in writing the reason to believe that the person is guilty of an offence, and (ii) the obligation to inform the arrested person of the grounds of arrest as soon as may be.

The Supreme Court articulated the higher purpose that the communication of grounds of arrest serves, and held:

“This being the fundamental right guaranteed to the arrested person, the mode of conveying information of the grounds of arrest must necessarily be meaningful so as to serve the intended purpose. It may be noted that Section 45 of the Act of 2002 enables the person arrested under Section 19 thereof to seek release on bail but it postulates that unless the twin conditions prescribed thereunder are satisfied, such a person would not be entitled to grant of bail. It is only if the arrested person has knowledge of these facts that he/she would be in a position to plead and prove before the Special Court that there are grounds to believe that he/she is not guilty of such offence, so as to avail the relief of bail.”

The Court thus recognized that communicating grounds of arrest is intrinsically linked to the arrested person’s ability to exercise the right to bail, which itself is a facet of personal liberty guaranteed under Article 21.

The Court examined Rule 6 of the Prevention of Money Laundering (The Forms and the Manner of Forwarding a Copy of Order of Arrest) Rules, 2005, which prescribes Form III as the format for the Arrest Order. This format explicitly mentions that the arrested person “has been informed of the grounds for such arrest.” The Court found it incongruous that while this written format is followed uniformly across the country, the manner of informing arrestees varies. In some parts, written grounds are furnished, while in others, grounds are merely read out.

The Supreme Court articulated two primary reasons why written grounds of arrest should be furnished as a matter of course and without exception:

Evidentiary Certainty and Avoiding Disputes

The Court observed:

“Firstly, in the event such grounds of arrest are orally read out to the arrested person or read by such person with nothing further and this fact is disputed in a given case, it may boil down to the word of the arrested person against the word of the authorized officer as to whether or not there is due and proper compliance in this regard. Non-compliance in this regard would entail the release of the arrested person straightaway, as held in V. Senthil Balaji. Such a precarious situation is easily avoided, and the consequence thereof can be obviated very simply by furnishing the written grounds of arrest, as recorded by the authorized officer in terms of Section 19(1) PMLA, to the arrested person under due acknowledgement, instead of leaving it to the debatable ipse dixit of the authorized officer.”

This reasoning recognises the practical reality that disputes about compliance can easily arise when the grounds are communicated only orally or merely shown to the arrested person to read.

Enabling Effective Exercise of Right to Seek Bail

The Court’s second and more fundamental reason related to the constitutional objective underlying the communication of grounds:

“The second reason as to why this would be the proper course to adopt is the constitutional objective underlying such information being given to the arrested person. Conveyance of this information is not only to apprise the arrested person of why he/she is being arrested but also to enable such person to seek legal counsel and, thereafter, present a case before the court under Section 45 to seek release on bail, if he/she so chooses. In this regard, the grounds of arrest in V. Senthil Balaji (2024 INSC 739) are placed on record and we find that the same run into as many as six pages… it would be well-nigh impossible for either Pankaj Bansal or Basant Bansal to record and remember all that they had read or heard being read out for future recall so as to avail legal remedies. More so, as a person who has just been arrested would not be in a calm and collected frame of mind and may be utterly incapable of remembering the contents of the grounds of arrest read by or read out to him/her.”

Based on the above analysis, the Supreme Court held:

“On the above analysis, to give true meaning and purpose to the constitutional and the statutory mandate of Section 19(1) of the Act of 2002 of informing the arrested person of the grounds of arrest, we hold that it would be necessary, henceforth, that a copy of such written grounds of arrest is furnished to the arrested person as a matter of course and without exception.”

The Court explicitly overruled two High Court decisions that had held to the contrary. It declared that the decisions of the Delhi High Court in Moin Akhtar Qureshi v. Union of India (https://indiankanoon.org/doc/190585689/) and the Bombay High Court in Chhagan Chandrakant Bhujbal v. Union of India (https://indiankanoon.org/doc/138917199/), which held that there was no requirement to furnish written grounds, “do not lay down the correct law”.

Prabir Purkayastha v. State (NCT of Delhi)

The principles laid down in Pankaj Bansal were reiterated and reinforced by the Supreme Court in Prabir Purkayastha v. State (NCT of Delhi) (2024 INSC 414). The Court held:

“The language used in Article 22(1) and Article 22(5) of the Constitution of India regarding the communication of the grounds is exactly the identical. Neither of the constitutional provisions require that the ‘grounds’ of ‘arrest’ or ‘detention’, as the case may be, must be communicated in writing. Thus, interpretation to this important facet of the fundamental right as made by the Constitution Bench while examining the scope of Article 22(5) of the Constitution of India would ipso facto apply to Article 22(1) of the Constitution of India insofar as the requirement to communicate the grounds of arrest is concerned. Hence, we have no hesitation in reiterating that the requirement to communicate the grounds of arrest or the grounds of detention in writing to a person arrested in connection with an offence or a person placed under preventive detention as provided under Articles 22(1) and 22(5) of the Constitution of India is sacrosanct and cannot be breached under any situation. Non-compliance of this constitutional requirement and statutory mandate would lead to the custody or the detention being rendered illegal, as the case may be.”

Vihaan Kumar v. State of Haryana

The landmark decision in Vihaan Kumar v. State of Haryana (2025 INSC 162), delivered on February 7, 2025, further elaborated on the requirements of Article 22(1). Justice Abhay S. Oka, writing for the Bench, laid down comprehensive principles:

“Therefore, as far as Article 22(1) is concerned, compliance can be made by communicating sufficient knowledge of the basic facts constituting the grounds of arrest to the person arrested. The grounds should be effectively and fully communicated to the arrestee in the manner in which he will fully understand the same. Therefore, it follows that the grounds of arrest must be informed in a language which the arrestee understands. That is how, in the case of Pankaj Bansal, this Court held that the mode of conveying the grounds of arrest must necessarily be meaningful so as to serve the intended purpose.”

The Court conclusively held that the requirement of informing grounds of arrest is “not a formality but a mandatory constitutional requirement”.

Consequences of Non-Compliance

The Supreme Court in Vihaan Kumar articulated the severe consequences of failing to comply with Article 22(1):

“Non-compliance with Article 22(1) will be a violation of the fundamental rights of the accused guaranteed by the said Article. Moreover, it will amount to a violation of the right to personal liberty guaranteed by Article 21 of the Constitution. Therefore, non-compliance with the requirements of Article 22(1) vitiates the arrest of the accused. Hence, further orders passed by a criminal court of remand are also vitiated. Needless to add that it will not vitiate the investigation, charge sheet and trial. But, at the same time, filing of chargesheet will not validate a breach of constitutional mandate under Article 22(1).”

The Court further held that when a violation of Article 22(1) is established, “it is the duty of the court to forthwith order the release of the accused. That will be a ground to grant bail even if statutory restrictions on the grant of bail exist”.

Additional Safeguard: Communication to Relatives

Justice Nongmeikapam Kotiswar Singh, in his concurring opinion in Vihaan Kumar, emphasised an additional dimension to the communication requirement under Section 50A of the CrPC:

“The purpose of inserting Section 50A of the CrPC, making it obligatory on the person making arrest to inform about the arrest to the friends, relatives or persons nominated by the arrested person, is to ensure that they would [be] able to take immediate and prompt actions to secure the release of the arrested person as permissible under the law. Hence, the requirement of communicating the grounds of arrest in writing is not only to the arrested person, but also to the friends, relatives or such other person as may be disclosed or nominated by the arrested person, so as to make the mandate of Article 22(1) of the Constitution meaningful and effective failing which, such arrest may be rendered illegal.”

The decision in Pankaj Bansal v. Union of India recognises several fundamental truths about the arrest process:

  • An arrested person is in a vulnerable psychological state and cannot be expected to remember voluminous grounds read out orally.
  • The ability to seek bail effectively depends on having written grounds available for reference and consultation with legal counsel.
  • Oral communication creates evidentiary disputes that can be easily avoided through written documentation.
  • The statutory scheme itself mandates a written recording of reasons for arrest, making written communication the logical corollary.

Reference may also be made to Mohammed Ajmal Mohammad Amir Kasab @ Abu Mujahid v. State of Maharashtra, (2012) 8 S.C.R. 295 (paras 484–488).

Authored By
Vijay Pal Dalmia, Advocate
Supreme Court of India & Delhi High Court

Email id: [email protected]
Mobile No.: +91 9810081079

LinkedIn: https://www.linkedin.com/in/vpdalmia/
Facebook: https://www.facebook.com/vpdalmia
X (Twitter): @vpdalmia