The Case That Didn’t Exist: ‘Mercy vs. Mankind’ and AI’s Challenge to the Bar

In the halls of the Supreme Court of India, a new and invisible litigant has entered: the “hallucination.” During a recent hearing on political speeches, the bench was confronted not with a novel argument of law, but with a spectre. Justice B.V. Nagarathna flagged a cited precedent titled “Mercy vs. Mankind.” The problem was that the case does not exist. It was a phantom judgment, a digital fabrication invented by an Artificial Intelligence tool and pasted into a legal pleading without verification.

This incident is not a technological glitch; it is a jurisprudential crisis. Chief Justice of India Surya Kant described the trend of lawyers drafting petitions with AI without verification as alarming and “absolutely uncalled for”. As the Indian judiciary digitises, the legal profession faces a new hurdle: the rise of AI-generated “hallucinations”, plausible-sounding but entirely fictitious legal authorities, which threatens to erode the sanctity of the courtroom, challenging the Bar Council of India’s standards and the fundamental accountability of the advocate.

The Phantom Precedent

The “Mercy vs. Mankind” incident is merely the tip of the iceberg. During the same session, CJI Kant noted that in Justice Dipankar Datta’s court, “not one but a series of such judgments were cited,” all of which were found to be fabricated. These are not merely errors of citation; they represent a fundamental ignorance of professional duty.

AI tools, particularly Large Language Models (LLMs), operate as predictive text engines, not truth engines. They predict the next likely word in a sequence based on vast training data. When asked for case law, they often “hallucinate”: they invent citations that look authoritative, adopting the correct formatting, party names, and even judicial voice, but that have no existence in reality.

In January 2026, the Bombay High Court in Deepak s/o Shivkumar Bahry vs Heart & Soul Entertainment Ltd. imposed costs of ₹50,000 on the advocate for submitting written submissions filled with “green-box tick-marks” and “bullet-point-marks” typical of AI output. The submission cited a non-existent case, Jyoti w/o Dinesh Tulsiani vs. Elegant Associates. Justice Milind Sathaye observed: “This practice of dumping documents/submissions on the court… must be deprecated and nipped at bud”.

The Ethical Duty

This reliance on AI and the submission of unverified authorities constitutes an ethical failure under the Advocates Act, 1961. While the Bar Council of India (BCI) has yet to codify specific rules for AI usage, the existing framework is clear regarding the “Duty to the Court.”

An advocate is an officer of the court. Submitting a fabricated judgment, even unintentionally, would constitute professional misconduct. It violates the core idea of not misleading the judiciary. From a comparative perspective, the American Bar Association (ABA) Model Rules emphasize “Competence” (Rule 1.1) and “Candor to the Tribunal” (Rule 3.3).

In India, the duty of competence is implicit. When a lawyer submits a petition drafted by AI without verification, they are effectively outsourcing their intellect to a probabilistic machine that cannot be held accountable. As Justice Nagarathna observed, this practice creates an “additional burden on the judges,” who must now scrutinise the authenticity of every paragraph to ensure it actually exists in the cited judgment. This inversion of duty, in which the bench has to verify the bar’s basic research, is unsustainable.

The BCI Rules on professional standards require advocates to act with dignity and integrity. There is no dignity in a submission that invents law. The Bombay High Court in the KMG Wires tax assessment case quashed an order precisely because the Assessing Officer relied on “completely non-existent” case laws generated by AI, calling it a violation of principles of natural justice. If a tax officer can be condemned for this, an advocate, whose primary currency is legal authority, should face a higher threshold of liability.

Procedural Safeguards

The intrusion of AI hallucinations poses a direct challenge to the procedural safeguards enshrined in Indian law, particularly regarding the admissibility and reliability of information.

The judiciary is responding with defensive measures. In July 2025, the Kerala High Court issued a “Policy Regarding Use of Artificial Intelligence Tools in District Judiciary,” mandating that under no circumstances should AI be used as a substitute for legal reasoning.

Similarly, in the United States, in Mata v. Avianca, the judge penalised lawyers $5,000 for submitting a brief with fake citations generated by ChatGPT. The judge noted that while using AI is not inherently improper, “existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings”. Indian courts are echoing this sentiment. Justice Surya Kant has remarked that while technology should guide, “the human [must] govern”.

The Future of ‘Due Diligence’: A Framework for AI-Assisted Advocacy

The legal fraternity cannot remain indifferent to this situation. We need a new framework for “AI-Assisted Advocacy” that integrates technology without diluting the liability of the advocate. The following safeguards are suggested:

1. Every citation generated by an LLM must be cross-referenced against authoritative databases like the Supreme Court Reports (SCR), SCC Online, or Manupatra.

2. Following the lead of certain US District Courts, Indian courts should consider Practice Directions requiring advocates to certify that any AI-generated portion of a filing has been verified by a human. This forces the lawyer to pause and acknowledge their gatekeeping role.

3. The use of AI to draft substantive legal arguments without human restructuring should be viewed as professional negligence.

4. The Bar Council should clarify that a plea of “technology error” is no defence against professional misconduct. If a fake case is cited, the signing advocate must be held solely liable.

Conclusion

The era of “phantom precedents” is here. The Mercy vs. Mankind citation may sound almost poetic, a struggle between compassion and humanity, but in a court of law, it is a hazard.

The Bar Council of India should move beyond general circulars and integrate specific AI-literacy and ethics modules into the legal curriculum and continuing legal education. Trust in the legal system rests on the shared understanding that the law we cite is real. If advocates abdicate this duty to AI, they will not just fail their clients; they will fail the Constitution.

As Justice Surya Kant remarked, “justice is a profoundly human effort”. It is time for the Bar to ensure it remains one.


Author Bio- Pulkit Verma
2nd Year, B.A. LL.B. (Hons.)
Five-year Integrated Law Course (ILC)
Faculty of Law, University of Delhi