Channel: Lawyer Blog & News | Clio

AI Hallucinations: a Costly Mistake for Lawyers


We’ve written extensively on how lawyers can use AI responsibly—and, as yet another court decision is released addressing lawyers’ use of AI, it’s never been more important to understand the risks and limitations of AI use. 

Most recently, lawyers from Morgan & Morgan—the largest personal injury law firm in the United States—were sanctioned for submitting court filings that contained cases “hallucinated” by artificial intelligence. 

Below, we’ll explain what hallucinations are, review the court’s decision to sanction Morgan & Morgan’s lawyers, and provide tips for mitigating risk when working with AI for legal research.

No matter where you are in your legal AI journey, Clio Duo, our proprietary AI technology, can help you run your practice more efficiently while adhering to the highest security standards. Learn how it works.

What are AI hallucinations? 

AI hallucinations occur when a large language model (LLM) generates false or misleading information that, on its face, appears plausible. In the context of legal research, this means that AI can—and does—produce case law, statutes, or legal arguments that don’t exist. And, if lawyers rely on AI-generated research without independently verifying that the research is accurate, they risk misinterpreting the law and facing serious professional and ethical consequences—like the sanctions imposed on Morgan & Morgan’s lawyers. 

What happened with Morgan & Morgan’s AI usage? 

Morgan & Morgan was representing a plaintiff in a products liability case. While drafting motions in limine, one of the lawyers involved used the firm’s in-house AI platform to locate cases setting forth the requirements for such motions. The AI platform generated citations—but, it turned out, the cases weren’t real. Further, the drafting lawyer didn’t verify the accuracy of the cases before signing the filings along with two other lawyers. 


How did the court respond to the hallucinations? 

The court issued a show cause order asking why the plaintiffs’ lawyers shouldn’t be sanctioned or face disciplinary action. The plaintiffs’ lawyers admitted that the cases they had cited didn’t exist and that they had relied on AI without taking steps to verify the accuracy of the information in the motions. 

At the time the order was released, the plaintiffs’ lawyers had already withdrawn their motions in limine, been “honest and forthcoming” about their AI use, paid opposing counsel’s fees for defending the motions in limine, and implemented internal firm policies to prevent such errors from occurring in the future. 

The court noted its appreciation for the steps the plaintiffs’ lawyers had taken, and recommended that lawyers in similar situations take the same steps, at a minimum, to remediate AI hallucinations before sanctions are issued. 

What does the law say about sanctions and AI hallucinations? 

Under Rule 11 of the Federal Rules of Civil Procedure, by presenting a pleading or motion to the court (whether by signing, filing, submitting, or advocating for it), a lawyer certifies that, to the best of their knowledge, the legal contentions in the pleading or motion are supported by existing law or a nonfrivolous argument. Failing to comply with this requirement can result in sanctions. 

When determining whether sanctions are warranted under Rule 11, the court will evaluate the situation with “objective reasonableness.” In other words, if a reasonable inquiry finds that the legal contentions in the pleading or motion are not supported by existing law, then a violation of Rule 11 has occurred. And, where a violation of Rule 11 has occurred, the court will impose an appropriate sanction. 

How the court applied the law to the case

The court determined that the lawyers’ conduct violated Rule 11. Notably, the lawyers didn’t dispute that they had cited AI-hallucinated cases in the brief. Furthermore, the court noted that signing a legal document indicates that the lawyer read it and conducted a reasonable inquiry into the existing law. Although two of the three lawyers were not involved in drafting, signing without further investigating the content of the motions amounted to an improper delegation of their duty under Rule 11. 

Having found that the lawyers violated Rule 11, the court imposed sanctions against all three lawyers. The drafting lawyer was fined $3,000 and had his temporary admission revoked (he was licensed in another state and had been granted permission to practice in the state for the case in question), while the other two lawyers were fined $1,000 each. 

Final thoughts on the Morgan & Morgan AI case

Ultimately, the court said it best: “The key takeaway for attorneys is simple: Make a reasonable inquiry into the law before signing (or giving another person permission to sign) a document, as required by Rule 11.” While the court acknowledged that AI is a powerful tool for legal professionals and can significantly reduce the time spent on legal research and drafting, at the end of the day, lawyers still need to confirm the accuracy of their work product. 

Ideally, lawyers will not run into similar issues with AI hallucinations in the future—but, when it happens, it’s best to take immediate action. The court acknowledged as much, expressing its appreciation for the steps taken by the plaintiffs’ lawyers and recommending that, at a minimum, lawyers in similar situations should: 

  • Withdraw their motions;
  • Be honest and forthcoming about their AI use;
  • Pay any fees incurred by opposing counsel in responding to the motions; and
  • Take internal steps to prevent such errors from occurring in the future. 

And don’t forget: Legal-specific AI tools can play an important role in helping your firm adopt AI responsibly. For example, Clio Duo, our proprietary AI technology, can help enhance efficiency while safeguarding client data. Book your free demo today! 
