
When AI Chatbots Go Wrong: The Courtroom Edition

Since November of last year, there has been quite the buzz about generative AI and products like ChatGPT. Its ability to rapidly generate answers drawn from vast amounts of training data can make life more efficient, and many students as well as professionals have begun to leverage ChatGPT and similar products to their advantage. Some even say these generative AI tools could revolutionize search and may become as ubiquitous as computer use itself. But there are some notable pitfalls of which any user should be aware, and a recent case involving the legal uses of ChatGPT demonstrates them well.

Generative AI is great for content, but not so great for accurate case law citations.

(Bold warned you: even though an AI chatbot can pass the bar exam, you should still get a human lawyer!)

In May, an attorney representing a plaintiff in a personal injury lawsuit was preparing a routine legal brief. For the most part, everything was straightforward, and the brief was filed in court. But it soon became apparent that the attorney had used ChatGPT for a little assistance along the way. Unfortunately, several of the cases cited within the legal brief didn’t exist. In other words, they were fabricated by generative AI, which subsequently landed the attorney in hot water. Given that many law firms are now pursuing legal uses of ChatGPT in their practices, the discovery received significant attention. And of course, the attorney himself may be facing serious sanctions as a result.

“You are ultimately responsible for the representations you make. It’s your bar card.” – Daniel Martin Katz, Law Professor at Chicago-Kent College of Law

Details of the Case

One might think the lawsuit at the center of this case was complex enough to warrant the use of generative AI. But in reality, the legal complaint was pretty straightforward. The plaintiff, represented by attorney Steven A. Schwartz, alleged a metal serving cart had injured his leg during a flight. The defendant, Avianca Airlines, refuted the claim and tried to have the case thrown out under the statute of limitations. The brief prepared by Schwartz argued against both of Avianca’s positions by citing several precedent cases. But when the attorneys for Avianca attempted to locate these precedents, they couldn’t. In fact, six of the cited cases were never found, which alerted them to the possibility that legal uses of ChatGPT might be responsible.

When the information was presented to the judge in the case, Mr. Schwartz was asked about his preparations. It was then that he acknowledged he had used generative AI in preparing the legal brief. But he also stated he had no idea that ChatGPT and other AI platforms might make up legal cases. He was under the impression that ChatGPT was simply a search tool and that, therefore, legal uses of ChatGPT were reasonable. However, neither Mr. Schwartz nor his team verified that the cases were real or even supported their position. Understanding this, the judge in the case is now considering sanctions against Mr. Schwartz as well as his firm. The judge’s decision may have far-reaching repercussions for legal firms across the nation as a result.

(Dig into a comparative breakdown of the various AI chatbot programs, courtesy of Bold.)

If you use generative AI to do your legal research, you deserve that contempt of court charge.

“Blindly relying on generative AI to give you the text you use to provide services to your client is not going to pass muster.” – Andrew Perlman, Dean of Suffolk University Law School

Generative AI and Legal Ethics

While the attorney in this specific case pleaded ignorance, that defense doesn’t necessarily fly when it comes to legal ethics. The American Bar Association does not have guidelines specific to the use of generative AI. However, it does have a number of relevant ethical principles that apply to legal uses of ChatGPT and other AI supports. The most notable one requires that attorneys provide competent representation for their clients. Given that the legal brief contained so many falsities, it’s hard to argue the legal efforts in this situation were competent. Because the AI-generated text was never fact-checked, the client’s position in this case was negatively affected.

Competency is not the only core ethical principle relevant to the current case involving generative AI use. The Bar also requires members to stay up-to-date on the latest technologies and their use. This means understanding not only the advantages they might offer but also their pitfalls. It is well recognized that ChatGPT is prone to biases and “hallucinations” of fake data. The Bar likewise requires lawyers to supervise subordinate lawyers and non-lawyers when overseeing a case, and this includes oversight of non-human tools and supports as well. Finally, attorneys are to protect client confidentiality. When legal uses of ChatGPT involve client data, there are risks here too, since that data is being handed over to the AI companies themselves. In considering the Bar’s Code of Ethics, it would appear Mr. Schwartz’s actions fell short on many levels.

Judicial Guidance Moving Forward

The legal uses of ChatGPT should begin and end at limited research outside of actual case law.

The current climate pertaining to legal uses of ChatGPT is highly dynamic. Many firms have already developed policies and procedures for using generative AI in their practices. In one instance, a start-up called DoNotPay launched a platform to help individuals get out of paying tickets. Its platform involved wearing smart glasses that fed users text to say in court in real time. While the platform has not yet been used in an actual courtroom, these developments and Mr. Schwartz’s situation highlight the complexities involved. For attorneys to adopt best practices in the legal uses of ChatGPT, much remains to be learned. But for now, trial and error seems to be the preferred route.

In light of Mr. Schwartz’s situation, some judges are being proactive in their courtroom approach to generative AI. In the Northern District of Texas, a federal judge now requires attorneys to be transparent about its use. Attorneys must attest either that no portion of a legal filing was drafted by generative AI or, if it was, that the AI-generated language was flagged and checked for accuracy by a human. While this is one approach, such practices are not likely feasible long-term. If legal uses of ChatGPT do become prevalent, the efficiencies gained might be eliminated by the need to fact-check every generative AI text. Certainly, safeguards and ethical practices are needed with legal uses of ChatGPT. But avoiding the technology altogether is not likely the best answer.


