News & Insights

1 minute read

The Rise of AI-Assisted Litigation

As AI programs such as ChatGPT, Gemini, and Perplexity Pro become more accessible to the public, the use of these tools in the legal sphere has increased dramatically. Both pro se litigants and attorneys are turning to ChatGPT and other AI tools to perform a variety of legal tasks, including summarizing statutes and case law, drafting motions and discovery, and negotiating with an opposing party. While AI can save litigants significant time and effort, these tools must be used cautiously because of their tendency to hallucinate.

Generative AI programs are not actual thinking machines, despite "AI" standing for "Artificial Intelligence." They are algorithmically powered applications trained on millions of words scraped from social media, databases, and websites such as Reddit and Wikipedia. Drawing on that training data, AI predicts the most applicable and appropriate words to generate in response to queries and prompts. Therefore, while AI can be accurate and helpful, it can also generate hallucinations that appear correct despite being entirely made up.

When it comes to legal research and writing, AI tools can be both useful and damaging. When an AI tool is fed case law, statutes, and legal research, it can accurately summarize complex legal jargon and identify specific relevant language. However, AI can also hallucinate false rules and case law that sound accurate and authoritative. Users must always verify any references made by AI to determine whether the cited case law or statute accurately supports the legal argument. A litigant's failure to check the references in their filed pleadings can result in sanctions when the courts discover such citations. Recently, in the California Court of Appeal, Second Appellate District, an attorney was sanctioned $10,000 for filing an appeal in which 21 of the 23 quotes from cited cases were hallucinated by generative AI. (Noland v. Land of the Free, L.P., B331918.)

Artificial Intelligence is a powerful tool that can help level the playing field for pro se plaintiffs with little legal experience. It is also a helpful way to research and brainstorm new legal strategies and arguments. But it is ultimately just a tool, and litigants must use it carefully to avoid the pitfalls that could result in sanctions by the courts.

For litigants, AI hallucinations can lead to pricey penalties. Jack Owoc, a colorful Florida-based energy drink mogul who lost a false advertising case to the tune of $311 million and is now representing himself, was recently sanctioned for filing a court motion containing 11 AI-hallucinated citations to court cases that do not exist.

Tags

insight, professional-liability, business-and-commercial-litigation, menlo park, ai, legal ai, litigation, trials