The dramatic increase in the use of AI by lawyers and pro se parties in a litigation context, along with the risks that can arise from relying too heavily on such tools, was recently discussed in this insightful article by Ropers Majeski Senior Associate Eric Kim. These risks are also readily apparent in the transactional context. Corporate and transactional legal professionals must be equally vigilant when incorporating AI into their practice.
The primary benefit of AI lies, of course, in its ability to process and analyze vast amounts of information quickly. Inputting complex legal language or documents into a program such as ChatGPT, Google Gemini, or Claude and having a short, bulleted list of key concepts generated within seconds is incredibly tempting, both for busy attorneys and for individuals looking to limit their legal costs.
However, such output is far from infallible. Indeed, in preparing this article, I fed a straightforward yet detailed clause from a founder's restricted stock purchase agreement, concerning the company's repurchase option, into one such popular program and asked it to summarize the option.
In its response, the program completely overlooked the fact that the repurchase option applied only to unvested shares, a limitation clearly stated in the cited language. When pressed as to how such an error occurred, the program offered a number of rationalizations: it had "oversimplified" the situation, "focused on a separate context of the issue at hand," and "failed to parse the precise language."
A user relying solely on this analysis of the clause would have been left with a completely inaccurate understanding of the company's and the founder's respective rights to the founder's shares—a significant component of any future negotiations between the parties. (And, no, despite the use of an em dash here, this article was not AI-generated.)
The need for vigilance when incorporating AI into legal practice is not a new concept. In February 2023, the American Bar Association (ABA), recognizing the need for AI and similar technologies to be developed and used in a "trustworthy and responsible manner," issued a set of guidelines and recommendations for users in the legal field. Those guidelines emphasize accountability and human oversight, authority, and control, and they provide an important framework for using AI responsibly in daily practice.
In addition, there are a few basic rules users can keep in mind to ensure AI is used both ethically and effectively:
- Never input client identities or other confidential or proprietary information into a generative AI program.
- The output of an AI program is only as good as the information fed into it, and AI programs have been known to incorporate biased and discriminatory information into their output. Users must therefore be careful not to perpetuate these biases in their final work product.
- Scrutinize and cross-check all output generated by the program before incorporating it into your final product; do not blindly assume the program has understood or incorporated all nuances of a situation.
While AI is undoubtedly a helpful tool that is sure to become an increasingly routine part of legal practice, the need for accountability and human oversight clearly remains.