Much has been published on lawyers and AI. My colleagues recently offered insights on how AI models can be applied in litigation and transactional legal work, and on how AI regulation has already begun to take shape.
Too often, the coverage involves a lawyer somewhere who asked ChatGPT to prepare a brief for submission to a court, complete with 'hallucinated' case citations and no oversight from the lawyer. Inevitably, the court finds out and the lawyer is admonished. These stories seem to pop up every six months or so.
This type of AI usage holds little appeal for strategic and careful lawyers; most successful attorneys take pride in their analytical abilities and strong writing skills. Nevertheless, AI, and large language models (LLMs) in particular, continues to find new applications in everyday life, including in the practice of law.
One method we have not seen much of yet, but have been exploring thoroughly, is using AI as a thought partner. In this scenario, the user asks the AI to consider a problem, issue, or topic the user generates themselves. The AI responds as it will: perhaps it offers various perspectives on the problem, or an overview of the competing interests at play. The user then draws on that response to inquire further and to refine and focus their own thinking on the topic, supplemented by independent, non-AI research.
This is not the typical use of AI. Most users treat AI like an extended Google: "What is the tallest building in NYC?" "Where can I get a new computer near me?" And so on.
This is all fine. But it isn't the most interesting use of AI, and I believe it sells the technology's potential short. Consider a different application.
For example, if I have a case where a worker alleges that he was given a particular item of equipment ("x") for use in his construction work, but a different item ("y") would have been safer, I would ask AI to compare the two and determine whether one could be safer than the other.
Note here that I do not care very much about the answer. I am not asking AI to confirm the conclusion. I am asking for different reasons. I am requesting a framework on the topic that I do not currently have. I do not independently know whether x is safer than y, or vice versa, and I am not an expert on x or y.
The core benefit here is that, even if AI's conclusion that y is safer than x is later disproven, it has still furnished a framework for comparison: a starting point for analysis I simply didn't have before. I can also review and verify the supporting information it provides. More importantly, I can now have a more focused and productive conversation with an expert, because I'm coming in with a clearer understanding of the topic, allowing me to gain an edge on my adversaries and provide value to the client that others are not offering.
In this way, AI has helped me think about the problem without doing the thinking for me. As the end user, I will always have more work to do; I will still need to think the problem through thoroughly. But AI has narrowed, and perhaps clarified, my focus, and that is where it adds significant value to my work.
Ultimately, AI doesn't think for me; it acts as a lens to clarify and target my own thinking. That seems immensely helpful and suggests limitless potential in the legal field and beyond.
