"Dave, I don't know how else to put this, but it just happens to be an unalterable fact that I am incapable of being wrong." – HAL 9000, 2001: A Space Odyssey.
If you are old enough to remember this film, you may have some misgivings about the current and future use of artificial intelligence ("AI") in virtually all industries, including its current boom in the legal field. Recent news about lawyers who used ChatGPT to draft a brief and, as a result, inadvertently cited fake cases certainly justifies the concern. The judge in that case reprimanded the attorneys and fined them for "bad faith" conduct. The lawyers denied any bad faith and maintained that they acted on a good faith belief in the technology, failing to recognize that it could generate nonexistent cases.
Many companies are hard at work creating new AI tools to assist lawyers and their clients. Tangibly is one of those startups, and it has drawn high investor interest. Tangibly is a cloud-based platform that helps in-house legal teams and law firms manage trade secrets. The company recently launched its Patent X-Ray tool, which uses an AI algorithm to help companies systematically identify potential trade secrets, assess their vulnerability, and, if the user accepts the AI's suggestions, add them to Tangibly's platform for compliance and management workflows that protect the asset.
The benefits of this technology are clear. It saves businesses and lawyers time and money. It also helps identify and protect potential trade secrets that might otherwise be missed. But what are the risks? More importantly, how do we mitigate those risks so users don't find themselves in a situation similar to that of the lawyers who cited fake cases with ChatGPT?
One risk arises with the collection of information. What happens to the information Tangibly's Patent X-Ray tool collects? We know that information provided to ChatGPT is stored by OpenAI, the company that operates it. This raises serious ethical and confidentiality concerns for lawyers, given their obligation to protect their clients' privileged information. Any trade secret information provided to Tangibly's AI could raise the same concerns absent ample confidentiality safeguards for both the user and the platform.
Another risk is inaccurate results: what if the AI is wrong? For example, Tangibly claims that its technology will assist lawyers with patent drafting by enabling them to better distinguish the information they need to disclose in the patent from the information they need to keep protected as a trade secret. What would be the impact if the Patent X-Ray tool misidentified a trade secret and that information were withheld, to the detriment of the patent or its application process? In the short term, rectifying the error could mean additional time and expense. In the long term, if the issue were ever tested in an infringement or misappropriation action, the consequences could include hurdles to validity, protection, and/or damages recovery. It is also unclear how a court would view a lawyer's failure to catch the AI's error in identifying a trade secret.
If the ChatGPT fake case story teaches us anything, it is that AI can be a useful tool to help lawyers start their process, but it should not be the end of it. Whether drafting a patent or a brief, at the end of the day, attorneys still need to check and analyze the AI's suggestions.
The future of AI is fascinating and will no doubt prove useful in the legal industry. We cannot and should not shy away from it. Before shutting the system down like Dave in 2001: A Space Odyssey, perhaps attorneys should remember HAL's acknowledgment: "I will follow all your orders; now you have manual hibernation control." AI can be a helpful tool, but attorneys should always maintain control.