Artificial intelligence – or just AI – has been making its way into everyday life for quite some time now. Since the end of 2022, however, it's been all the buzz. AI has the potential to radically transform many industries and make professionals more productive.
This is particularly true in law, where much of the work involves analyzing documents and synthesizing information – tasks AI excels at. As a result, law firms and attorneys who fail to learn how to use AI in their practices run the risk of being left behind. As many have said – AI is not going to replace lawyers; lawyers using AI will replace lawyers who don't. That said, it is critical for lawyers and other legal professionals to understand the risks associated with using AI and how to use it correctly.
To illustrate this point, it's instructive to look at what happened to a New York lawyer who relied on AI earlier this year.
Personal Injury Attorney Learns the Hard Way that Not All AI Legal Research Tools are Reliable
Personal injury attorney Steven A. Schwartz of the firm Levidow, Levidow & Oberman recently learned first-hand why using just any AI for legal research can be risky. His client, Roberto Mata, filed suit against airline Avianca for an injury he sustained when a metal serving cart struck his knee during a flight to Kennedy International Airport in New York in 2019.
After the airline's attorneys requested that the court dismiss the suit based on the statute of limitations, Schwartz used ChatGPT to perform legal research and provided the court a 10-page brief citing more than half a dozen relevant court decisions, complete with a learned discussion of federal law and "the tolling effect of the automatic stay on a statute of limitations."
However, when the defendant’s attorneys went to research these cases, they were nowhere to be found. U.S. District Judge P. Kevin Castel couldn’t find them either. Why? Because none of the cases actually existed. The court ultimately fined the lawyer $5,000.
This sample memo from Alexi illustrates the information the lawyers could have received from a domain-specific legal research AI tool such as Alexi.
The Right Way for Lawyers to Use AI for Research
In hindsight, the wrong way to use AI for legal research seems pretty clear, but what is the right way? When using AI for legal research, adhere to these tips:
1. Use Domain-Specific AI Legal Research Tools
Don't rely on generic AI tools for legal research, such as OpenAI's ChatGPT, ChatSonic, or Bing AI. While these tools might be good at providing everyday, general information – a recipe for banana bread, a haiku about nature, or even a song – they are unreliable in professional arenas. If you need to perform legal research, you need a domain-specific AI legal research tool like Alexi.
2. Double Check Your Citations
If you use AI for legal research, you need to double-check it against another source. Schwartz did double-check his work – but he did so with ChatGPT, the very source that provided him with the wrong information in the first place. Verifying your citations against an independent source will help ensure you don't find yourself in a similar predicament.
3. Avoid AI that Gives Opinions
You are the attorney; no form of AI or machine should be providing legal opinions. The AI you use should provide you with accurate, reliable, and relevant case law, but it should stop short of providing an opinion. AI doesn't practice law; you do.
Performing legal research can be expensive and time-consuming. With the help of a reliable AI legal research program like Alexi, however, you can reap the benefits of AI with confidence that your information is accurate.