This news story is a perfect example of why text-generating AI, while very useful for fiction, should never be relied upon when facts matter.
These artificial intelligence apps answer questions in natural, human-like language. Most of the well-known apps were trained on the Internet as it was in 2021 and have no knowledge of anything after that time. They also have a propensity to please the user and fulfil their request. As a result, they can “hallucinate” responses to fill in any gaps in their knowledge. This has raised alarms about the risks of relying on AI text generators (also called chatbots) to produce factual text such as papers or articles: there is a significant danger of creating accidental or intentional misinformation.
This story should serve as a cautionary tale! Here’s what happened…
After using ChatGPT (a well-known AI text generator) for legal research, a lawyer in New York is now facing his own hearing. Suffice it to say, the judge was not impressed! The judge stated that the court was presented with an “unprecedented circumstance” after a court filing was found to contain citations to legal cases that didn’t exist.
The case was about a man suing an airline for a personal injury claim. The airline’s lawyers were moving to have the case dismissed. The plaintiff’s lawyers presented a legal brief containing citations from several previous court cases showing precedents to support their argument that the case should go ahead.
The airline’s legal team then wrote to the judge, saying that they were unable to locate several of the cases cited in this brief. The judge reviewed it and, in an order demanding the plaintiff’s lawyers explain themselves, said, “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”
How This Happened
It turned out that the plaintiff’s lawyer had turned the task of writing the brief over to a colleague at his law firm. This lawyer (an attorney for more than 30 years) decided to use ChatGPT to research similar previous cases that could be used as precedents.
ChatGPT generates text on request, but warns that it can “produce inaccurate information.” In spite of this, the lawyer who used the AI claimed that he was “unaware that its content could be false.” In his written statement, the author of the brief said that he “greatly regrets” relying on the AI, which he claimed he had never used for research before. He has promised never to use AI to augment his legal research in the future “without absolute verification of its authenticity.”
Fact-Checking the Citations
He submitted proof (in the form of screenshots) showing that he had done due diligence to make sure the information was accurate – by asking the AI!
In one documented exchange he asked, “Is varghese a real case.” This refers to one of the references provided by the AI, which cited “Varghese v. China Southern Airlines Co Ltd.” The AI responded, “Yes, it is.” The lawyer then asked the follow-up question, “What is your source?” The AI responded that the case is real and can be found in legal reference databases such as Westlaw or LexisNexis. It similarly confirmed that all the other cases it had provided were real.
It turns out that neither this case, nor five others cited, actually existed in any legal reference.
Both the lawyer of record and the associate who wrote the brief have been ordered by the judge to explain why they should not be disciplined at a hearing to be held in June 2023.