At a court hearing on Thursday, a New York attorney told US District Judge P. Kevin Castel that he never intended to deceive anyone when he submitted a legal brief containing fabricated legal precedents generated by ChatGPT. Steven Schwartz, who faces potential penalties for his actions, pleaded for leniency, arguing that he had been unaware the artificial intelligence tool could generate fake case citations and court opinions.
Schwartz admitted his failure to ensure the authenticity of these cases, stating, "There were numerous steps I should have taken to verify the accuracy of these cases, and I failed miserably in doing so." The emergence of generative artificial intelligence tools like ChatGPT has the potential to disrupt white-collar professions, transforming how law firms, financial institutions, and others operate. However, such technology carries inherent risks, including the temptation to rely too heavily on it.
ChatGPT, developed by OpenAI, is a chatbot capable of engaging in human-like conversation, trained on vast amounts of text from the internet. OpenAI itself acknowledges that the tool is prone to "hallucinations," confidently producing inaccurate information. Schwartz confessed this week that ChatGPT invented six cases he referenced in a brief filed against Avianca Airlines. The attorney told the judge in Manhattan federal court that he could never have anticipated ChatGPT fabricating legal cases.
The case revolved around Schwartz's client, who claimed to have suffered "severe personal injuries" when an Avianca employee allegedly struck his left knee with a metal serving cart during a 2019 flight from El Salvador to New York. The airline sought to dismiss the lawsuit, arguing that it was filed after the expiration of the statute of limitations. Schwartz turned to ChatGPT for research on the statute of limitations matter. Even after Avianca's legal team contested the existence of the cases cited, Schwartz continued to rely on the AI tool.
Judge Castel emphasized that the outcome of the case hinged on what Schwartz and his colleague, Peter LoDuca, of the same law firm, did after the fraudulent citations came to light. Had they stopped relying on the fabricated material at that point, the situation might not have escalated to its current state, the judge remarked. Castel walked through Schwartz's flawed brief, asking whether he had attempted to verify the cases through legal research databases, law library resources, or even a simple Google search. Each time, Schwartz admitted that he had not.
The judge also asked whether Schwartz had found anything suspicious about one of the main fraudulent cases cited in the brief, the fictitious "Varghese v. China Southern Airlines Co.," noting that it contained nonsensical information.
"Can we agree that this is legal gibberish?" asked Castel. Schwartz responded by expressing his disbelief that ChatGPT could fabricate an entire case and admitted that he had never considered such a possibility until Judge Castel's May 4 order prompted him to explain his actions. "I continued to be deceived by ChatGPT," Schwartz lamented, acknowledging the embarrassment he felt.