A federal judge sanctioned a Cherry Hill attorney for filing a brief with AI hallucinations, again
U.S. District Judge Kai N. Scott sanctioned Raja Rajan $5,000 last week for submitting a court filing with AI hallucinations. Scott had previously sanctioned Rajan $2,500 for errors in AI-written briefs.
Raja Rajan doesn’t remember if he used Claude, ChatGPT, or Grok to write the memo that got him in hot water with a federal judge, again.
The Cherry Hill attorney was pressed for time, so he says he took a “shortcut” when he wrote a brief that was filed in federal court on Feb. 20.
He wrote it using an AI chatbot, Rajan said, and asked a different bot to verify the citations were accurate. With the technology’s blessing, he filed the motion for a business dispute in which he represented his brother.
Opposing counsel noticed six false citations and notified the court. Rajan tried to review the cases the chatbot cited, but couldn’t find one of them. It was an AI hallucination.
“So that’s when I knew there’s a problem,” Rajan said.
U.S. District Judge Kai N. Scott sanctioned Rajan $5,000 last week. Just over a year ago, Scott ordered him to pay $2,500 for relying on AI to write two motions, and sent him to continuing legal education classes.
It “makes no sense,” the judge said, that an attorney who has practiced for nearly 40 years violated such a basic tenet of professional conduct.
“Any first-semester, first-year law student would know that a fundamental rule of lawyering — during litigation or otherwise — is to ensure that any authority to which a lawyer cites does indeed support the proposition for which he cites it," Scott wrote.
The judge from Philly’s federal court said she would refer Rajan, who is licensed to practice law in Pennsylvania, to the state’s disciplinary board if he were to file a brief with AI hallucinations a third time.
Rajan’s sanctions are part of a wave of disciplinary actions by courts grappling with the proliferation of AI in the legal profession. The temptation to use the technology for legal research and writing briefs has overtaken solo practitioners, such as Rajan, and Big Law’s top firms alike.
Prestigious Wall Street law firm Sullivan & Cromwell apologized last week for having submitted a court filing with AI hallucinations. The firm has more than 900 attorneys, and its partners charge over $2,000 an hour.
But courts have been unsure how to approach the new technology, and what to require from attorneys.
Earlier this year the Third Circuit Court of Appeals addressed the issue for the first time in a split decision emblematic of the debate around who is responsible for AI-generated errors — the lawyer or the technology?
A panel of three judges reprimanded an attorney for an AI-generated filing that included seven citations “riddled with factual and legal inaccuracies” and one that “simply did not exist,” according to the court’s opinion.
But two of the judges stopped short of imposing more significant sanctions, in part, because the Third Circuit “has not yet had the opportunity to speak on the issue and emphasize that, when using AI, litigants must still strictly adhere to all rules of professional conduct,” Judge Cindy K. Chung wrote.
Senior Judge Jane R. Roth would have gone further, she said in a separate opinion. Attorneys should know that it is their responsibility to verify information they submit, the judge said, and courts don’t need to issue reminders for every new circumstance.
“Punishing an attorney for failure to verify information obtained from AI is consistent with the standard to which attorneys historically have been held,” Roth said.
For now, Philadelphia’s federal court instructs attorneys to disclose their use of AI and certify that every citation has been fact-checked.
Rajan said he hasn’t litigated cases in years, but picked up the case to help his brother and former business partner, who is accused of deceiving an investor to secure a million-dollar loan.
In addition to the two AI-related sanctions, Scott ordered Rajan in February to pay $78,000 in legal fees to the opposing counsel for filing absurd counterclaims, acting in bad faith, and making disingenuous claims.
He is appealing that decision, but says Scott isn’t wrong on the AI hallucinations.
“It’s definitely my fault,” Rajan said. “I definitely, definitely took a shortcut, and I got punished for it.”
The attorney predicts that the problem with AI is only going to get worse, and that sanctions like the ones he has faced won’t deter everyone. AI has been an equalizer, he said, allowing solo practitioners to do the time-consuming legal research that large firms pay paralegals, interns, and younger associates to do.
Incorporating AI into legal practice can be useful, for example in doing an initial review of large document dumps, but it doesn’t erase ethics and professional standards, said Ezra Wohlgelernter, a personal injury attorney and the chancellor of the Philadelphia Bar Association.
“The bottom line: The rules haven’t changed, the tools of our practice have,” Wohlgelernter said.
In her memo explaining her sanctions against Rajan, Scott reminded attorneys that “law is fundamentally a human enterprise.”
But the judge also issued a clear warning.
“Do not get so comfortable that you forget your duties to your clients, courts, or society writ large, because if you do, it will be your name and license on the line,” Scott wrote, “not ChatGPT’s.”
