What Happened: An otherwise routine lawsuit brought by a real estate broker against her employer morphed into something novel and worthy of publication because of the cautionary message it sends to lawyers who put too much trust in artificial intelligence. The broker’s attorney acknowledged that he relied on generative AI tools such as ChatGPT, Claude, Gemini, and Grok to prepare his briefs without actually reading the cited cases. Result: Of the 23 case quotations in the opening brief, 21 were fabricated, including quotations that didn’t appear in the cited cases, didn’t address the topics for which they were cited, or came from cases that didn’t exist at all. The AI-generated reply brief wasn’t much better.
Ruling: The California court rejected all of the employee’s claims and, on its own motion, issued $10,000 in sanctions against her attorney for fabricating evidence.
Reasoning: The attorney acknowledged that his conduct was “inexcusable” but asked for forgiveness because he wasn’t aware of AI “hallucinations.” The court wasn’t impressed, noting that the hallucination problem is well known and that courts have been sanctioning attorneys for the use of AI-fabricated evidence for several years. Besides, attorneys have a fundamental duty to actually read the legal authorities they cite; had the attorney done so, he would have discovered the problems. Although there’s “nothing inherently wrong” with appropriately using AI to practice law, attorneys must “carefully check the veracity and accuracy of all case citations” they or their firm prepares before their briefs are filed and may not “delegate that role to AI, computers, robots, or any other form of technology.”
