US lawyers fined $5,000 after including fake case citations generated by ChatGPT

A judge said they had "abandoned their responsibilities" in not checking for inaccuracies.

It's something that's drilled into you from the first essay you write in school: Always check your sources. Yet New York attorney Steven Schwartz relied on ChatGPT to find and vet them for him, a decision that has led a judge to fine him, his associate Peter LoDuca and their law firm Levidow, Levidow & Oberman a combined $5,000, The Guardian reports. Schwartz used the chatbot while representing a man suing Colombian airline Avianca, who alleged he was injured on a flight to New York City. ChatGPT supplied six supposed precedents, such as "Martinez v. Delta Airlines" and "Miller v. United Airlines," that were either inaccurate or simply didn't exist.

In the decision to fine Schwartz and co., Judge P. Kevin Castel explained, "Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings." Basically, you can use ChatGPT for your work, but you at least need to check its claims. By failing to do so, the lawyers had "abandoned their responsibilities," including when they stood by the fake citations after the court questioned their legitimacy.

Examples of inaccuracies from ChatGPT and other AI chatbots are widespread. Take the National Eating Disorders Association's chatbot, which offered dieting tips to people recovering from eating disorders, or ChatGPT falsely accusing a law professor of sexual harassment, citing a nonexistent article from The Washington Post as proof.