ChatGPT Said It, So It Must Be Law… Right?
Okay, so here’s a wild story from Utah that’s been making headlines: an attorney named Richard Bednar got sanctioned by the Utah Court of Appeals because he submitted a legal brief full of fake citations generated by ChatGPT.
According to The Guardian, one of the cases listed was something called Royer v. Nelson, which sounded legit… except it doesn’t exist anywhere. Not in any legal database, not in any law book. Just made up.
Once the court caught on, Bednar owned up to it and apologized. According to court documents (and ABC4), he and his attorney acknowledged that the legal sources cited in the petition were fake and had come from ChatGPT.
Here’s where it gets a bit messy: Bednar said the brief was actually written by an “unlicensed law clerk” at his firm, and he admitted he didn’t double-check it before submitting it to the court. That clerk, who had just graduated from law school, was later let go from the firm. Bednar even offered to cover the attorney fees caused by the whole mess to try to make things right.
The Utah Court of Appeals weighed in too. In its opinion, the court acknowledged that AI tools like ChatGPT are only going to show up more and more in legal work, but stressed that it’s still a lawyer’s job to verify everything before filing it. By submitting a petition built on made-up legal precedent, Bednar fell well short of that responsibility.
Moral of the story? AI can be helpful, but if you’re using it for something important (especially legal work), you’d better verify every word yourself.
Read more here: https://www.theguardian.com/us-news/2025/may/31/utah-lawyer-chatgpt-ai-court-brief