Guardrails Without Governance: Why Minimum Compliance is Maximum Risk

Moral Machine Set List

Associates drowning in work, partners crushed by deadlines, everyone reaching for the nearest tool—authorized or not. When Rick v. Morty hits the federal docket, that’s not the machine failing the moral test. It’s the lawyers. The robots never took an oath—we did.

“Under Pressure” – Queen & David Bowie

Picture this: At the monthly meeting, a mid-size firm partner proudly declares, “We’ve addressed AI risks. We told everyone to be careful with ChatGPT and not to input client data.” Or worse: “Nobody is permitted to use AI.” Either way, box checked. Problem solved.

Fast forward three months. An associate, drowning in work and facing a filing deadline, uses ChatGPT on their personal cell phone to draft a section of a brief. The partner, also crushed by deadlines and trusting the associate’s work, does a quick review and files it. The brief cites Rick v. Morty Holdings LLC, a perfect case that supports their argument beautifully. One problem: it doesn’t exist. ChatGPT invented it because it fit the fact pattern, and the associate had recently been chatting with GPT about that episode where Rick turns himself into a pickle to avoid family therapy, which, ironically, is exactly what this firm needs after the sanctions hearing.

Now the partner is facing sanctions, the associate is updating their résumé, the client is shopping for new counsel, and the firm’s insurance carrier is asking uncomfortable questions about AI governance policies. The managing partner’s “just don’t use it” approach isn’t looking so comprehensive anymore.

Here’s what that managing partner missed: RPC 1.1 doesn’t just require competence in law—it demands competence in the technology lawyers use to practice law. The duty of competence evolved when email arrived, expanded when e-discovery became standard, and now encompasses AI. Ignoring it isn’t cautious; it’s an ethical violation waiting to happen. And when your associate’s AI hallucination lands in federal court, “we told everyone not to use it” isn’t a defense—it’s an admission that you failed to govern technology your lawyers were inevitably going to use.

The Guardrail Illusion

The NYC Bar Opinion 2024-5 emphasizes “guardrails, not hard-and-fast restrictions.” It sounds reasonable, even progressive, and to some extent it makes sense for a self-regulating body to push responsibility back onto attorneys and law firms. But here’s the dirty secret: guardrails without a framework, a process, or enforcement aren’t guardrails at all. They’re suggestions. And suggestions don’t hold up when you’re explaining fabricated cases to a federal judge.

In practice, guardrails at most firms mean a memo that boils down to “be careful” and maybe a lunch-and-learn where someone defines ChatGPT and warns you not to let it draft your motion to dismiss. Partners nod sagely (because they read about Mata v. Avianca), associates pretend to pay attention (because they’ve been using AI for years), and everyone goes back to business as usual, only now with a false sense of security.

This isn’t governance. It’s theater. And the courts are no longer interested in watching the show.

Courts Aren’t Waiting

Since June 2023, U.S. courts have recorded at least 95 incidents of AI-generated false citations, including 58 in 2025 alone.

The sanctions are piling up: $5,000 in Mata v. Avianca, $6,000 in Indiana, $10,000 in Kruse v. Karlen, and $31,100 in a single California case. Butler Snow attorneys were removed from an Alabama prison case for what the judge called “recklessness in the extreme.” Mike Lindell’s lawyers were each fined $3,000.

The pattern is clear: courts are not waiting for bar associations and law firms to figure this out. They are creating precedent one sanction at a time. And unlike ethics opinions, sanctions carry immediate consequences: humiliation, client loss, and money out of pocket. As California Special Master Michael Wilner wrote when he ordered $31,100 in fines: “Strong deterrence is needed to make sure that attorneys don’t succumb to this easy shortcut.”

What Firm Governance Might Look Like: Using the NIST Framework as an Example

The National Institute of Standards and Technology didn’t create its AI Risk Management Framework for fun. It created it because every other industry learned this lesson the hard way: you can’t manage what you don’t measure, and you can’t measure what you don’t map.

NIST’s approach is elegantly simple: Map, Measure, Manage, and Govern. Let’s translate that from government-speak to law firm reality (a rough sketch of the idea follows the list):

  • Map means knowing what AI tools your firm actually uses. Not what you think people use, but what they actually use. That includes the associate using Grammarly Premium (yes, that’s AI), the paralegal using Otter.ai for deposition transcripts or Zoom.ai for meeting notes, and the partner whose kid set up Claude on their laptop. You can’t govern or train on what you don’t know exists.
  • Measure means understanding the risks each tool presents. Was it trained on public data that doesn’t match the purpose it’s being used for? Does it retain inputs? Can it be audited? Most firms can’t answer these questions about their copy machines, let alone their AI tools.
  • Manage means implementing actual controls. Not suggestions and not guidelines. Technical API barriers won’t work when everyone has a phone in their pocket. Management in this context means rolling out firm-approved AI programs that attorneys are actually expected to use, training on those products in a meaningful and pragmatic way, and working toward the increased productivity that the panacea of AI has promised us.
  • Govern means someone owns this. Not “everyone’s responsible” (which means no one’s responsible), but a designated human being who is taking responsibility for implementing the tools, creating the process, and putting the “governance” in AI governance.
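
To make the four functions concrete, here is a minimal sketch of what a firm’s AI-tool register might look like in code. Everything in it is an assumption for illustration: the tool entries, the risk fields, the statuses, and the “AI governance partner” label are placeholders, not anything the NIST framework itself prescribes.

```python
from dataclasses import dataclass

@dataclass
class AITool:
    name: str                   # Map: what is actually in use
    used_by: str                # Map: who is using it
    retains_inputs: bool        # Measure: does the vendor keep what you type?
    auditable: bool             # Measure: can usage be reviewed after the fact?
    status: str = "unreviewed"  # Manage: e.g. "approved", "banned", "needs review"
    owner: str = "unassigned"   # Govern: the named human responsible for it

# Map: the kind of inventory an anonymous survey might surface.
register = [
    AITool("ChatGPT (personal phone)", "associates", retains_inputs=True, auditable=False),
    AITool("Grammarly Premium", "staff", retains_inputs=True, auditable=False),
    AITool("Otter.ai", "paralegals", retains_inputs=True, auditable=True),
]

# Measure and Manage: flag anything that keeps client data and can't be audited.
for tool in register:
    if tool.retains_inputs and not tool.auditable:
        tool.status = "needs review"

# Govern: every entry gets a named owner; "everyone" is not an answer.
for tool in register:
    tool.owner = "AI governance partner"
    print(f"{tool.name}: status={tool.status}, owner={tool.owner}")
```

The point isn’t the code; it’s that each field answers one of NIST’s four questions, and an “unassigned” owner column is a governance failure you can actually see.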

The Path Forward

Implementing real governance doesn’t require a PhD in computer science or a million-dollar budget. It requires three things every risk management process relies on: structure, ownership, and follow-through.

Start with NIST’s Map phase. Send a survey (anonymous if needed) asking what AI tools people actually use. The results will likely terrify you. Good. That’s where governance starts.

Assign ownership. Not to a committee, not to IT. To a specific attorney who understands both technology and ethics, with actual resources and authority.

Create real policies with real process and direction: not novels, but clear, specific rules about which tools are approved, what data can go into them, and how to use them responsibly, backed by real training on those tools.

And finally, treat AI training the same way you treat any other professional skill. RPC 1.1 requires competence, and that includes competence in the technology you use. If AI is in the workflow—and it is—then AI training and ethical usage aren’t optional. They’re part of the duty of competence. Ignore that duty, and the next Rick v. Morty citation might be yours.

ABOUT THE AUTHOR
Chris D. Warren

Member, Scarinci Hollenbeck, LLC. Partnership and business litigation attorney with a passion for the nexus between technology and ethics in the legal profession.