The ‘Reasonable Degree’ Loophole: Why AI Competence Can’t Be Optional

Moral Machine Setlist


While the Bar debates what “reasonable” even means, shadow AI is already running the show underground. At least the soundtrack is clear: Bob Dylan — Subterranean Homesick Blues.

The Competence Test You’re Already Failing

Quick quiz: What’s the difference between GPT, Claude, Harvey, and Lexis AI? What’s a token limit? What’s temperature in AI settings? What’s the difference between a deterministic and probabilistic output? How does retrieval augmented generation work (RAG), and what system do all attorneys (I shouldn’t say “all attorneys” – I’m sure there must be a handful of attorneys out there that still don’t use a computer) use daily that employs this method of AI?

If you can’t answer these, do you understand AI to a “reasonable degree”? You might be violating Rule 1.1 every time you open an AI tool. Yet according to NYC Bar Opinion 2024-5, you might still meet the “reasonable degree” standard. That “maybe, yes” is potentially a problem.

The Great Competence Dodge

The opinion states lawyers must understand “to a reasonable degree” how AI works. That’s the kind of language that sounds defensible until you’re in a deposition trying to explain why you didn’t know your AI tool was fed opposing counsel’s publicly filed briefs in drafting your opposition that read likes a reply, or why you didn’t realize your creative writing AI was designed to hallucinate (on purpose), and you “just dropped your brief into it for spellchecks” and it came out looking like a Picasso painting of legal reasoning.

Consider the case of Mata v. Avianca, where attorneys submitted ChatGPT-fabricated cases to federal court. They didn’t understand that large language models can generate plausible-sounding but entirely fictional citations. Result? Sanctions, public humiliation, and a new cautionary tale for ethics CLEs nationwide. These weren’t tech-averse luddites, they were practicing attorneys who thought “reasonable understanding” meant knowing how to type prompts.

Let’s put this in context. We don’t tell lawyers they need to understand securities regulations to a “reasonable degree” before handling an IPO. We don’t say understanding of criminal procedure to a “reasonable degree” is sufficient for a capital case. We demand competence, full stop.

Yet for AI, technology that can fabricate evidence, encode illegal bias, waive privilege, and generate entire legal strategies based on statistical patterns rather than legal reasoning… we get “reasonable degree.” It’s like saying surgeons need to understand anatomy “reasonably well.” Good enough for government work, terrifying for actual practice.

The AIGP Competency Framework: What Real Standards Could Look Like

While the NYC Bar debates “reasonable degrees” of competence, other industries have already established concrete standards. The AI Governance Professional certification, for example, defines specific competencies that would satisfy Rule 1.1’s requirements. Not every attorney needs to master these technical details (or really, even understand the questions), but firms (or businesses) without any AI governance frameworks risk a dangerous scenario: attorneys with limited technical understanding deploying tools unsuited for legal work.

Technical Literacy: Not coding, but understanding. Can you explain the difference between supervised and unsupervised learning? Do you know why large language models hallucinate? Can you identify when a tool is using pattern matching versus rule-based reasoning? These aren’t academic exercises, they determine whether you can identify when AI is appropriate for contract analysis versus when it should stay far away from your jury selection strategy. Without this knowledge, you are potentially violating Rule 1.1 every time you touch an AI tool.

Risk Assessment: Understanding isn’t theoretical. Can you identify data leakage risks? Do you know what transfer learning (when an AI applies knowledge from one domain to another) means for confidentiality? Can you spot when a model or tool might be encoding protected class discrimination? This is the difference between competent representation under Rule 1.1 and getting bench slapped in federal court while your client shops for new counsel and a malpractice attorney.

Governance Structures: Knowing how AI works includes knowing how to govern it, or having someone who knows how, which is a Rule 5.1 requirement. What’s your validation process? How do you audit AI decisions? What’s the qualitative process to check the AI didn’t make an error? If you can’t answer these questions, or have someone in your firm who can, how do you propose you’ve fulfilled a “reasonable degree” of competency under RPC 1.1 to supervise AI use in your practice?

Another potential NIST Bridge: From Theory to Practice

The legal profession isn’t the first to grapple with AI governance. NIST’s AI Risk Management Framework has already created a roadmap that works across industries. The approach doesn’t use words like “reasonable.” It uses words like “demonstrable” and “documented.”

Under NIST principles adapted for legal practice, competence means:

Map: You can identify AI system characteristics, including training data sources, known limitations, and intended use cases. You don’t need to build the model, but you need to understand what it was built from and for. Is this the product you should be using for legal work? Would you use PowerPoint to draft a motion for summary judgement? No. Same thing here.

Here’s a fun metaphor (because, metaphors): Would you use a tack hammer or a piledriver to build a birdhouse? Both create an impact designed to drive metal through materials, but one will destroy what you’re trying to build. Different AI models are similarly specialized. Using ChatGPT for legal research is like using that piledriver on the birdhouse; wrong tool, predictable disaster, Rule 1.1 violation.

Measure: You can evaluate AI outputs for accuracy, bias, and appropriateness. This means knowing enough to test the system, not just trust it. Can you design prompts to test for hallucination? Can you validate citations without running every case?

My favorite testing method is called “Red Teaming” (essentially trying to break the system to understand its limits). It’s enlightening (and fun) to trick AI into doing things it shouldn’t, all in the name of understanding what it actually can and should do for your practice, and using methods to produce better results for your client. 

Manage: You can implement controls appropriate to the risk. High-stakes uses need high-competence users. Using AI for preliminary research with proper legal-specific tools? I think a basic competence is likely if the firm has provided the right tools and training on pragmatic use cases. Using it for jury selection, sentencing recommendations, or compliance analysis? You better understand exactly how that algorithm makes decisions, or you’re violating Rules 1.1, 5.1, and potentially 8.4 (more on this in another post coming soon!).

The Shadow AI Crisis: When Rules 5.1 and 5.2 Collide

Here’s what the “reasonable degree” standard actually creates: a shadow AI epidemic that’s already infecting law firms nationwide. Lawyers who don’t understand AI “to a reasonable degree” are already using it. They’re just using it badly, secretly, and dangerously.

In Litigation: Associates are chunking privileged documents into ChatGPT to “summarize for easier review,” not realizing each chunk creates a new confidentiality exposure (Rule 1.6). They’re generating interrogatory responses with consumer AI, unaware that the responses have context bleed stains all over it.

In Transactional Work: Junior attorneys are using AI trained on public internet data (also known as “unsupervised” models) to draft NDAs and confidentiality agreements, ironically using tools that don’t understand confidentiality to create confidentiality documents. Partners review the work, see familiar legal language, and approve it without knowing it’s statistically generated rather than legally reasoned. (Rule 1.1, 1.6, 5.1, 5.2)

In Compliance: Firms are using AI to analyze regulatory requirements without understanding that the model was last trained before major regulatory updates, and failing to use any retrieval augmentation in generating their analysis. They’re essentially using last year’s map to navigate today’s compliance landscape without updating it accordingly. For context, think pocket parts, but for AI. (Rule 1.1, 5.1).

In Client Communications: Paralegals are using AI to draft client updates, not realizing the model is trained to be agreeable and optimistic—potentially creating unrealistic expectations about case outcomes (Rule 1.4).

Think of this like a senior partner who’s been practicing for 40 years suddenly discovering email. They don’t understand email. They don’t trust email. So they ban email firmwide. Meanwhile, every associate is secretly using Gmail on their phones, forwarding client documents to personal accounts just to get work done. The ban didn’t stop email use; it just removed all oversight and governance process.

Vague competence standards and outright AI bans do not prevent AI use; they drive it underground. When firms can’t define what competence means, employees can’t openly admit what they don’t know and ask for help and training. The result? Shadow AI everywhere, with no oversight, no controls, and no help. It’s Prohibition for the digital age: banning something without addressing the underlying demand just creates a black market. Except instead of bathtub gin, we’re getting bathtub legal briefs.

Defining Minimum Viable AI Literacy for 2025

Defining Minimum Viable AI Literacy for 2025

Let’s stop dancing around “reasonable” and define what lawyers actually need to know right now.

Fundamental Concepts

Attorneys must grasp several core distinctions about AI technology. First, they need to understand how large language models differ from search engines: LLMs predict text patterns rather than finding existing information. This leads directly to understanding why AI generates plausible-sounding false information. “Hallucination” is essentially statistical confidence without factual grounding. Lawyers should also recognize the difference between deterministic and probabilistic outputs, which explains why the same prompt can generate different responses. A basic understanding of training data and its implications is crucial (garbage in, garbage out), and attorneys need to recognize when they’re dealing with “unsupervised” responses. There’s also a potential safe harbor that every attorney should at least understand if they are going to use AI: pragmatic skill-based training on retrieval augmented generation (RAG), which forces models to answer only from your curated corpus and cite their sources.

Practical Skills

Beyond theory, lawyers need hands-on capabilities. Prompt engineering basics make the difference between asking AI to “write a contract” versus instructing it to “draft an employment agreement under New York law with these seventeen specific provisions.” Verification techniques for AI-generated content are non-negotiable. Apply the three-source rule and establish citation validation protocols. Attorneys must recognize common AI failure patterns including temporal confusion, fabricated citations, and overconfidence indicators. They also need to understand confidence indicators and limitations, knowing when AI says “I’m not sure” versus when it should but doesn’t.

Risk Recognition

The stakes are high when AI intersects with legal practice. Attorneys must identify when AI use might waive privilege by understanding input data retention and third-party doctrine implications. They need to grasp what happens to client data after hitting “enter” and the full scope of data retention and confidentiality implications. Most critically, they must know when human review is legally required, recognizing which ethical obligations simply can’t be delegated to machines.

Governance Basics

Effective AI governance requires clear frameworks. This includes establishing training requirements for AI-assisted work: knowing both what to teach and how to teach it. Disclosure obligations to clients and courts must be understood, including when “AI-assisted” must appear in work product.  Finally, AI tools force a Rule 5.1 reckoning: how does a firm supervise something it doesn’t understand? (As explored in “Guardrails Without Governance: Why Minimum Compliance is Maximum Risk,” the answer involves appointing someone who actually can.)

The Path Forward: Specificity or Sanctions

The “reasonable degree” standard is far too vague in this context, and that helps no one. Clients deserve lawyers who actually understand their tools. Lawyers need defined standards that provide real guidance. And the profession can’t afford the reputational damage when AI errors make headlines.

We need better defined competency requirements. Not because AI is magic (it’s not), but because it’s a powerful tool that can cause real harm when misused. This vagueness won’t hold. If the profession doesn’t define AI competence, the courts will (and are) through sanctions, suspensions, and costly lessons firms could have avoided by putting good minds on the front lines of this generational transformation.

The clock is ticking. The shadow AI crisis is growing. And “reasonable degree” is the match that’s about to light this whole thing on fire.

 

ABOUT AUTHOR
Chris D. Warren

Member, Scarinci Hollenbeck, LLC. Partnership and Business Litigation Attorney with Passion for the Nexus between Technology and Ethics in the Legal Profession.