When Ethics Opinions Play Catch-Up: A Critique of NYC Bar Formal Opinion 2024-5 on Generative AI

Moral Machine Setlist

Because even lawyers arguing with robots need a playlist, this edition of the Moral Machine comes with a soundtrack: “Once in a Lifetime” by the Talking Heads.

The NYC Bar issued its guidance on generative AI a little over a year ago, and it’s exactly what you’d expect from a thoughtful committee trying to guide lawyers through uncharted territory: careful, comprehensive, and maybe a bit too comfortable with ambiguity.

For a technology that’s already reshaping law practice, “guardrails, not hard-and-fast rules” makes sense in theory. But in practice? Lawyers need more than theoretical frameworks; they need actionable guidance. This is especially true given how quickly new models are released and how large each leap in capability is. For example, when 2024-5 was written, OpenAI had just launched GPT-4o to replace GPT-3.5. Just weeks ago, GPT-5 was released: smarter, cheaper, more versatile, and already integrated into platforms like Harvey, whose usage has nearly quadrupled over that same period.

As an attorney passionate about protecting our profession from unseen ethical pitfalls as AI rapidly integrates into practice, I’m enthusiastically joining the Supreme Court of New Jersey District VI Ethics Committee for the 2025-2029 term and pursuing my AI Governance (AIGP) and Privacy Professional (CIPP/US) certifications. I read this opinion with both appreciation and a little frustration. The Bar clearly put serious thought into this, but it bends the knee a bit too much to California and leaves critical questions unanswered.

Guardrails vs. Rules: The Challenge of Moving Targets

The opinion begins by choosing “guardrails” on the theory that AI is developing too quickly for hard rules. I get it: the Bar is trying to avoid rules that’ll be obsolete by next week. But here’s the thing: that’s precisely why we need governance frameworks, not just vague warnings. Frameworks adapt. They scale. They provide structure without rigidity. This is something new in our industry, and it requires a new way of thinking about how we apply the rules and build processes that protect our clients and bolster our ethical obligations to them, not weaken them.

In practice, “guardrails” often means firms default to minimum compliance, or worse, refuse to integrate any processes or training at all, which sets everyone up for confusion, failure, shadow AI, and, potentially, a bench slap. What would help? Concrete governance frameworks: documented use policies, accountability processes, and maybe, you know, practical training.

As attorneys, we routinely build on work product from other sources, tailoring it to our specific facts and needs. I don’t see why AI should be any different. The NIST AI Risk Management Framework offers a blueprint that other industries are already adopting widely, and “Govern, Map, Measure, and Manage” are catchy enough that even a busy attorney can remember them. The opinion could have pointed lawyers toward these resources. Instead, we get guardrails, which is somehow helpful and not helpful at the same time.

Confidentiality: The Elephant in the Cloud

The opinion rightly warns lawyers not to input confidential client information into “open” AI systems that may reuse data. Fine. But it assumes lawyers will type client secrets into ChatGPT and then scramble to mitigate the damage. That’s like advising someone to jump into the Hudson River but wear a wetsuit.

Consider a lawyer using AI to draft an agreement. The opinion tells them not to input “confidential information” but doesn’t clarify whether deal terms, company valuations, or even party names qualify. That ambiguity matters when a mistaken disclosure could trigger insider trading investigations.

What’s missing is a proactive mandate: due diligence in AI procurement. Firms should vet vendors for SOC 2 and ISO/IEC 42001 compliance before a single keystroke. And let’s not ignore privilege. Uploading client data into a third-party AI system without airtight contractual protections risks waiving privilege entirely… something this opinion barely acknowledges. In fact, there have been significant developments since this opinion was published regarding the retention of data stored by companies like OpenAI. Without going into the details here (we can save that for another day): it’s very bad.

If we’re serious about confidentiality, the standard shouldn’t be “don’t input secrets without consent.” It should be: lawyers must not use AI systems that cannot contractually guarantee confidentiality and privilege. Anything less is malpractice waiting to happen.

Competence and Diligence: “Reasonable Degree” Needs Definition

The opinion says lawyers should understand “to a reasonable degree” how AI works. That’s the kind of language that sounds sensible until you’re in a deposition trying to explain what “reasonable” meant. Most attorneys likely understand how Microsoft Excel works “to a reasonable degree,” but I’d bet lunch at Keens that a substantial majority of attorneys don’t know what a pivot table is, how to make one, or how to explain one to a “reasonable degree.”

Let’s be honest about the language here: the opinion’s “reasonable degree” standard for AI competence wouldn’t fly in any other context. We don’t tell lawyers they need to understand securities law to a “reasonable degree” before handling an IPO. We expect competence. Why should AI, now available in hundreds of products on any internet-enabled device and capable of fabricating cases, encoding bias, and waiving privilege, get a vaguer standard, with no framework by which a “reasonable degree” of competence can even be achieved?

Other professions are setting the bar higher. Financial firms must train staff on model risk management. Healthcare providers face AI literacy requirements. Why should lawyers (and, primarily, law firms), whose decisions affect liberty, finances, and justice, get a pass?

Supervision: Who Actually Owns Oversight?

The opinion correctly notes AI should be supervised like a junior associate. But it sidesteps the practical question: who in the firm owns AI oversight?

Is it the managing partner? The (non-attorney) IT director? The ethics partner? Without role-based accountability, everyone assumes someone else is handling it, or that the “reasonable degree” standard is met by a required one-hour CLE course on AI usage. This is where a pragmatic AI governance framework can really shine: define roles, map out a process, monitor use, and measure outcomes.

Every firm using AI could benefit from designating a responsible attorney for AI oversight, training, metrics, and compliance. The opinion mentions supervision but could have been more prescriptive about structure. In practice, firms will dodge this responsibility and simply impose a blanket ban on AI, which is the equivalent of closing your eyes, putting your fingers in your ears, and loudly saying, “lalalala.”

Hallucinations, Candor, and Tomorrow’s Deepfakes

Yes, general-purpose AI (like ChatGPT) will hallucinate cases. Yes, lawyers must check citations. The opinion appropriately cites Mata v. Avianca. But here’s the thing: case hallucinations aren’t really a problem if you’re using software built for lawyers; research platforms like Lexis and Westlaw draw on verified case databases rather than inventing citations. The “AI hallucination” panic is yesterday’s problem, a rallying cry for attorneys shaking their fists at clouds. Tomorrow’s problem is far more chilling: deepfakes.

AI can now generate doctored evidence, fabricated emails, or synthetic documentary evidence that would fool most lawyers. And let’s be honest: if many attorneys still struggle to handle pivot tables or to understand what metadata is and how to access it, how prepared are they to authenticate an AI-generated video?

The opinion doesn’t address AI-tainted evidence or deepfakes at all. Should we be inquiring into the authenticity of client-provided videos? Do we have a duty to disclose if we suspect manipulation? What about when opposing counsel submits a suspiciously perfect smoking-gun email? These are emerging candor issues that deserve more attention than the red herring of hallucinated case law. Hallucinations make headlines and provide fodder for the pearl-clutchers. Deepfakes will make case law.

Fees and Billing: Time to Rethink the Model?

The opinion’s billing guidance addresses current practice but might be missing the forest for the trees. It tells us AI is a time-saving tool, like switching from a typewriter to a word processor. But AI isn’t just faster; it’s fundamentally different. When AI can draft the first version of a contract in seconds instead of hours, we’re not talking about efficiency anymore. We’re talking about a new service model.

The opinion says you can’t bill clients for time AI saved you. Fair enough under current billing models. But maybe it’s time for a bigger conversation: the billable hour might not be the best framework for AI-enhanced legal work. Clients are increasingly asking for pricing that reflects value delivered, not time spent. The opinion could have nudged us toward that future, the same way the dark year of 2020 made “Zoom court” a thing.

What They Don’t Say: The Future Blind Spots

As I go through my own journey in AI Governance, what jumps out at me the most is what the opinion does not touch:

    • Law firm AI policies: no mention of standards, auditing, or disclosures to clients.

    • Mandatory AI training: not a word about requiring baseline literacy for practicing lawyers, despite the clear competence obligations of individual attorneys and the supervision obligations of law firms.

    • Judicial use of AI: judges will face the same risks, yet this opinion punts entirely.

    • Insurance implications: no guidance on whether standard malpractice policies even cover AI-related errors.

In short, the opinion is a checklist, not a roadmap. It tells you what not to do, but not how to build a sustainable, ethical AI practice.

In Closing: Building Momentum Going Forward

Generative AI isn’t emerging anymore. It’s here, embedded in our practice. According to recent surveys, more than half of large firms are already using it. And firms that decide to “not permit” AI can be assured that their attorneys are using it anyway. The NYC Bar deserves credit for tackling this complex topic and providing thoughtful initial guidance, but we need to build on this foundation. The best ethics guidance evolves through dialogue, and this opinion opens that door.

Some food for thought: implementation of a version of NIST’s framework; designated AI resource attorneys at the firm level who actually understand AI beyond a “reasonable degree”; practical training on the actual tools lawyers use, not theoretical AI concepts; and value-based billing models that align cost with outcomes, not hours.

By the time the next ethics opinion drops, GPT-6 will probably be drafting our briefs. Let’s make sure we’re ready.

 

 

ABOUT AUTHOR
Chris D. Warren

Member, Scarinci Hollenbeck, LLC. Partnership and Business Litigation Attorney with a Passion for the Nexus between Technology and Ethics in the Legal Profession.