Moral Machine Setlist
This post's song writes itself: Dun dun… dun dun DUN dun…
Winter isn’t coming — it’s already here. And this time, it’s armed with Harvey and a syllabus.
Harvey just embedded its AI platform into six elite law schools.
My first thought: Winter is coming, and most practicing lawyers are still arguing about whether White Walkers exist.
Law students will graduate fluent in AI tools. Not dabbling, not cautiously experimenting, but trained from day one that competence under RPC 1.1 includes AI. Meanwhile, many practicing attorneys are still clinging to the “reasonable degree” loophole, pretending they can opt out of understanding how this technology works.
That’s not just naïve. It’s dangerous.
The Competence Divide
Let’s be blunt: the next generation of lawyers will expect AI to be as natural as Shepardizing or running a Westlaw search. They’ll draft, research, and analyze with it as second nature. If bar associations once debated whether email counted as “reasonably necessary” to practice, imagine the malpractice landscape when a 27-year-old associate outperforms a partner with 30 years of experience because she actually knows how to use available legal technology.
Picture this: A first-year associate drafts a motion for summary judgment in two hours that’s more thorough, better researched, and more persuasively written than what a senior partner produces in ten hours. That’s not efficiency; that’s replacement. The competence gap isn’t theoretical. It’s a feast waiting to happen. And the crows are already circling.
Law Schools Are Drawing the Battle Lines
The Harvey alliance sends a signal: competence in AI is no longer elective. It’s required. Stanford and Notre Dame aren’t waiting for bar committees to debate the issue. They’re not waiting for another toothless ethics opinion in 2028. They’re creating lawyers who will make current practitioners look like they’re still using quill, parchment, and dictation machines.
So where does that leave today’s practicing bar? Lawyers who insist that AI is optional are effectively saying they don’t need to know the common language of the next generation. That’s like a Maester refusing to learn to read (unless you’re Aemon, but that’s a different story). You’re not just behind; you’re obsolete.
The Ethical Reckoning
Here’s where RPC 1.1 becomes a sword hanging over every practicing lawyer’s head. The rule doesn’t say “be competent in the tools you personally prefer.” It demands competence in “the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.”
When every new law graduate uses AI as naturally as breathing, how do you argue it’s not “reasonably necessary”? RPC 1.1 doesn’t care that you’ve been practicing since before email. It doesn’t give you a grandfather clause because you’re a name partner. It demands competence in the tools necessary for representation. And Harvey’s (brilliant) move just forced that evolution, just like Lexis and Westlaw did decades ago.
If students are trained to use AI tools as part of baseline competence, can lawyers in practice really justify staying ignorant? When your opposing counsel, fresh out of Michigan Law, files briefs in a fraction of the time with double the authority and analysis, your client won’t care that you’re “old school.” They’ll care that you lost.
The truth is stark (sorry, I couldn’t resist): competence has always been a moving target, but Harvey just moved the goalposts into another kingdom.
The Malpractice Storm
Let’s play this out. It’s 2027. A client loses a case because their lawyer missed critical precedents that Harvey-trained opposing counsel found in seconds. The client discovers their attorney doesn’t use AI “on principle” or because “they don’t trust it.” That’s not principle anymore. That’s potential malpractice.
The plaintiff’s bar is watching this space. They’re already arguing (and winning) that “failure to use available technology competently” is as actionable as “failure to meet a deadline.” And unlike missing a deadline, which might get you one bad outcome, technological incompetence is a pattern. It’s provable. It’s systematic. It’s also going to be profitable, for the lawyers suing you.
This pattern has played out before. McDermott Will & Emery’s e-discovery failure spilled thousands of privileged documents to opposing counsel, a cautionary tale of what happens when “reasonable” competence meets complex technology. A quick search of Above the Law turns up story after story of lawyers who would likely have met the “reasonable” competence standard in every respect except the use of technology. That wasn’t strategy; it was leaving the castle gates open while crows swarmed in.
The lesson for AI? You wouldn’t let a dragon loose without a chain. If you’re not mapping, measuring, and managing AI, if you’re not verifying outputs and teaching your attorneys how to use these tools competently, you’re not guarding the realm. Your firm is actively deciding to leave the malpractice gates wide open.
Conclusion: Adapt or Be Devoured
Game of Thrones gave us a simple lesson: winter (change) is coming. And in this case, winter looks like a class of law grads fluent in AI, marching into firms where partners still think ChatGPT is “that thing their kids use for homework.”
The Harvey generation isn’t coming with torches and pitchforks. They’re coming with tools that make current practice look medieval. And when they arrive, they won’t be offering to teach you. They’ll be taking your clients.
To borrow from the realm: When you play the game of competence, you win or you die.
Choose wisely. Because winter isn’t coming; it’s already here. The first Harvey-trained graduates enter the workforce in 2027. That’s less time than it takes to make senior associate. The wall is cracking, the crows are circling, and those grads are already sharpening their blades.
Competence isn’t optional anymore; it’s survival.

