The ChatGPT Panopticon: Black Mirror Meets Rule 1.6

Rockwell Watching Me

Rockwell was wrong: it’s not just a feeling. When you use public ChatGPT for client work, somebody really is watching. And keeping receipts.

If you practice law in 2025, assume a simple truth about public AI tools: somebody is watching, or at least recording. As Rockwell put it, “I always feel like somebody’s watching me.” In 1984, that was paranoia. In 2025, it’s the right inner monologue for Rule 1.6 in the age of ChatGPT.

Black Mirror for Rule 1.6

In Black Mirror: The Entire History of You, every moment can be replayed later. That is how you should treat public ChatGPT. Inputs are not just passing through; they live (by court order) in logs until further notice. Even if “training” is off, retention is a different kettle of fish entirely.

This is not hypothetical. In the consolidated litigation In re: OpenAI, Inc., Copyright Infringement Litigation, the court directed OpenAI to keep certain data that would otherwise be deleted. Right or wrong, Orwellian or not, overly broad or not, as insane as it sounds, that is the current state of anything you type into ChatGPT.

Specifically, in the Southern District of New York, Judge Wang ordered: “OpenAI is NOW DIRECTED to preserve and segregate all output log data that would otherwise be deleted … whether such data might be deleted at a user’s request or because of ‘numerous privacy laws and regulations.’” And yes, the Court actually used bold and all caps.

Treat that as your reality check. If you paste client confidences into a public GPT, treat them as permanent. Judge Wang wants everyone to know (see footnote 2, June 20, 2025) that she definitely hasn’t created a nationwide mass surveillance program. Definitely not. The fact that she’s ordering the preservation of everything users explicitly tried to delete? The part where individual privacy rights just became, you know, optional? That’s totally different from surveillance, apparently. But whatever semantic gymnastics we’re doing here, the result is the same: your client’s data is now part of the permanent record. Nothing to see here, citizen. Move along. Your (and maybe your client’s) data isn’t moving anywhere.

What Rule 1.6 Requires in This Dystopian Timeline

Rule 1.6 protects information relating to the representation. It is broader than secrets or privilege. Your duty is to make reasonable efforts to prevent inadvertent or unauthorized disclosure. In 2025, that includes understanding where your AI tool sends data, who can access it, how long it is retained, and what you can prove to a judge later.

“Training off” is not enough. The May 13, 2025 preservation order is about retention, not model training (despite the lawsuit being about the training of OpenAI’s models, but whatever), and specifically addresses keeping “output log data that would otherwise be deleted,” including data a user asked to delete.

And discovery just got interesting. Reuters says the quiet part out loud: “GAI prompts and outputs may be considered unique information that must be preserved for litigation.” Meaning? Every ChatGPT prompt is a future subpoena waiting to happen. Prosecutors, divorce lawyers, big corporate litigators: they’re all thinking the same thing. OpenAI’s data is the discovery gift that keeps on giving. Your late-night strategy session with ChatGPT? That’s tomorrow’s Exhibit A. Black Mirror called it dystopia. Good thing Judge Wang most definitely did not create a nationwide mass surveillance program; she said so.

Given the legal posture and ordinary logging, the default should be simple:

  • Do not put client confidences into public ChatGPT or public GPT Store GPTs you did not build and govern. That includes names, unique fact patterns, privileged text, discovery, confidential deal terms, and identifiers.
  • Assume there is some form of retention in the chain. If you would not project the text on a wall in court, it does not belong in a public chatbot.

GPT Has Some Protection

You can still use OpenAI products and sleep at night. Pick one of these lanes:

Enterprise-grade workspace with enforceable controls

Use a business workspace that carves customer data out of model training by default and gives you admin control over sharing, connectors, retention, and audit logs. Most law firms and businesses are not in this boat, because they are not building their own enterprise-level, bespoke GPTs.

OpenAI API with Zero-Data-Retention (ZDR)

For matters requiring client confidentiality (which goes beyond privilege alone), use software that calls the API under ZDR policies so inputs and outputs never touch vendor logs. Several AI products built specifically for legal work already offer this protection, including the usual suspects: Harvey, Lexis AI, and Westlaw.
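
To make the ZDR lane concrete, here is a minimal Python sketch of the request-level piece: calling the OpenAI API while declining server-side storage of the exchange. Treat it as illustration under stated assumptions. The model name is a placeholder, the prompt is hypothetical and sanitized, and true ZDR is a contractual, organization-level commitment from the vendor, not something a flag in code can guarantee on its own.

  # Minimal sketch: call the OpenAI Chat Completions API while
  # explicitly declining server-side storage of this exchange.
  # NOTE: request-level flags do not replace a contractual
  # Zero-Data-Retention agreement; ZDR is an organization-level
  # policy negotiated with the vendor.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  response = client.chat.completions.create(
      model="gpt-4o",  # placeholder model name for illustration
      store=False,     # ask the API not to retain this completion
      messages=[
          {"role": "system", "content": "You are a legal research assistant."},
          # Hypothetical, sanitized prompt: no client names,
          # identifiers, or unique fact patterns.
          {"role": "user", "content": "List the elements of a breach of contract claim."},
      ],
  )

  print(response.choices[0].message.content)

The point of the sketch is the division of labor: the contract (ZDR) governs what the vendor may retain, while the code simply avoids volunteering anything extra.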

Defensible AI Governance Firms Can Adopt

Start with platform selection, and only use APIs with zero-data-retention. Your vendor contracts need specific terms: no training on client data, explicit retention limits, breach notification requirements, audit rights, and appropriate data residency controls. For quality control, log all prompts and outputs used on client work in your own system, sample monthly for compliance, and require human review throughout. Finally, be aware of client obligations in your jurisdiction, as well as specific court requirements.
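
As a thought experiment, here is a hypothetical sketch of that firm-side log. Every name in it (the file path, the field names, the helper function) is illustrative rather than any real product’s API; the idea is simply to keep your own tamper-evident record of what was sent and received on each matter.

  # Hypothetical firm-side audit log for AI use on client matters.
  # All names here (log path, fields, helper) are illustrative.
  import hashlib
  import json
  from datetime import datetime, timezone
  from pathlib import Path

  LOG_PATH = Path("ai_audit_log.jsonl")  # in practice, your DMS or SIEM

  def log_ai_use(matter_id: str, user: str, prompt: str, output: str) -> None:
      """Append one prompt/output record with a tamper-evident hash."""
      record = {
          "timestamp": datetime.now(timezone.utc).isoformat(),
          "matter_id": matter_id,
          "user": user,
          "prompt": prompt,
          "output": output,
      }
      # Hash the record so later alterations to the log are detectable.
      record["sha256"] = hashlib.sha256(
          json.dumps(record, sort_keys=True).encode()
      ).hexdigest()
      with LOG_PATH.open("a") as f:
          f.write(json.dumps(record) + "\n")

  # Example: record a sanitized research prompt against a matter number.
  log_ai_use("2025-0042", "cdw", "Elements of breach of contract?", "...")

Your monthly sampling review then reads from that same log, which is what lets you prove, rather than merely assert, compliance.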

Expectations of Privacy… For Now

Rockwell’s paranoid anthem still resonates because it nailed that creepy feeling of being watched when you thought you were alone. In today’s environment, especially after Judge Wang’s Order, that paranoia is just good practice. Unless you can prove otherwise, assume digital eyes are always on you. But here’s the good news: use enterprise controls or ZDR and you can (sort of) stop looking over your shoulder. No more wondering who’s watching. No more surveillance anxiety ruining your workflow. Just you, your client’s data, and actual privacy. You (and Rockwell) can (for now) finally enjoy your tea in peace. The way it should be.

Moral Machine Podcast – AI Data Privacy, Data Security, Compliance & Shadow AI

Hosted by Christopher D. Warren, this podcast explores the intersection of AI governance, ethics, law, and technology—turning complex issues into clear, defensible guidance for practitioners and leaders. In this episode, Christopher is joined by Cathy Miron, CEO of eSilo and a data protection and cybersecurity expert, to examine how AI in society is reshaping risk, compliance, and practice with actionable takeaways.

ABOUT AUTHOR
Chris D. Warren

Member, Scarinci Hollenbeck, LLC. Partnership and Business Litigation Attorney with Passion for the Nexus between Technology and Ethics in the Legal Profession.