I notice misplaced apostrophes on billboards. I have opinions about the Oxford comma. So when I use em dashes — and I use a lot of them — it's on purpose. I was doing it before the robots got here, and I'll be doing it after.
But this isn't about me. I want to talk about everyone else.
On February 17, 2026, Judge Jed Rakoff of the Southern District of New York issued a ruling in United States v. Heppner holding that a defendant's conversations with an AI platform — specifically, Claude — are not protected by attorney-client privilege or the work product doctrine. In plain terms: what you tell AI is not private. A court can see it. The government can use it.
While practitioners debate what this means for their own use of AI tools, and Big Law firms send alerts about enterprise versus consumer-grade platforms, there is a quieter and more urgent question sitting just beneath the surface of this ruling.
What about the people who were using AI precisely because they couldn't afford a lawyer in the first place?
Imagine an employee who believes she is being discriminated against at work. She's not sure she has a case. She's not sure she can afford a lawyer. She's not sure she's ready to come forward. So she does what millions of people do every day — she opens an AI platform and starts typing.
She describes what happened. She asks whether it sounds like discrimination. She tries to make sense of her own experience using the only tool that feels accessible, nonjudgmental, and available at midnight when she can't sleep.
She has no idea that those conversations may not be private.
She has no idea that if her employer or the government ever had reason to look, those exchanges could be accessed, subpoenaed, or used.
She is not a securities fraud defendant. She is not a corporate executive. She is someone trying to figure out if her rights were violated — using a tool that feels like a confidential space but legally is not one.
This is the conversation we are not having.
Judge Rakoff's reasoning is worth understanding, even if you're not a lawyer. Attorney-client privilege requires a communication between a client and an attorney, kept confidential, for the purpose of obtaining legal advice. Claude is not an attorney. There is no attorney-client relationship. And critically, Anthropic's own privacy policy puts users on notice that their conversations can be shared with third parties, including governmental regulatory authorities.
The court also rejected work product protection, finding that the documents were created by the defendant on his own initiative, not at the direction of counsel.
The ruling is narrow — it applies to a specific criminal case with specific facts. But its implications are broad. It establishes, for the first time at this level, that AI conversations occupy a legally unprotected space. And it does so by pointing directly to the terms of service that most users never read.
Most users never read the terms of service. That's not naivety. That's just how people use technology — especially people who are scared, overwhelmed, and looking for help at midnight.
Here is what I know from years working at the intersection of law, institutions, and inclusion: the people most likely to rely on AI as a substitute for legal counsel are the people who have the least access to legal counsel.
They are workers who can't afford an attorney. Students navigating institutional power for the first time. People from communities that have historically had every reason to distrust formal legal systems. People who turn to AI not because they are naive, but because it is what is available to them.
Meanwhile, people with resources have lawyers. They have privilege. Literally.
This is not a new pattern. Every time a powerful new technology arrives, the question of who understands its risks — and who doesn't — breaks along familiar lines. The Heppner ruling did not create this inequity. But it illuminates it with unusual clarity.
If you lead an organization, manage people, or set policy, this ruling should prompt some immediate questions.
Do your employees know that AI conversations are not confidential? Do your HR teams understand that workers may be using AI to document complaints, process difficult experiences, or seek quasi-legal guidance — without any of the protections they might assume exist? Do your policies account for a world in which the line between thinking out loud and creating a legal record has effectively disappeared?
These are not hypothetical questions. They are operational ones. And the organizations that navigate this moment well will be the ones that get ahead of it now — before a case in their own institution forces the conversation.
Judge Rakoff concluded his opinion with a note of appropriate humility, observing that generative AI presents a new frontier in the ongoing dialogue between technology and the law.
He is right. And that dialogue is happening right now — in courtrooms and boardrooms and policy chambers — largely without the voices of the people who will be most affected by its outcomes.
I have spent my career arguing that the institutions that last are the ones honest enough to ask hard questions before they are forced to. This is one of those moments.
The question is not just what AI can do. It is who gets to understand what AI does to them — and who gets left figuring it out alone, at midnight, in a conversation they thought was private.
That woman typing at midnight deserves to know the room she's in. Right now, nobody is telling her.
Anjali writes and speaks on AI, equity, institutional accountability, and the future of inclusion. If you're interested in bringing her to your organization, conference, or leadership team, she'd love to hear from you.
Get in touch →

Attorney. Chief Diversity Officer. Author of Humanity at Work (#1 Amazon Bestseller). Member of Heterodox Academy and Advisory Board of Class Action. Speaker on AI, civic discourse, viewpoint diversity, and the future of inclusion.

Follow on X →
Views expressed are her own and do not represent any employer or institution.