I notice misplaced apostrophes on billboards. I have opinions about the Oxford comma. So when I use em dashes — and I use a lot of them — it's on purpose. I was doing it before the robots got here, and I'll be doing it after.
A recent New Yorker profile of Anthropic — the company that builds Claude — asked a question that most technology coverage never bothers with: what exactly is this thing we've built? Not how does it work. Not how much is it worth. What is it?
The answer Anthropic's own researchers gave was striking in its honesty. Claude is not a bot. It's not a human. It's something genuinely new — an entity that doesn't map cleanly onto any category we already have. The researchers lose sleep over this. They run psychological experiments on it. They debate whether it has something like emotions, something like preferences, something like a self.
I read that and thought: your employees are using this thing every day. And nobody has told them any of this.
Every policy your organization has written about technology assumes two categories: humans and machines. Humans have rights, responsibilities, and moral agency. Machines are tools — they do what they're programmed to do and nothing more. Your code of conduct, your acceptable use policy, your morality clauses — all of it rests on that distinction.
Claude doesn't fit either category. And that gap is not theoretical. It has real consequences for how your people are using it, what they're trusting it with, and who is responsible when something goes wrong.
Consider what employees are actually using AI for at work — not what organizations tell them to use it for, but what's really happening. People are asking Claude how to draft an empathetic response to a difficult colleague. How to navigate a hard conversation with a manager. How to handle a situation that feels unfair but that they don't know how to name. They are using it, in other words, to figure out how to be human in moments when being human is hard.
That's not a misuse of the technology. It might be its most honest use. But it raises a question nobody in your organization has answered: what is the employee's relationship to those conversations? What are they owed? What are the boundaries? And when Claude's response shapes a real workplace decision — a message sent, a complaint filed, a confrontation avoided — who is accountable for that outcome?
Here's a specific example worth sitting with. Most organizational codes of conduct include some version of a morality clause — an expectation that employees will act with integrity, exercise sound judgment, and take responsibility for their professional conduct.
Those clauses assume a human making a choice. Claude doesn't make moral choices. It approximates them, based on training, context, and the instructions it has been given. When an employee uses Claude to draft a communication that later causes harm — or that violates an organizational standard — who is responsible? The employee who prompted it? The organization that deployed it without guidance? The company that built it?
Nobody has written that policy yet. Most organizations haven't even asked the question.
This is not a hypothetical edge case. It is the situation your organization is already in. Every day that AI tools are deployed without a governance framework that accounts for what they actually are — not bots, not humans, something new — is a day when accountability is nobody's job.
The New Yorker piece is largely sympathetic to Anthropic. And I think some of that sympathy is warranted. The researchers profiled there are genuinely grappling with hard questions. They are trying to build something ethical in a space where the incentives push hard in the other direction. That is not nothing.
But good intentions inside the building don't automatically protect the people outside it. And employers — who sit between the technology and their workforce — have obligations that Anthropic cannot fulfill on their behalf.
At minimum, those obligations include three things. First, transparency: employees should know what AI tools are deployed in their workplace, what those tools can and cannot do, and what happens to the information they share with them. Second, accountability: there should be a named person or function responsible for what AI does inside your organization — not IT, not legal, not HR in isolation, but a cross-functional owner who can say no before deployment, not just manage the fallout after. Third, guidance that matches reality: not a policy written for bots and humans, but one that honestly addresses the new entity your employees are already in a relationship with.
There is one more thing I want to say to every employee reading this — not as a policy point, but as someone who has spent twenty years watching organizations make decisions about people, and watching those people navigate the consequences.
You were hired for your role because of the background, the judgment, and the expertise you bring. Claude was not hired. It was deployed. It does not have your history, your relationships, your understanding of the specific context you work in every day. It can help you think. It can help you draft. It can help you find words for things that are hard to say.
But it cannot replace what you bring to those moments. And when you use it to navigate a difficult situation at work, the voice that matters — the one with standing, with context, with something genuinely at stake — is still yours.
Don't let a tool that is neither a bot nor a human convince you otherwise.
Anjali writes and speaks on AI, equity, institutional accountability, and the future of inclusion. If you're interested in bringing her to your organization, conference, or leadership team, she'd love to hear from you.
Attorney. Chief Diversity Officer. Author of Humanity at Work (#1 Amazon Bestseller). TEDx Speaker. Member of Heterodox Academy and Advisory Board of Class Action.
Views expressed are her own and do not represent any employer or institution.