I notice misplaced apostrophes on billboards. I have opinions about the Oxford comma. So when I use em dashes — and I use a lot of them — it's on purpose. I was doing it before the robots got here, and I'll be doing it after.
Last year, the federal government officially removed DEI considerations from its AI policy, framing them as ideological interference with innovation. The message was clear: diversity executives have no place in the AI conversation.
The timing could not be more ironic. Or more costly.
At the exact moment Chief Diversity Officers (CDOs) are being cleared out of the room, AI companies are wrestling with a problem that ethics boards and responsible AI frameworks have struggled to solve: they don't always know what they stand for. And without that foundation, every hard decision gets made in a vacuum. In AI, nearly every decision is a hard decision.
I want to be clear: there are thoughtful people in tech who care deeply about these questions. I have met them. But caring about a problem and having the institutional infrastructure to address it consistently are two different things. What I've observed — across twenty years of sitting in rooms where decisions about people get made — is a pattern worth naming.
The sharpest distinction I can draw between tech culture and institutional culture comes down to this: tech companies tend to be oriented toward output. Did the product ship? Does it work? Is it scaling? These are the questions that drive decisions and shape culture. Output is measurable, immediate, and rewarded.
Institutions are accountable for footprint. Not just what a decision produces, but what it leaves behind. Who was affected. What the ripple effects were. Whether the community still trusts the institution months later. Footprint is slower, harder to measure, and genuinely difficult to optimize for.
AI is the first technology in history that has the output of a tech company and the footprint of an institution. It touches hiring, housing, healthcare, criminal justice, and education — at scale, invisibly, with accountability that hasn't caught up.
That gap between output and footprint is not a technology problem. It is a leadership and accountability problem. And it is the problem that CDOs have spent careers navigating.
I have been in many rooms over the years where an institution was working through a difficult situation — a conflict, a complaint, a community concern. The people in those rooms were smart and well-intentioned. They were focused on the immediate question: how do we address this specific situation as effectively as possible?
The questions I kept bringing were different. How does this response track with how we handled a similar situation in the past? If it doesn't, are we aware of that inconsistency — and what does it signal to the people who were on the other side of the earlier decision? If we believe what we're doing now is right, how do we level set going forward? And perhaps most importantly: how do we address the people who were hurt by the inconsistency, not just the people in front of us today?
These are not questions that resolve themselves. They are questions that require someone whose job it is to hold the institution accountable to its own patterns — someone who is tracking not just the decision in front of them but the cumulative story those decisions are telling over time.
That is institutional conscience. And it is largely absent from how AI companies currently operate.
Most major AI players now have ethics boards, responsible AI teams, or trust and safety functions. These matter. But they are also not the same thing as what I'm describing — and the difference is worth understanding.
Ethics boards tend to be reactive and technical. They get called in after a product is designed to check whether it crosses a line. Trust and safety teams address content after it's published. Responsible AI frameworks audit models after deployment. The orientation is consistent: identify the problem after the fact, contain the damage, move forward.
Institutional conscience is proactive and relational. It means someone is asking whose voice is missing from this conversation before the system gets built. It means there are relationships with skeptical and affected communities that existed before the crisis — so that when something goes wrong, there is trust to draw on rather than a deficit to overcome.
Ethics boards have a mandate. CDOs have relationships. In a genuine crisis, only one of those things actually holds.
If I had to name the single capability I have built over twenty years that I find least represented in fast-moving tech environments, it is this: the ability to hear, process, and synthesize opposing viewpoints — not to resolve them prematurely, not to declare a winner, but to hold the tension between them long enough for something honest to emerge.
This requires having built enough trust with people across difference that they will tell you what they actually think. It requires knowing the difference between discomfort that is productive and distress that needs a different response. And it requires, fundamentally, having a clear sense of what the organization stands for — because without that, there is no ground to stand on when the competing viewpoints can't be reconciled.
AI companies face versions of this challenge constantly. Which communities get prioritized when a model performs differently across demographic groups? Who bears the cost when an automated decision is technically defensible but humanly damaging? These are not engineering questions. They are the kinds of questions CDOs have been answering — imperfectly, under-resourced, with too little organizational authority — for a long time.
The most predictable pushback: AI companies move too fast for the kind of work CDOs do. They are three years old, not three hundred. They don't have entrenched culture or the luxury of slow deliberation.
Speed is precisely when judgment matters most. When you are moving fast, you don't have time to course-correct after something goes wrong. You need someone who has already thought about who gets hurt, who gets left out, and what the organization owes those people — before the decision, not after the headline.
Fast-moving organizations don't have less conflict. They have more, faster, with fewer systems to absorb it. A young company making decisions that affect millions of people has no institutional memory, limited relationships with the communities it is disrupting, and often no one whose explicit job is to ask the uncomfortable question before the product ships. That's not a reason a CDO's skills wouldn't translate there. That's the argument for why one is needed.
If I walked into an AI company tomorrow — call the role whatever is useful, Chief Institutional Conscience, VP of Human Impact, Head of Organizational Trust — the first thing I would do is try to understand what their mission actually is. Not the version on the website. The version that drives decisions when things get hard and the easy answer isn't available.
Because organizations that haven't done that work make every hard decision in a vacuum. They respond to whatever pressure is loudest in the moment. They optimize for output and lose track of footprint until the footprint becomes a crisis. And then they wonder why trust is so hard to rebuild.
The AI companies that earn lasting credibility will be the ones where someone in a position of real authority is consistently asking what the organization owes the people it affects — and holding the institution to the answer even when it's inconvenient.
That is not a new idea. It is a very old one, currently wearing a job title that has become politically fraught. The politics will shift. The need won't.
Anjali writes and speaks on AI accountability, institutional voice, and the future of inclusion leadership. If you're interested in bringing her to your organization, conference, or leadership team, she'd love to hear from you.
Attorney. Chief Diversity Officer. Author of Humanity at Work (#1 Amazon Bestseller). TEDx Speaker.
Views expressed are her own and do not represent any employer or institution.