AI & Voice · Series Part 1
Essay · Artificial Intelligence · Civil Liberties

The Silent Gatekeeper: What AI Is Doing to Who Gets Heard

We spent decades fighting to make sure every voice in an institution could be heard. AI is quietly rebuilding the gates we worked so hard to open.

By Anjali Bindra Patel

I notice misplaced apostrophes on billboards. I have opinions about the Oxford comma. So when I use em dashes — and I use a lot of them — it's on purpose. I was doing it before the robots got here, and I'll be doing it after.

Let me start with something I've been sitting with for a while, something that keeps me up at night in a way that partisan arguments about DEI programs don't.

We are about to let algorithms decide whose voice gets heard. And we are doing it quietly, while everyone is busy arguing about whether diversity offices should exist.

I've spent my career fighting for something that sounds simple: every person in an institution should be able to speak, be heard, and be taken seriously. Not just the ones who fit the existing mold. Not just the ones who make leadership comfortable. Everyone. That fight just got significantly more complicated. And I think most of the people who should be paying attention to it — civil liberties advocates, DEI professionals, communications leaders, university administrators — are looking in the wrong direction.

This is the first in a series of pieces I'm writing on AI and voice. Not because I'm a technologist — I'm not. But because I've spent twenty years studying how institutions decide whose ideas get taken seriously, and I'm watching AI become the most powerful gatekeeper we've ever built, with almost no one asking the right questions about it.

What AI Actually Does — In Plain Language

Artificial intelligence, at its core, is a system that learns patterns from existing data and then applies those patterns to new situations. That's it. There's no magic. There's no independent judgment. The system looks at what has happened before and uses it to predict or evaluate what should happen next.

Which sounds reasonable until you stop and think about what "what has happened before" actually means.

It means: who got hired, historically. Who got promoted. Whose writing was deemed professional. Whose communication style was flagged as aggressive or inappropriate. Whose ideas were surfaced in meetings and whose were ignored. Whose résumés made it past the first screen and whose didn't.

All of that history — including all of the bias embedded in it — gets fed into these systems as training data. And then the system learns to replicate it. Not because anyone programmed it to discriminate. But because discrimination was already in the data, and the system is very, very good at finding patterns.

AI doesn't create bias from nothing. It inherits it from us — and then runs it at a scale and speed that no human gatekeeper ever could.
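To make that concrete, here is a deliberately tiny sketch in Python, using the scikit-learn library. The data is invented, the model is a toy, and nothing here resembles any real vendor's product. What it shows is how a screening tool that is never told a candidate's gender can still learn to penalize it, simply because a word like "women's" appears mostly in the résumés that were historically rejected.

    # A toy illustration, not any real product: a tiny text classifier
    # trained on made-up "historical" hiring decisions in which résumés
    # mentioning a women's organization were rarely advanced. The model
    # is never told anyone's gender; it learns the proxy on its own.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    past_resumes = [
        "captain of chess club, software engineering intern",       # advanced
        "software engineering intern, hackathon winner",            # advanced
        "women's chess club captain, software engineering intern",  # rejected
        "women's coding society lead, hackathon winner",            # rejected
        "robotics team lead, software engineering intern",          # advanced
        "women's robotics team lead, software engineering intern",  # rejected
    ]
    advanced = [1, 1, 0, 0, 1, 0]  # the biased historical outcomes

    vectorizer = CountVectorizer()
    model = LogisticRegression().fit(
        vectorizer.fit_transform(past_resumes), advanced
    )

    # Score two new, otherwise identical candidates.
    new_resumes = [
        "chess club captain, software engineering intern",
        "women's chess club captain, software engineering intern",
    ]
    scores = model.predict_proba(vectorizer.transform(new_resumes))[:, 1]
    for resume, score in zip(new_resumes, scores):
        print(f"{score:.2f}  {resume}")
    # The second résumé scores lower. Nobody wrote that rule; the word
    # "women's" simply carries a negative weight inherited from the
    # biased history the model was trained on.

The point is not the code. The point is the mechanism: give a pattern-finder a biased history and it will reproduce that bias faithfully, at whatever scale you choose to run it.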

This is already happening. Amazon scrapped its AI hiring tool after engineers discovered it was systematically downgrading résumés from women — not because it was told to, but because it had been trained on historical hiring data that skewed male. A landmark study published in Nature found that language models exhibit covert racism against speakers of African American English, assigning them to lower-prestige jobs and recommending harsher criminal sentences — stereotypes more negative than any recorded in human studies. Research from the University of Washington and Brookings found that AI résumé screening tools favored white-associated names over Black-associated names in 85% of tests across nine different occupations. These aren't hypotheticals. They are documented, peer-reviewed findings.

Why This Is a Free Speech Problem

When I talk about this with colleagues in the civil liberties world, the conversation often goes to discrimination law — Title VII, disparate impact, protected classes. And those frameworks matter. But I want to make a different argument, one that I think gets at something deeper.

This is a free speech problem.

Free expression — real free expression, not just the legal right to speak without government punishment — requires that your words can actually reach an audience. It requires that your ideas have a fair chance of being heard and evaluated on their merits. It requires that the gatekeepers deciding what gets amplified and what gets buried are operating transparently and accountably.

AI systems are becoming those gatekeepers. They decide which job applications get seen by human eyes. Which emails get filtered as spam. Which content gets amplified on platforms and which gets quietly suppressed. Which student essays get flagged for review. Which performance reviews get escalated. Which ideas, in short, make it into the room — and which don't.

When a human discriminates, you can challenge them. You can argue. You can appeal. You can take them to court. When an algorithm discriminates, most people never even know it happened.

That invisibility is what makes this different from every other gatekeeper problem we've faced. Traditional bias leaves traces. People remember being passed over. Patterns become visible over time. Whistleblowers exist. But when the decision is made by a system that processes millions of inputs and outputs a score, the person on the receiving end often has no idea a decision was even made, let alone on what basis.

That is structural silencing. And it is being built right now, at scale, by people who mostly have good intentions and are mostly not asking the right questions.

The DEI Parallel — and Where It Breaks Down

I want to be honest about something that makes this complicated for those of us who work in diversity and inclusion: we are not innocent bystanders here.

Some of the same assumptions that have plagued DEI work — the tendency to sort people into categories, to assume that demographic representation is the same as inclusion, to prioritize the appearance of fairness over the practice of it — are now being embedded into AI systems, sometimes by people who think they're building more equitable tools.

I've seen AI hiring tools marketed explicitly as DEI solutions — systems designed to remove human bias from the process by automating it. The problem is that automating a biased process doesn't make it fair. It makes it faster and harder to challenge. You've replaced a hiring manager who might be persuaded to reconsider with a system that has already decided, at scale, before the conversation even starts.

Real inclusion has never been about removing humans from decisions. It's been about making humans more accountable for those decisions. The move toward automated systems, if it's not done with extraordinary care and transparency, doesn't advance that goal. It retreats from it.

What Leaders and Institutions Need to Start Asking

I'm not arguing that AI has no place in institutional life. It clearly does, and the efficiency gains are real. What I'm arguing is that efficiency without accountability is just faster bias. And right now, most institutions are adopting AI tools without asking the questions that should be foundational.

Questions every institution should be asking before deploying AI tools

  1. What data was this system trained on, and does that data reflect the historical biases we are trying to correct?
  2. Who is auditing this system's outputs for disparate impact across demographic groups — and how often?
  3. When this system makes a decision that affects a person, is there a human being they can appeal to? Is there a transparent process for doing so?
  4. Does this system penalize communication styles, credentials, or backgrounds that are underrepresented in its training data?
  5. Who is legally and ethically accountable when this system produces a discriminatory outcome?
  6. Are the people most likely to be affected by this system — the employees, students, or applicants it evaluates — included in the conversation about whether and how to deploy it?

These are not technical questions. They are governance questions. Leadership questions. The kind of questions that a Chief Diversity Officer, a General Counsel, a Chief Communications Officer, and a Chief People Officer should all be in the room asking together — before the system goes live, not after the first lawsuit.

The Accountability Gap

Here is where I think the real opportunity is, and where I think institutions are most exposed: there is almost no one in most organizations whose job it is to sit at the intersection of AI deployment, civil liberties, and institutional voice.

The technologists building these systems are often not asking the DEI questions. The DEI professionals are often not fluent enough in how these systems work to know what to ask. The communications leaders are often brought in after decisions have already been made, to manage the narrative rather than shape the policy. And the legal team is waiting to see what gets challenged in court.

Nobody is minding the whole store.

The question of who gets heard in an institution has always been a leadership question. AI just made it more urgent — and a lot harder to see.

That gap is going to become increasingly expensive — reputationally, legally, and culturally — as AI systems become more embedded in how institutions function. The organizations that get ahead of it will be the ones that treat AI governance not as an IT problem but as a civil liberties and communications problem. The ones that bring the right people into the room before the system is built, not after the headline appears.

Why I'm Writing About This

I want to be transparent about something. I'm writing this series not just because I think these questions matter — though I do, genuinely — but because I think the people who have spent their careers working on voice, inclusion, and institutional accountability have something specific and important to contribute to the AI conversation, and that contribution is not currently being made.

We know what structural silencing looks like. We've spent decades documenting it, challenging it, and building systems to counteract it. That expertise is directly relevant to what AI systems are doing right now. But it's mostly not in the room where AI decisions are being made.

It should be. And I intend to keep saying so.

The next piece in this series will look at a specific place where this is already playing out: AI in hiring, and what the documented evidence actually shows about whose opportunities are being shaped by systems that most candidates never know exist.

We built DEI to open doors. We cannot now stand by while algorithms quietly close them. Not without asking who made that decision — and demanding an answer.


Anjali Bindra Patel

Attorney. Chief Diversity Officer. Author of Humanity at Work (#1 Amazon Bestseller). Member of Heterodox Academy and Advisory Board of Class Action. Speaker on civic discourse, viewpoint diversity, AI accountability, and the future of inclusion.

Views expressed are her own and do not represent any employer or institution.
