I notice misplaced apostrophes on billboards. I have opinions about the Oxford comma. So when I use em dashes — and I use a lot of them — it's on purpose. I was doing it before the robots got here, and I'll be doing it after.
Think about what it feels like to apply for a job and not hear back.
Most people assume the same thing. Not good enough. Too old. Too much experience. Not the right fit. You run through the list in your head and eventually you move on, carrying a quiet accumulation of self-doubt that nobody ever quite names out loud.
Now imagine that happening one hundred times.
That is Derek Mobley's story. Over 100 job applications. Over 100 rejections. And according to the lawsuit he filed — Mobley v. Workday — every single one of those employers used the same AI hiring platform to screen him out. Workday has not been found liable. The case is ongoing. But a federal court has said it can move forward. And this Sunday, March 7th, the opt-in deadline for the collective action closes.
Most people who went through a Workday hiring process and never heard back don't know any of this exists. They just assumed they weren't good enough.
That assumption — that the system is neutral and the problem is you — is exactly what algorithmic discrimination depends on.
The lawsuit alleges that Workday's AI screening platform ranked or rejected applicants in ways that disproportionately affected older workers — those 40 and older — in potential violation of the Age Discrimination in Employment Act. The complaint also raises claims related to race and disability discrimination.
Workday is one of the most widely used HR platforms in the world. When you apply for a job at a major company and submit your résumé through an online portal, there's a reasonable chance Workday is processing it before any human being ever sees your name.
That's not a problem in itself. The problem is what happens when that processing reflects the same biases that human hiring always has — but faster, at greater scale, and with almost no transparency about how decisions are being made.
When a hiring manager passes on your résumé, there's at least a human being who made a choice. You can imagine confronting them. You can file a complaint. You can build a pattern of evidence over time. The law has mechanisms, however imperfect, for challenging human discrimination.
When an algorithm makes the decision, most people never even know it happened. There is no face. No moment. Just silence — and the assumption that the silence means something about you.
I've been writing about AI and institutional power for the past several months, and what strikes me most about Mobley v. Workday is not the specific allegations. It's the fact that it got this far.
Challenging algorithmic discrimination in court is extraordinarily difficult. These systems are proprietary — companies are not required to open their algorithms to scrutiny. The people harmed often don't know they've been harmed. And even when they suspect something went wrong, connecting their individual experience to a pattern requires the kind of data that only the company possesses.
The fact that a federal court has authorized notice to potential members of the collective means the court has found enough here to let the case proceed. That is not a finding of liability. But it is the legal system saying: this question deserves an answer.
For twenty years I have worked on inequity that is structural, invisible, and deniable. AI doesn't create that problem. It inherits it. And then scales it at a speed no human gatekeeper ever could.
That's what makes algorithmic discrimination so hard to fight. And so important to name.
Derek Mobley has a name and a lawsuit. But for every Derek Mobley, there are thousands of people who applied through the same platform, got the same silence, and never connected those dots. They're not plaintiffs. They're not part of any class action. They just stopped applying, or took jobs that weren't right for them, or quietly concluded that the market had decided something about their worth.
This is who I think about when I think about AI and employment. Not the executives debating whether to deploy AI hiring tools. Not the lawyers arguing about liability standards. The person at 55 who was laid off and can't figure out why every application disappears into a void. The person who suspects age has something to do with it but can't prove it and doesn't know where to start.
The law has always protected these people imperfectly. But it has protected them. The question Mobley v. Workday is really asking is whether those protections survive when the decision-maker is a machine.
Whether or not this case ultimately succeeds, it is already doing something important: it is making the invisible visible. It is forcing a conversation about what happens when the systems that control access to opportunity are allowed to operate without transparency or accountability.
Every organization using AI hiring tools right now should be asking the questions this lawsuit raises. Not because they expect to be sued — though that risk is real and growing — but because the purpose of a hiring process is to find the best people. An algorithm that systematically screens out qualified candidates based on age, race, or disability isn't just legally risky. It's bad for the organization. It is building teams that look like the past instead of the future.
And if you applied for jobs through Workday since September 2020 and were 40 or older at the time, you may have a stake in this case. The deadline is this Sunday. It costs nothing to find out.
If you applied for jobs through Workday after September 2020 and were 40 or older at the time, you may be eligible to opt in to the Mobley v. Workday collective action. A federal court has authorized notice to potential members of the collective. The opt-in deadline is this Sunday, March 7th.
Visit workdaycase.com for details →
This is not legal advice. Consult an attorney with questions about your specific situation.
Derek Mobley applied to over 100 jobs. He kept going when most people would have stopped. His persistence created a record. That record became a lawsuit. That lawsuit is now forcing a federal court to grapple with one of the hardest questions of the AI era.
The least we can do is pay attention.
Anjali writes and speaks on AI, equity, institutional accountability, and the future of inclusion. If you're interested in bringing her to your organization, conference, or leadership team, she'd love to hear from you.
Get in touch →

Attorney. Chief Diversity Officer. Author of Humanity at Work (#1 Amazon Bestseller). Member of Heterodox Academy and Advisory Board of Class Action. Speaker on AI, civic discourse, viewpoint diversity, and the future of inclusion. Follow on X →
Views expressed are her own and do not represent any employer or institution.