I notice misplaced apostrophes on billboards. I have opinions about the Oxford comma. So when I use em dashes — and I use a lot of them — it's on purpose. I was doing it before the robots got here, and I'll be doing it after.
In the summer of 2025, the Algorithmic Justice League published the results of a two-year effort to document how American travelers actually experienced TSA's facial recognition program. Researchers collected scorecards from hundreds of people across 91 airports. The technology itself has been debated in Congress, studied by NIST, and covered in the Washington Post. What was missing was something simpler: what did people actually understand about what was happening to them?
The answer was almost nothing.
Ninety-nine percent of respondents said no TSA officer verbally told them they could decline the face scan. Nearly three in four said they received no notice the scanning was happening at all. Some who tried to opt out faced open hostility. A program the agency calls voluntary was experienced, almost universally, as something else entirely.
This is the consent gap. And I can tell you from my work leading equity and inclusion at a law school that it is not unique to airports.
TSA's facial recognition program has expanded from a handful of checkpoints in 2023 to more than 2,100 devices across 250 airports, with plans to reach 430 airports. The agency describes it as a pilot that enhances efficiency. A bipartisan group of twelve senators wrote to the DHS Inspector General in late 2024 to note that TSA had not actually provided Congress with evidence the technology catches fraudulent documents, reduces wait times, or stops threats. The government's own Privacy and Civil Liberties Oversight Board, in a May 2025 report, identified significant gaps in transparency, individual rights, and safeguards against scope creep.
What the PCLOB report could not capture was the lived texture of those gaps. And that is what AJL set out to document.
Travelers don't encounter a policy. They encounter a person in a uniform, a machine they don't have a name for (TSA calls it a CAT-2 unit), and signage describing something called "biometric identification verification." None of those terms signal to an ordinary traveler that a camera is generating a mathematical map of their face and comparing it against a federal database. The right to opt out exists in TSA guidance. The infrastructure for that right to be meaningfully exercised does not.
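For readers who want a concrete picture of what that "mathematical map" means, the sketch below shows the basic shape of 1:1 face verification: each face image is reduced to a numeric vector, and two vectors are declared a match if they are similar enough. Everything in it is a placeholder; TSA's actual models, thresholds, and matching pipeline are not public in this form.

```python
import numpy as np

# Toy illustration of 1:1 face verification. A live camera capture and a
# credential photo are each reduced to a numeric vector (an "embedding"),
# and the system declares a match if the vectors are similar enough.
# Everything here is a stand-in: real systems use learned models with
# hundreds of dimensions and vendor-specific thresholds.

def embed(face_pixels: np.ndarray) -> np.ndarray:
    """Placeholder for a learned face-embedding model."""
    v = face_pixels.flatten().astype(float)
    return v / np.linalg.norm(v)

def verify(live_capture: np.ndarray, credential_photo: np.ndarray,
           threshold: float = 0.95) -> bool:
    """Return True if the two images are judged to show the same person."""
    similarity = float(embed(live_capture) @ embed(credential_photo))
    return similarity >= threshold

rng = np.random.default_rng(0)
live = rng.random((8, 8))                          # pretend camera capture
same_person = live + rng.normal(0, 0.05, (8, 8))   # slightly different capture of the same face
other_person = rng.random((8, 8))                  # unrelated face

print(verify(live, same_person))   # expected: True
print(verify(live, other_person))  # expected: False
```

The detail worth noticing is the threshold: it is a policy choice, not a fact of nature, and where it is set determines how often travelers fail to match their own ID versus get matched to someone else's.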
I recognize this pattern immediately. In equity work, we call it the gap between formal access and meaningful access. We know what it looks like when a polling place is technically open but practically inaccessible. We know what it looks like when a workplace grievance process exists on paper but carries enough social cost to be unusable. TSA's consent architecture follows the same logic, and it deserves the same scrutiny.
The AJL report is not a generalized argument against facial recognition. It is a record of who pays when institutions deploy powerful AI systems without equity-centered governance.
The answer, as with most governance failures, is not everyone equally.
According to a 2024 evaluation of the algorithm TSA uses for identity verification, the false positive rate — being wrongly matched to someone else's credentials — was 5.36 times higher for West African women over 65 than for Central American males aged 12 to 20. West African individuals experienced false negatives — failing to match their own credentials — at roughly 9% above the population average. These are not hypothetical disparities. They translate into missed flights, confrontations with security officers, and in the worst cases, encounters with law enforcement that began because an algorithm made a mistake.
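To make those relative figures concrete, here is a rough back-of-the-envelope sketch. The baseline error rates, daily traveler volume, and group share below are hypothetical placeholders, not numbers from the evaluation; only the 5.36x multiplier and the roughly 9% elevation come from the figures above.

```python
# Back-of-the-envelope arithmetic: how relative error-rate disparities turn
# into absolute numbers of affected travelers. The base rates, traveler
# volume, and group share are hypothetical placeholders; only the 5.36x and
# +9% disparity figures come from the evaluation discussed above.

daily_screenings = 2_000_000        # hypothetical daily facial-recognition screenings
base_false_positive_rate = 1e-4     # hypothetical baseline rate of wrong matches
base_false_negative_rate = 1e-2     # hypothetical baseline rate of failed matches

false_positive_multiplier = 5.36    # reported disparity for West African women over 65
false_negative_uplift = 0.09        # reported elevation for West African individuals

group_share = 0.001                 # hypothetical share of travelers in the affected group
group_travelers = daily_screenings * group_share

wrong_matches_per_day = group_travelers * base_false_positive_rate * false_positive_multiplier
failed_matches_per_day = group_travelers * base_false_negative_rate * (1 + false_negative_uplift)

print(f"Travelers in group per day (hypothetical): {group_travelers:,.0f}")
print(f"Expected wrong matches per day:            {wrong_matches_per_day:,.2f}")
print(f"Expected failed matches per day:           {failed_matches_per_day:,.2f}")
```

Whatever the true baseline rates turn out to be, a 5.36x multiplier means the burden of the system's mistakes falls several times more heavily on some travelers than on others, every day, at national scale.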
Trans travelers face a particular vulnerability the report documents carefully. Fifty-nine percent of respondents to the 2022 U.S. Transgender Survey said that none of their IDs listed the gender they preferred. Facial recognition systems trained on binary gender classifications, and matched against identification documents that may not reflect how someone currently looks, generate errors at higher rates for trans individuals — and those errors happen in some of the most high-stakes, public, and coercive environments imaginable.
These are predictable outcomes of building a powerful identification system without asking, at the design stage, who will be most harmed when the system fails — and without building the accountability structures to catch and correct those failures in real time.
I want to be direct here, because higher education has largely been missing from this conversation and it should not be.
During the pandemic, colleges and universities deployed AI-powered proctoring software at extraordinary scale. Tools like Proctorio and ExamSoft monitored students through their own webcams, flagging behaviors the algorithm interpreted as suspicious: looking away from the screen, moving their lips, having ambient noise in the room. Students were not meaningfully told how the systems worked, what data was being collected, or how flags would affect their academic standing. They were told, functionally, that this was how exams worked now.
The documented racial disparities in these systems were significant. Facial recognition components in proctoring software consistently performed worse on students with darker skin tones, flagging them at higher rates for behaviors the system could not accurately read. Students with disabilities, students in noisy or crowded living situations, students whose home environments did not match the assumptions built into the algorithm — all bore disproportionate cost. Some were referred to academic integrity proceedings based on flags generated by software that has never been independently validated for accuracy or fairness.
In most cases, the consent infrastructure was identical to what AJL documented at TSA checkpoints. A policy existed. Students were required to agree to terms of service to access their own coursework. That agreement was not meaningful consent. It was a condition of participation that carried no real alternative.
As a law school administrator, I think about this in a specific way. We are training the people who will write AI governance policy, argue AI cases before courts, and advise institutions on AI compliance for the next fifty years. If they first encounter AI governance as students, in a system where their data is harvested without explanation, their behavior is surveilled without transparency, and their ability to object is structurally foreclosed, that is the governance model they will normalize. What we model in higher education does not stay in higher education.
The same dynamic is playing out in student conduct systems, where AI tools are being used to flag social media posts and predict behavioral risk. In admissions, where algorithmic scoring has already produced documented disparities and resulted in federal scrutiny. In faculty hiring, where AI resume screening tools carry forward the biases embedded in the data they were trained on. Across all of these contexts, the pattern is consistent: the technology is deployed before the governance infrastructure exists, the people most affected have the least voice in the decision, and accountability mechanisms are built, if at all, after harm has already occurred.
There is a line in the AJL report that I keep coming back to.
The authors note that as facial recognition becomes normalized in airports, its expansion into other contexts — government buildings, public transit, schools — becomes less likely to be successfully challenged. The legal standard for privacy in the United States depends in part on what society considers a "reasonable expectation." Normalize enough surveillance, and the reasonable expectation shifts. Not because of a legislative choice or a judicial ruling. Because of the quiet accumulation of encounters with technology that people did not know they could refuse.
This is how surveillance infrastructure becomes permanent. And universities are participating in that normalization whether they intend to or not.
The EU AI Act bans real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, with only narrow exceptions, treating it as an unacceptable risk. Some U.S. cities and states have moved to limit its use. But higher education has no equivalent framework. Individual institutions make individual decisions, largely outside of public view, with minimal input from the students and faculty most affected. And the organizations that accredit those institutions have not yet treated AI governance as a core accreditation question.
That needs to change.
The AJL report recommends halting TSA's use of facial recognition to allow for genuine public deliberation. That recommendation is worth taking seriously — not because the technology is inherently illegitimate, but because the conditions for its legitimate use have not been met.
Those conditions are not primarily technical. The algorithmic accuracy questions are real and unresolved, but they are not the core of what the AJL survey documents. The core is institutional: the gap between what policy says and what people experience; the absence of a meaningful opt-out in practice; the lack of accountability when those who invoke their rights are met with hostility; the removal of oversight mechanisms without public notice.
Closing that gap requires treating public consent as a genuine governance question — one that belongs upstream of deployment rather than downstream of it. It requires asking not just whether a technology works, but whether the people most likely to be harmed by its errors have had a real voice in deciding whether it should be used at all.
In higher education specifically, this means faculty governance bodies engaging AI deployment decisions before contracts are signed, not after. It means students having access to independent channels to raise concerns about algorithmic systems affecting their academic standing. It means institutions publishing clear, plain-language disclosures about what AI tools are in use, what data they collect, and what recourse exists when the system is wrong.
None of that is technically complicated. All of it is institutionally inconvenient. And that inconvenience is exactly where equity work lives.
The institutions getting this right will be the ones that already know, from decades of civil rights and inclusion practice, that governance is not a compliance exercise. It is a trust exercise. And trust, once lost to a population that never consented to the terms, is very hard to rebuild.
It is not too late to ask the right questions. But in higher education, as at the airport, the window for doing so before the infrastructure becomes too entrenched to challenge is closing faster than most institutions realize.
Anjali writes and speaks on AI governance, institutional accountability, and equity in higher education. If you're interested in bringing her to your organization, conference, or leadership team, she'd love to hear from you.
Get in touch →

Chief Diversity Officer at Georgetown University Law Center. Attorney. Author of Humanity at Work (#1 Amazon Bestseller). TEDx Speaker. She writes and speaks at the intersection of AI governance, civil discourse, and institutional trust.
Views expressed are her own and do not represent any employer or institution.