AI · Higher Education · Equity

AI Is Filtering Itself. What Happens to the College Essay Now?

Students are using AI to write their essays. Admissions offices are using AI to read them. The student writing honestly from the heart may be the one who gets screened out.

By Anjali Bindra Patel

I notice misplaced apostrophes on billboards. I have opinions about the Oxford comma. So when I use em dashes — and I use a lot of them — it is on purpose. I was doing it before the robots got here, and I will be doing it after.

The college essay was never a perfect tool. But it was trying to solve a real problem: how do you tell one highly qualified applicant from another when their transcripts, test scores, and extracurriculars are nearly identical? The answer admissions offices settled on was voice. Personal narrative. The thing that could not be gamed because it was supposed to come from somewhere genuinely individual.

That premise is now under serious pressure. And the pressure is coming from both ends of the process simultaneously.

Students are using AI to write their essays. Admissions offices, overwhelmed by application volumes that have doubled and tripled as students apply to fifteen or twenty schools at a time, are beginning to use AI to screen them. Which means, in a growing number of cases, AI is writing for an audience of AI. And the student who sat down and wrote something honest, imperfect, and genuinely their own may be the one who gets filtered out because their voice does not match what the algorithm was trained to reward.

This is not a hypothetical future problem. It is the admissions landscape right now.

The Gaming Problem

The data on how many students are using AI for admissions essays varies by methodology and population. A 2025 Carnegie survey of more than 3,400 prospective students found that 14 percent of graduating seniors reported using AI to prepare entrance materials, up from effectively zero in 2023. A separate December 2024 ScholarshipOwl survey of more than 8,500 students on a scholarship platform found higher rates among that population, with a majority of Gen Z respondents reporting AI use for admissions and scholarship materials. The divergence in these figures is itself instructive: students with greater access to information about AI tools are adopting them faster. Meanwhile, a 2025 Kaplan survey of more than 200 admissions officers found that 68 percent of colleges have no official policy at all on applicants using AI to write essays. The technology is moving. The governance is not.

Once students understand that an AI is reading their essays, the incentive to write authentically collapses. If the audience is an algorithm, you write for the algorithm. You learn what it rewards: polished structure, certain vocabulary, particular narrative arcs. You use AI to generate the essay, then use AI to optimize it for the screening tools on the other end.

Some are gaming the system more aggressively than that. Duke University eliminated numerical essay ratings entirely in 2023-24, citing the rise of generative AI as a direct factor, acknowledging that the tools had made scoring unreliable. Students and the consultants who advise them took note. Services specifically marketing themselves as AI "humanizers" have proliferated, designed to take AI-generated text and rewrite it to evade detection tools. Common App forbids the use of generative AI but acknowledges it does not check individual essays unless someone files a report of suspected fraud, which means the policy is effectively unenforceable at scale. And research has found that writing by non-native English speakers is far more likely to be flagged by AI detection tools than writing by native speakers, meaning the tools being deployed to catch cheating are systematically penalizing the students who were never cheating in the first place.

The students most likely to navigate all of this successfully are the ones with access to information, resources, and the networks to learn what works. Which means the gaming of AI admissions screening will follow the same pattern as every other admissions advantage: it will widen the gap between students who have support and students who do not.

I am watching this not only as someone who works in higher education and thinks about equity for a living, but as a mother whose children are approaching college age. My kids are writing their own essays. But I am acutely aware that other families are making different calculations, not necessarily out of dishonesty, but out of a reasonable reading of a system that has signaled, loudly, that it is no longer really listening. When the reader is an algorithm, the moral case for authentic writing gets harder to make to a seventeen-year-old who wants to get in.

The student writing honestly from the heart is now at a structural disadvantage. Their authentic voice may not score as well as a polished AI-generated narrative optimized for algorithmic approval.

The Differentiation Fiction

Here is the harder truth underneath the AI question, one that higher education has been reluctant to say out loud: past a certain threshold, most of the students in a selective applicant pool are genuinely qualified. The research bears this out. A 2019 study published in Educational Researcher found that at highly selective institutions, the academic credentials of admitted and rejected students were statistically indistinguishable above a baseline threshold. Sociologist Mitchell Stevens, in his ethnography of a selective liberal arts admissions office, documented how much of the final decision came down to institutional priorities, gut instinct, and the particular readers who happened to review a file on a given day.

The difference between the student who gets in and the student who does not is often not a reliable signal of who will contribute more to the community, succeed more fully in the program, or go on to do more with the degree. It is frequently a signal of who had better preparation, better coaching, and better access to the informal knowledge of how selective admissions works.

The college essay was supposed to cut through that by surfacing the individual behind the transcript. But the individual behind the transcript was always going to be shaped by who had writing coaches, who had parents who went to college and could model what a strong essay looked like, who had the luxury of time to revise. A first-generation student working thirty hours a week writes a different kind of essay than a student with a private college counselor, two rounds of feedback, and a polished final draft. The essay was never as meritocratic as it pretended to be. AI did not create that inequity. It industrialized it.

What Might Actually Work

If the essay is no longer doing what it was designed to do, the question is what might. A few possibilities worth taking seriously, not as complete solutions, but as more honest attempts to get at what admissions is actually trying to learn.

Short video submissions are worth exploring. An unscripted response tells you things a written essay cannot. You hear how someone thinks in real time. You see how they handle a question they have not rehearsed. But video carries its own equity problems. A student in a quiet house with good lighting and a new phone presents very differently than a student sharing a room with younger siblings, caring for a family member, or recording from a space that was never designed for performance. Video rewards the conditions of a student's home environment in ways that have nothing to do with their intelligence or potential.

Which is why I keep coming back to something simpler. The phone call. Not a video call. A phone call. The distinction matters more than it might seem, and it matters most for the students who need the most equitable shot.

On a phone call, everyone's voice sounds roughly like everyone else's. The student in a crowded apartment and the student in a quiet suburb are, for ten minutes, on genuinely comparable footing. What comes through is what they say and how they say it, not the backdrop, the lighting, or the quality of their equipment. No one is penalized for a noisy household or a cracked screen.

A legitimate counterargument: unscripted phone calls can disadvantage students with phone anxiety, certain communication disabilities, or limited English fluency. That concern is real and should shape implementation. A phone call offered as one option among several, with accommodation available upon request, is very different from a phone call as the only pathway. The point is not to replace every other tool. It is to restore the phone call as a serious option, particularly for students who are already disadvantaged by every written format that rewards coaching and polish over authenticity.

What comes through in a real phone conversation is irreplaceable. You hear hesitation and confidence. You hear what someone reaches for when they are not performing. You hear whether a person is curious, whether they listen, whether they can hold a thought under mild pressure. These are not things an algorithm can score. They are, arguably, exactly what a college education is supposed to develop and reward.

Not every technological advance is an improvement. Sometimes what we called inefficiency was doing real work, the slow and irreplaceable work of one person actually paying attention to another. The ten-minute phone call is not a relic. It is a tool that has been undervalued precisely because it cannot be automated. It requires a human on each end. And that, right now, is its greatest strength.

The objection will be that this does not scale. But we should be precise about what that means. It means it costs more time. It means it requires human attention that we have decided we cannot afford. And yet we are spending significant institutional resources building and maintaining AI screening systems that are demonstrably less accurate at assessing human potential than a trained reader would be. We are paying for efficiency and getting inequity. The phone call is slower. It is also better. For the students most likely to be failed by every other part of this system, it may be the only tool that actually sees them.

When Differentiation Is Not Possible: The Case for the Lottery

For the cases where none of this produces a clear signal, where the pool of genuinely qualified candidates is large and the honest differences between them are small, institutions should seriously consider randomization. The lottery model, in which all applicants who meet a defined academic threshold are entered into a random selection pool, has been proposed by admissions researchers including Barry Schwartz and has been quietly piloted at some institutions for waitlist decisions.

A fair objection: admissions is not only about academic credentials. Institutions have legitimate interests in building classes with geographic diversity, first-generation representation, specific talents, and other qualitative factors that a threshold-plus-lottery model would not automatically capture. That objection is well-taken, and a lottery is not a complete solution. What it is, however, is an honest tool for the final stage of selection among candidates who are genuinely equivalent on every dimension that can be meaningfully assessed. The resistance to it is not primarily methodological. It is cultural. We have built an entire industry around the idea that admissions is a precise meritocratic sorting mechanism. Acknowledging that the final cut is largely arbitrary threatens that story. But AI is making that story increasingly difficult to maintain, and a lottery is more honest about what the process actually knows and does not know.
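To make the mechanism concrete: a threshold-plus-lottery model has only two moving parts, an eligibility cutoff and a uniform random draw among everyone who clears it. The sketch below is illustrative, not a description of any institution's actual process; the applicant names, the single composite "score," and the function name are all invented for the example.

```python
import random

def lottery_admit(applicants, threshold, seats, seed=None):
    """Threshold-plus-lottery selection (illustrative sketch).

    applicants: list of (name, score) pairs, where 'score' stands in
    for whatever composite academic threshold an institution defines.
    Every applicant at or above the threshold gets an equal chance.
    """
    rng = random.Random(seed)  # seeded for a reproducible, auditable draw
    eligible = [name for name, score in applicants if score >= threshold]
    if len(eligible) <= seats:
        return eligible  # fewer qualified applicants than seats: admit all
    return rng.sample(eligible, seats)  # uniform random draw among the qualified

# Hypothetical pool: four of the five applicants clear a 3.6 threshold,
# so each eligible applicant has an identical 2-in-4 chance at a seat.
pool = [("A", 3.9), ("B", 3.7), ("C", 3.4), ("D", 3.8), ("E", 3.95)]
admitted = lottery_admit(pool, threshold=3.6, seats=2, seed=42)
```

The design choice worth noticing is that the only place judgment enters is the threshold itself. Everything after the cutoff is transparently arbitrary, which is the model's honesty: it stops pretending to rank candidates it cannot meaningfully distinguish.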

What Institutions Should Actually Do

Diagnosis without prescription is not enough, so let me be direct. Institutions that want to address this gap operationally have at least four concrete moves available to them now, before any technology changes.

Publish a clear policy. The 68 percent of institutions with no official AI essay policy are not neutral. They are permissive by default, in ways that advantage students who know the implicit rules and disadvantage those who do not. A clear policy, whatever its content, is more equitable than strategic ambiguity.

Redesign the prompt. Essays written in response to highly specific, unusual, or personal prompts are harder to AI-generate convincingly. "Describe a time you changed your mind about something important and what it cost you" produces more authentic responses than "tell us about a challenge you overcame." Prompt design is a tool admissions offices already have and are underusing.

Audit detection tools before deploying them. Research has found that AI detection tools flag non-native English speakers at disproportionate rates. Institutions deploying these tools without auditing them for disparate impact are adding a new layer of inequity to a system that already has too many. Training human readers to recognize authentic voice is a more equitable investment than outsourcing that judgment to a tool that cannot do it reliably.

Pilot alternatives at scale. Phone calls, short video responses, and low-stakes asynchronous conversations are not theoretical. Some institutions are already experimenting with them. The results should be shared publicly so the field can learn faster than it currently is.

The Deeper Question

The college essay crisis is a specific version of a problem showing up everywhere AI gets inserted into high-stakes human decisions. The system optimizes for what it can measure, and what it can measure is not the same as what actually matters.

What admissions is actually trying to assess — curiosity, resilience, the capacity to contribute to a community, the kind of person someone is becoming — does not compress cleanly into a 650-word essay that an algorithm can score. It never did. But when a human reader was on the other end, something real could still get through. A sentence that did not follow the formula. A story that did not resolve neatly. A voice that was clearly, imperfectly, someone's own.

The student who writes that way deserves a reader who can recognize what they have done. A system in which AI writes for AI, and the authentic voice is the liability, has lost the thread entirely. And the students it will lose most reliably are the ones it could least afford to miss.


Anjali Bindra Patel

Chief Diversity Officer at Georgetown University Law Center. Attorney. Author of Humanity at Work (#1 Amazon Bestseller). TEDx Speaker. She writes and speaks at the intersection of AI governance, civil discourse, and institutional trust.

Views expressed are her own and do not represent any employer or institution.
