When the Algorithm Misgenders You: A Personal Reflection on AI and Identity

Published on October 30, 2025 at 12:54 PM

Last year, my gender was corrected, and it wasn't by a human behind a screen. It was an algorithm, trained on millions of data points, that had decided "they/them" was an error.

My research explores these micro-moments of erasure and how they reveal something deeper about who gets to define normality in technology. AI systems don't just reflect our biases; they enforce them, line by line of code.

The first time an AI assistant corrected my pronouns, I laughed. The second time, I stopped.

I had written an email draft using an AI-powered writing assistant. Every instance of they/them was automatically changed to she/her. Even when I set the language to "inclusive English," the program politely highlighted my pronouns in red, suggesting they were "grammatically inconsistent."

As a gender researcher, I study algorithmic bias. As a non-binary person, I live it.

Most people think bias in artificial intelligence is a technical glitch, a problem of bad or incomplete training data. But I've learned it goes much deeper: it's an epistemological bias, built into the assumptions of the systems themselves. Who counts as a "normal" user? Whose body, name, or voice fits the model?
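To see how an assumption becomes an error message, here is a minimal, purely hypothetical sketch in Python. The rule, the function name, and the example sentence are all invented for illustration; no real grammar checker's code is quoted here.

```python
# A deliberately naive "number agreement" rule, of the kind a grammar
# checker might encode. This is not any real product's code; it is a
# sketch of how a design assumption becomes an error message.

SINGULAR_THEY = {"they", "them", "their", "theirs", "themself"}

def flag_pronouns(tokens: list[str]) -> list[str]:
    """Flag 'they' after a singular name as 'grammatically inconsistent'.

    The baked-in assumption: a singular person takes she/her or he/him,
    so singular 'they' can only be an agreement error.
    """
    warnings = []
    saw_singular_name = False
    for i, tok in enumerate(tokens):
        if tok.lower() in SINGULAR_THEY and saw_singular_name:
            warnings.append(f"token {i}: '{tok}' is grammatically inconsistent")
        # Crude heuristic: a capitalized word mid-sentence is a name.
        if i > 0 and tok[:1].isupper():
            saw_singular_name = True
    return warnings

print(flag_pronouns("When Alex arrived , they sat down .".split()))
# -> ["token 4: 'they' is grammatically inconsistent"]
```

The rule is only a few lines long, and every one of them encodes a worldview: that a person referred to by name cannot be a "they."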

When we feed machines historical data, we also feed them the hierarchies of the past. And when those machines are used to filter job applications, moderate content, or personalize ads, they quietly reproduce those hierarchies at scale.
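To make that mechanism concrete, here is a toy sketch with wholly synthetic data. Nothing about it comes from a real system; the roles, counts, and function names are invented. But it shows how "learning" from a skewed past turns that past into a prediction.

```python
# A toy illustration with entirely synthetic data: a screening model that
# "learns" from past decisions by copying their frequencies.

from collections import Counter

# Hypothetical historical record: which pronouns past shortlisted
# candidates for each role used.
history = [
    ("engineer", "he"), ("engineer", "he"), ("engineer", "he"),
    ("engineer", "she"), ("nurse", "she"), ("nurse", "she"),
]

counts = Counter(history)  # "training" here is just counting the past

def expected_pronoun(role: str) -> str:
    """Return the pronoun the model has learned to expect for a role."""
    candidates = {p: n for (r, p), n in counts.items() if r == role}
    return max(candidates, key=candidates.get)

print(expected_pronoun("engineer"))
# -> 'he': yesterday's hierarchy, applied automatically to every new case.
```

Scale that counting up to millions of records and a statistical model, and the dynamic is the same: the past becomes the default, applied to everyone who comes next.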

I often ask my students to imagine an algorithm as a mirror, but a mirror that reflects only what it has already seen. When it encounters something new (a different pronoun, a darker skin tone, a non-binary voice), it glitches. It refuses to see.

This is why feminist and queer approaches to technology are not optional add-ons; they’re urgent acts of repair.

We need engineers who understand social theory, policymakers who grasp what intersectionality means in practice, and users who demand transparency. Above all, we need to design systems that don’t just recognize difference, but are built for difference.

Because no one should have to explain their existence to a machine — or to the people who program it.