Misgendered by Machines: Inside AI’s Discrimination Against Trans People and the Solutions We Need


Even as artificial intelligence is marketed as neutral and objective, it’s already reproducing and intensifying discrimination against trans and nonbinary people. Across sectors, systems trained on rigid, binary labels misread gender‑nonconforming faces, voices, and expressions, turning everyday interactions into sites of systemic harm.
Facial recognition and automated gender‑classification tools, built without inclusive datasets, register higher error rates for anyone who doesn’t conform to cisnormative expectations, especially people of color and disabled trans people. These tools don’t just misclassify; they feed into policing, border control, and corporate surveillance, raising profound data privacy concerns for communities already criminalized and surveilled.
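The disparities described above are typically surfaced by disaggregated audits: measuring a classifier's error rate separately for each group rather than in aggregate. Here is a minimal sketch of such an audit; the records and group labels are entirely hypothetical, and real audits use far richer demographic and intersectional categories.

```python
# Illustrative audit sketch (all data hypothetical): compare a gender
# classifier's error rate per group instead of overall accuracy.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, self_identified_label)."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical results: a model trained only on binary labels cannot
# even represent nonbinary people, so it fails on them every time.
sample = [
    ("cis", "woman", "woman"), ("cis", "man", "man"),
    ("cis", "woman", "woman"), ("cis", "man", "woman"),
    ("trans", "man", "woman"), ("trans", "woman", "woman"),
    ("nonbinary", "man", "nonbinary"), ("nonbinary", "woman", "nonbinary"),
]
print(error_rates_by_group(sample))
# -> {'cis': 0.25, 'trans': 0.5, 'nonbinary': 1.0}
```

The point of the sketch is structural: when the label set itself is binary, the nonbinary error rate is 100% by construction, no matter how "accurate" the model looks overall.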
On social platforms, algorithmic moderation and profiling routinely misgender users by inferring gender from names, pronouns, or appearance. When systems override self‑identification, they expose legal names, deadnames, or old photos, enabling forced outing and targeted harassment. What platforms frame as personalization often functions as automated identity erasure, with little transparency or consent.
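The misgendering pipeline described above often comes down to something as crude as a name-to-gender lookup table that takes precedence over a user's own profile settings. A toy sketch (the name table and function are hypothetical, not any platform's actual code):

```python
# Hypothetical name-based gender inference of the kind the paragraph
# describes. The failure mode is precedence: the lookup wins even when
# the user has explicitly stated their own gender.
NAME_TO_GENDER = {"emily": "woman", "james": "man"}

def infer_gender(name, self_identified=None):
    # Bug-as-designed: falls back to self-identification only when the
    # name is unknown, overriding it otherwise.
    return NAME_TO_GENDER.get(name.lower(), self_identified)

# A trans man named Emily who has set his gender is still misgendered:
print(infer_gender("Emily", self_identified="man"))  # -> "woman"
```

Flipping the precedence, so that self-identification always wins when present, removes the override; that systems ship with the opposite default is the design choice the paragraph criticizes.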
Generative AI compounds this harm. Trained on biased media and web corpora, image and text models frequently render trans and nonbinary people as hypersexualized, deceptive, or tragic. These outputs don’t remain abstract; they circulate as memes, deepfakes, and abusive content that escalate harassment, employment risks, and physical danger.
In healthcare and employment, AI screening and decision tools inherit historical transphobia and racism from clinical notes, hiring records, and insurance data. They can flag gender‑affirming care as anomalous, deny services, or downgrade candidates whose records reflect gender‑affirmation‑related gaps, all without meaningful opt‑out.
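One mechanism behind "flagged as anomalous" is a simple rarity rule inherited from historical base rates: because gender-affirming care appears infrequently in past claims data, a naive frequency threshold marks it for denial or manual review. A hedged sketch with made-up codes, counts, and threshold:

```python
# Hypothetical rarity-based anomaly rule: any claim code seen in less
# than 1% of historical records is flagged. The bias is inherited
# directly from the historical base rates, not written in explicitly.
from collections import Counter

historical_codes = ["checkup"] * 900 + ["flu"] * 95 + ["hrt"] * 5
freq = Counter(historical_codes)
total = sum(freq.values())

def is_anomalous(code, threshold=0.01):
    return freq.get(code, 0) / total < threshold

print(is_anomalous("hrt"))      # -> True: rarity alone triggers the flag
print(is_anomalous("checkup"))  # -> False
```

Nothing in the rule mentions gender, which is why such systems pass superficial fairness checks while still disproportionately flagging care that trans patients need.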
These patterns explain why trans communities report more negative attitudes toward AI in surveys: the technology consistently encodes their disposability. Justice‑oriented solutions require trans leadership in design, strict limits on biometric surveillance, enforceable data privacy protections, and governance that mandates opt‑out rights, impact assessments, and reparative accountability when systems cause harm.