As we began building more intelligent products at TXI - systems that learn from data, make predictions, and automate judgment - I found myself asking a question I didn't have a good answer to: How do we make sure we're not building bias in? I wanted to go deep on that question with someone who had spent their career at exactly that intersection. So I reached out to Dr. Ruha Benjamin and invited her to spend time with our team.
Dr. Benjamin is the Alexander Stewart 1886 Professor of African American Studies at Princeton University, founder of the Ida B. Wells Just Data Lab, and one of the most important thinkers working on the relationship between technology and racial equity today. She is the author of four books, including Race After Technology: Abolitionist Tools for the New Jim Code, a landmark work on how automated systems can reproduce and intensify historical discrimination, and, most recently, Imagination: A Manifesto. In 2024, she was awarded a MacArthur Foundation "Genius" Fellowship. Her work doesn't just diagnose the problem. It gives us tools to build differently.
What she taught us went far deeper than a bias checklist. It was a framework for unlearning, and it started with a story.
In 2019, Google launched the Pixel 4 with a promise: this phone would work for everyone. Researchers had just shown that facial recognition systems struggled to accurately detect darker skin tones. So Google did what seemed responsible: they decided to diversify their training data. They hired contractors to approach Black people in Atlanta, Los Angeles, and other cities, offering $5 Starbucks gift cards to play a "selfie game."
On the surface, it looked like progress. Good intentions. Real action.
But those contractors were instructed to target people who were unhoused, in shelters, on the streets. They were told to withhold what the facial data would actually be used for. And the people who participated? They were helping build a system that would ultimately be used to surveil and exclude them from the very public spaces where they lived.
As Dr. Ruha Benjamin put it:
"They were being invited to facilitate their own exclusion."
I've thought about that story a lot since Dr. Benjamin joined our Q1 all-hands. Not because it's a story about Google. But because it's a story about what happens when good intentions meet unexamined assumptions, and how the gap between those two things can cause real harm at scale.
At TXI, we're in the middle of our own transformation: evolving from a digital product consultancy into an intelligent product company, one that builds systems capable of learning from data, making predictions, and improving over time. That's an exciting trajectory. It's also a dangerous one if we're not honest about what we're building on.
The data sets we use, the models we train, and the recommendations our systems generate will either perpetuate historical inequities or help dismantle them. The question isn't whether our systems will carry bias. The question is whether we'll have the honesty and discipline to find it before it harms someone.
Unlearning intelligence itself
Dr. Benjamin opened with a provocation I didn't expect: that before we can build intelligent products responsibly, we need to question what we even mean by intelligence.
"We need to unlearn intelligence as merely smartness, as quick thinking, clever computation, mental agility divorced from empathy and ethics," she told us.
At TXI, we've been defining intelligent applications around three questions: Does it learn? Does it decide or predict? Does it get better over time? Those are the right questions for capability. But Dr. Benjamin added a fourth that we hadn't been asking: Does it reproduce or challenge existing power structures?
That question reframes everything. When we measure the intelligence of a system only by its accuracy or speed, we're measuring it for the people it was built to serve, and often missing entirely the people it was built to fail.
As she put it: "Computational depth without the social and historical really ain't that deep."
We're not just learning bias. We're scaling it.
The examples Dr. Benjamin shared weren't edge cases. They're patterns.
A language model given identical criminal cases, written once in Standard American English and once in African American English (AAE), recommended the death penalty more often for the AAE version. It also consistently assigned more negative character traits and lower-status jobs to AAE speakers. The model wasn't told anything about race. It learned associations from the internet, which carries centuries of racial bias embedded in its text.
A widely used healthcare algorithm rated Black patients as healthier than equally sick white patients because it used historical spending as a proxy for health need. Since Black patients receive less care due to systemic barriers, they had lower spending records. The algorithm didn't intend to discriminate. It just automated the discrimination that was already there.
Facial recognition systems are 35% less accurate at identifying Black women than white men, a disparity that has led to wrongful arrests. Not because anyone designed them to fail, but because the training data was skewed from the start.
What each of these has in common: no malice. Just unexamined assumptions, scaled.
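To make the healthcare example concrete, here's a minimal, hypothetical sketch of the proxy problem at its core. The names and numbers are synthetic, and this is not the actual algorithm; it only shows how ranking patients by past spending instead of actual need quietly deprioritizes anyone who had less access to care in the first place.

```python
# Hypothetical sketch of proxy bias: ranking by spending instead of need.
# All names and numbers are synthetic and purely illustrative.

from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    true_need: float      # underlying severity of illness (what we want to rank by)
    past_spending: float  # dollars of care actually received (the proxy the model sees)

# Two patients with identical need; one has historically received less care.
patients = [
    Patient("patient_a", true_need=0.8, past_spending=12_000),
    Patient("patient_b", true_need=0.8, past_spending=6_000),  # same need, less access
]

# Ranking by the proxy (spending) vs. ranking by actual need.
by_proxy = sorted(patients, key=lambda p: p.past_spending, reverse=True)
by_need = sorted(patients, key=lambda p: p.true_need, reverse=True)

print([p.name for p in by_proxy])  # patient_a is prioritized, despite equal need
print([p.name for p in by_need])   # by need, the two patients tie
```

Nothing in the sketch mentions race at all; the skew rides in entirely on the proxy, which is exactly why it's so easy to ship without noticing.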
The bench
To help us see what unexamined assumptions actually look like in the world, not just in models and datasets, but in the physical structures we've built around us, Dr. Benjamin showed us a photograph of a park bench in Berkeley, California.
It had armrests at regular intervals. Reasonable design: armrests help people sit, provide comfort, and create personal space between strangers. But they also prevent anyone from lying down.
She didn't need to say more for most of us to understand: that bench wasn't designed for everyone. It was designed to exclude - specifically, to make public space inhospitable to people experiencing homelessness. The armrests are an example of what architects call hostile architecture. The hostility is built into the structure, invisible to anyone the design wasn't meant to harm.
"If the bench was built for you," she said, "you might not be able to detect it at first."