
Deep unlearning

What Dr. Ruha Benjamin taught us about building intelligent products

As we began building more intelligent products at TXI, systems that learn from data, make predictions, and automate judgment, I found myself asking a question I didn't have a good answer to: How do we make sure we're not building bias in? I wanted to go deep on that question with someone who had spent their career at exactly that intersection. So I reached out to Dr. Ruha Benjamin and invited her to spend time with our team.

Dr. Benjamin is the Alexander Stewart 1886 Professor of African American Studies at Princeton University, founder of the Ida B. Wells Just Data Lab, and one of the most important thinkers working on the relationship between technology and racial equity today. She is the author of four books, including Race After Technology: Abolitionist Tools for the New Jim Code, a landmark work on how automated systems can reproduce and intensify historical discrimination, and, most recently, Imagination: A Manifesto. In 2024, she was awarded a MacArthur Foundation "Genius" Fellowship. Her work doesn't just diagnose the problem. It gives us tools to build differently.

What she taught us went far deeper than a bias checklist. It was a framework for unlearning, and it started with a story.

In 2019, Google launched the Pixel 4 with a promise: this phone would work for everyone. Researchers had just shown that facial recognition systems struggled to accurately detect darker skin tones. So Google did what seemed responsible: they decided to diversify their training data. They hired contractors to approach Black people in Atlanta, Los Angeles, and other cities, offering $5 Starbucks gift cards to play a "selfie game."

On the surface, it looked like progress. Good intentions. Real action.

But those contractors were instructed to target people who were unhoused, in shelters, on the streets. They were told to withhold what the facial data would actually be used for. And the people who participated? They were helping build a system that would ultimately be used to surveil and exclude them from the very public spaces where they lived.

As Dr. Ruha Benjamin put it:

"They were being invited to facilitate their own exclusion."


I've thought about that story a lot since Dr. Benjamin joined our Q1 all-hands. Not because it's a story about Google. But because it's a story about what happens when good intentions meet unexamined assumptions, and how the gap between those two things can cause real harm at scale.

At TXI, we're in the middle of our own transformation: evolving from a digital product consultancy into an intelligent product company, one that builds systems capable of learning from data, making predictions, and improving over time. That's an exciting trajectory. It's also a dangerous one if we're not honest about what we're building on.

The data sets we use, the models we train, and the recommendations our systems generate will either perpetuate historical inequities or help dismantle them. The question isn't whether our systems will carry bias. The question is whether we'll have the honesty and discipline to find it before it harms someone.

Unlearning intelligence itself

Dr. Benjamin opened with a provocation I didn't expect: that before we can build intelligent products responsibly, we need to question what we even mean by intelligence.

"We need to unlearn intelligence as merely smartness, as quick thinking, clever computation, mental agility divorced from empathy and ethics," she told us.

At TXI, we've been defining intelligent applications around three questions: Does it learn? Does it decide or predict? Does it get better over time? Those are the right questions for capability. But Dr. Benjamin added a fourth that we hadn't been asking: Does it reproduce or challenge existing power structures?

That question reframes everything. When we measure the intelligence of a system only by its accuracy or speed, we're measuring it for the people it was built to serve, and often missing entirely who it was built to fail.

As she put it: "Computational depth without the social and historical really ain't that deep."

We're not just learning bias. We're scaling it.

The examples Dr. Benjamin shared weren't edge cases. They're patterns.

A language model, given identical criminal cases written once in Standard American English and once in African American English, recommended the death penalty more often for the AAE version. It also consistently assigned more negative character traits and lower-status jobs to AAE speakers. The model wasn't told anything about race. It learned associations from the internet, which carries centuries of racial bias embedded in its text.

A widely-used healthcare algorithm rated Black patients as healthier than equally sick white patients, because it used historical spending as a proxy for health need. Since Black patients receive less care due to systemic barriers, they had lower spending records. The algorithm didn't intend to discriminate. It just automated the discrimination that was already there.
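
For the engineers on our team, that proxy mechanism is worth making concrete. Below is a minimal, hypothetical sketch in Python, not the actual algorithm from the study, of how training against a spending proxy instead of measured need skews a risk score: a model that predicts historical spending will rank two equally sick patients very differently whenever systemic barriers have suppressed one patient's past spending. The patients and scoring functions are invented for illustration.

```python
# Hypothetical illustration of proxy-target bias; not the real algorithm.
# The question we care about is "who needs extra care?", but the label the
# model is trained on is "who spent the most last year?", a proxy that
# quietly encodes past gaps in access to care.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    chronic_conditions: int   # rough stand-in for true health need
    past_spending: float      # shaped by access to care, not just by need

def risk_score_from_spending(p: Patient) -> float:
    """Proxy model: predicts future cost from past cost."""
    return p.past_spending / 10_000

def risk_score_from_need(p: Patient) -> float:
    """Alternative target: score directly from measured health need."""
    return p.chronic_conditions / 10

# Two equally sick (made-up) patients; one faced barriers to care,
# so their historical spending is lower.
a = Patient("Patient A", chronic_conditions=5, past_spending=40_000)
b = Patient("Patient B", chronic_conditions=5, past_spending=18_000)

for p in (a, b):
    print(p.name,
          f"spending-proxy risk = {risk_score_from_spending(p):.2f}",
          f"need-based risk = {risk_score_from_need(p):.2f}")
# The spending proxy rates Patient B as far "healthier" despite identical need.
```

Nothing in that sketch is malicious, and nothing in it mentions race; the harm lives entirely in the choice of target variable.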

Facial recognition systems have been found to be as much as 35% less accurate at identifying Black women than white men, a disparity that has led to wrongful arrests. Not because anyone designed them to fail, but because the training data was skewed from the start.

What each of these has in common: no malice. Just unexamined assumptions, scaled.

The bench

To help us see what unexamined assumptions actually look like in the world, not just in models and datasets, but in the physical structures we've built around us, Dr. Benjamin showed us a photograph of a park bench in Berkeley, California.

It had armrests at regular intervals. A reasonable design: armrests help people sit, provide comfort, and create personal space between strangers. But they also prevent anyone from lying down.

She didn't need to say more for most of us to understand: that bench wasn't designed for everyone. It was designed to exclude, specifically to make public space inhospitable to people experiencing homelessness. The armrests are what architects call hostile architecture. The hostility is built into the structure, invisible to anyone the design wasn't meant to harm.

"If the bench was built for you," she said, "you might not be able to detect it at first."

This is the framework she gave us, and the one I keep returning to. There are obvious spikes: a bench from South Africa with a sign that reads "Europeans Only." Explicit. Unmistakable. But the subtler harm is the bench that looks welcoming, where the exclusion is structural, invisible to those it doesn't touch.

She walked us through what that shift looks like across the domains closest to our work.

In the workplace, the obvious spike is a "No Blacks Need Apply" sign in a window. The subtle version? A hiring algorithm trained on decades of a skewed workforce. No malice required. As she put it: "The more intelligent your system becomes, the more racist and sexist it becomes," because intelligence, unchecked, just gets better at reproducing what already exists. Amazon learned this the hard way when it quietly scrapped an AI recruiting tool that had been trained on ten years of its own hiring data, data that reflected a historically male-dominated tech industry. The system taught itself that male candidates were preferable and began penalizing résumés that included the word "women's," as in "women's chess club," while favoring language patterns more common in male applicants. The algorithm wasn't told to discriminate. It just got very good at learning from a discriminatory past.

In education, the obvious spike is the biased human gaze: in a Yale study, teachers with eye-tracking technology were asked to watch children play and look for "challenging behavior," and their gaze went disproportionately to the Black children in the room, even though all the children were behaving identically. The subtle version? Ed-tech tools that predict student success and flag Black students as "high risk of not graduating" at four times the rate of their white peers. The discrimination is the same. The mechanism is invisible.

In healthcare, Dr. Benjamin made it personal, and unforgettable. She described her grandmother's generation walking up to a hospital and seeing a "whites only" sign. The spike was right there in her face. No interpretation needed.

"Now," she continued, "I can walk through the front door of the hospital and there very well may be a healthcare algorithm making decisions about my care that I don't even know about, allocating resources, directing me in certain ways, and the person administering that doesn't have to have any malice in their heart. They just need to be following directions."

Same harm. Different architecture. That's what she means by infrastructural harm: it's not in anyone's intent. It's built into the structure of the technology itself.

The question for every product we build: Where are our armrests?

Questioning the questions

One of the sharpest moments in Dr. Benjamin's talk came when she turned the lens not on the technology, but on us, the people asking the questions.

She pointed to ed-tech tools that flag students as "high risk of not graduating," tools that, when audited, labeled Black students as high-risk at four times the rate of their white peers. The instinct is to say: let's make the model more accurate. But she pushed further:


"Why are we collecting data on the most vulnerable instead of looking at who is producing the risk? Why isn't data ever collected on those with the power to shape the context in which young people are navigating?"

We rarely collect data on the powerful because the powerful are our clients. They give us the question, and we follow their lead.

That landed hard for me. Human-centered design has always been TXI's foundation: we design for real people, not just users. But Dr. Benjamin's challenge goes deeper: it's not enough to be human-centered if we haven't asked whose humanity is centered. Who defined the problem? Who benefits from the solution? Who bears the risk if it fails?

Building the capacity to see

Diverse teams aren't a nice-to-have. They're how you build the capacity to perceive harm that others can't see.

Someone who has never been profiled by police might design a "community safety" app that increases surveillance. Someone who has never been denied healthcare might design an algorithm that gates access. The harm isn't visible to them, not because they're careless, but because the bench was built for them.

Dr. Benjamin gave us a practice she calls a spike analysis, a discipline of asking, before we build, where the armrests might be hiding. For our designers, that means questioning whose experience defines "normal" in our research. Universal design often means design that works for the people already in the room, and user research that doesn't include people who could be harmed by a product will consistently miss the harm. For our engineers, it means treating training data not as neutral raw material but as a historical artifact, asking what inequities were baked in before we ever touched it, and what patterns we might be amplifying without knowing it. And for those of us shaping product strategy, it means asking not just "does this work?" but "for whom does it work, and who bears the cost when it doesn't?"

The shift Dr. Benjamin is asking for isn't really about process. It's about orientation, moving from "did we intend harm?" to "what harm have we built in, and how would we even know?" Those are different questions. They require different habits, different team compositions, and a different willingness to slow down and look beneath the surface.

As Dr. Benjamin put it: "Not all speed is movement."

This isn't just a TXI question

When I decided to bring Dr. Benjamin in, I realized quickly that the challenges she'd be raising weren't unique to TXI. Every software company building intelligent products is wrestling with the same questions, whether they're asking them out loud or not. So I reached out to peers at several other leading software companies and invited them to join our all-hands and hear her talk directly.

Because the honest truth is: we can't build our way out of this problem company by company. The patterns of bias embedded in our industry's data, tools, and teams are systemic, and that means the unlearning has to be too. If more of us are asking these questions, challenging our assumptions, and holding each other accountable, we have a better chance of actually building something worthy.

That's also why we're publishing this piece. Not to signal that TXI has figured it out. But to add our voice to a conversation the entire industry needs to be having more loudly.

What I'm still sitting with

I'll be honest: Dr. Benjamin's talk was uncomfortable in the way that good thinking usually is.

TXI works in healthcare and industrial sectors, exactly the domains where algorithmic bias has caused documented harm. We're building systems that will make recommendations affecting real people's lives. And we're doing it as a mostly white organization, which means we have our own version of the bench problem: there are harms we simply cannot see from where we sit.

That's not a reason to stop building. But it is a reason to build differently, with more humility about what we might be missing, more intentionality about who's in the room, and more willingness to question the questions before we answer them.

As Dr. Benjamin reminded us: "Not all speed is movement."

Let's build systems worthy of the power they'll wield over people's lives.


About the author

Mark Rickmeier is CEO of TXI, a 100% employee-owned digital consultancy specializing in custom software solutions for data-driven transformation. Under his leadership, TXI transitioned to an Employee Stock Ownership Plan (ESOP) in 2022, fundamentally restructuring how the company creates and distributes value across its team of strategists, designers, and developers spanning four countries. Beyond TXI, Mark founded the Kermit Collective, fostering collaboration among consulting firms, and serves on the Forbes Business Council as Business Innovation Chair and Fast Company Executive Board as Company Culture Chair. His commitment to building inclusive, employee-owned organizations stems from a belief that diverse perspectives and equitable structures don't just drive better business outcomes; they're essential.

Published by Mark Rickmeier in DEIB
