Congress Thinks Diverse Hiring Can Stop Tech’s Racist Bias. History Says They’re Wrong.

Discrimination endures long after minority employees join powerful institutions.


On Wednesday, Congress held a hearing on diversity in the technology industry to address the sector's abysmal statistics on inclusion. Lawmakers and expert witnesses touted increased diversity as a necessary tool for reducing the potentially harmful biases being coded into technology.

“The industry’s workforce has remained mostly homogenous. People of color, women, and older Americans have all been notably absent from the tech workforce,” said Rep. Jan Schakowsky (D-Ill.), who chairs the House Energy and Commerce Committee’s consumer protection subcommittee, during her opening remarks. “The technology itself reflects that lack of diversity. That has real impact against Americans. We’ve seen algorithms’ bias in sentencing guidelines resulting in harsher sentences for minorities,” she continued, expressing a sentiment shared by many lawmakers during the hearing: fix diversity, and you can fix many of the structural ills that have been baked into technology’s algorithms.

While ensuring more diverse representation in the workforce is an important step toward making technology more equitable, research and track records elsewhere suggest it is not a panacea for rooting out inequities, especially at big organizations in high-impact, complex fields.

More diverse hiring practices have increased minority participation in police forces, but research suggests diverse departments do little to reduce law enforcement's disparate racial impact: studies have shown they neither reduce the disproportionate use of lethal force in black communities nor do a better job of addressing those communities' needs.

Policing isn’t the only area where better representation of a group hasn’t alleviated mistreatment of that group. More diverse educators have not closed the racial gap in educational outcomes. The prestigious and clubby financial services industry, which has a comparatively better diversity record, is sometimes touted as a model for tech to follow. But that diversity didn’t stop the industry from targeting black families with subprime mortgages. President Barack Obama’s race didn’t prevent his administration from pursuing housing policies that were detrimental to black wealth. Big tech doesn’t have to look far for similar examples: IBM has one of the oldest workforces among major technology companies, but that hasn’t kept it from being sued for age discrimination after firing thousands of its oldest employees.

This lack of improvement isn’t the fault of the minorities working within these organizations; it more likely stems from societal and organizational factors that remain impervious to changes in personnel. As Charles E. Menifield and Geiguen Shin of Rutgers University, along with Princeton University’s Logan Strother, wrote in a paper on race and policing, improvements should focus on “fundamental macro-level policy changes, as well as changes to meso-level organizational practices” that are better equipped than hiring practices “to address the root causes of racial disparities.” The tech sector is likely no different.

While it is possible that having more minorities at Facebook could have helped the company avoid creating tools that enabled advertisers to discriminate by race with housing ads—which it now says it has fixed—the social-media giant faces a host of problems that are reflective of real-world, systemic racial bias, and where it is less obvious how a more diverse workforce would guarantee change.

The company still makes it possible for advertisers to discriminate against groups of users by targeting ads to proxies for race such as zip code, estimated home value, and estimated income. It’s hard to imagine how more diverse engineers would improve predictive policing algorithms when the crime datasets those algorithms are built on are themselves usually biased. Facial recognition software is, for now, far more accurate at identifying white faces, which could make darker-skinned people more vulnerable to falsely falling under police suspicion. But even if that weren’t the case, more accurate software wouldn’t address the existing bias in the police forces set to use it. Such limits make representation more of a first step than the solution lawmakers on the Energy and Commerce Committee want it to be.
