How Science Could Help Prevent Police Shootings

Can data predict which cops are most likely to misbehave in the future?


One morning in April 2015, Rayid Ghani was sitting among more than a dozen big-city police chiefs and officials in a fourth-floor conference room across the street from the White House. It was the latest in a series of meetings the Obama administration had urgently called to discuss curbing police abuses. The day before, a cellphone video had emerged showing a white South Carolina cop shooting an unarmed black man in the back, sparking another wave of Black Lives Matter protests and eventually prompting an FBI investigation. Ghani didn’t know much about law enforcement, having spent most of his career studying human behavior—things like grocery shopping, learning, and voting. But the Pakistani-born data scientist and University of Chicago professor had an idea for how to stop the next police shooting.

Back when he worked for the consulting company Accenture, Ghani had figured out how to guess the final price of an eBay auction with 96 percent accuracy. In 2012, he served on Obama’s reelection campaign, pinpointing supporters who were most likely to shell out donations. Ghani now believed he could teach machines to predict the likelihood that cops would abuse their power or break the law. It was, he thought, “low-hanging fruit.”

Experts have long understood that only a small fraction of cops are responsible for the bulk of police misconduct. In 1981, when research showed that 41 percent of Houston’s citizen complaints could be traced to 12 percent of the city’s cops, the US Civil Rights Commission encouraged every police department to find its “violence-prone officers.” Ever since, most major departments have set up systems to identify so-called bad apples. These systems typically use software to flag officers who have received a lot of citizen complaints or have frequently used force. But each department’s model is different, and no one really knows how well any of them work. Some may overlook officers with many red flags, while others may target cops who haven’t broken any rules. What’s more, the police chiefs at the White House meeting had a hunch that the bad apples were gaming their systems.

Ghani saw a different problem: The departments simply weren’t using enough data. So he made the top cops gathered in Room 430 an offer. If they handed over all the data they’d collected on their officers, he’d find a better way to identify the bad cops.


The Charlotte-Mecklenburg Police Department in North Carolina signed up, agreeing to give Ghani and his team 15 years’ worth of personnel records and other data, provided that its officers’ identities remained anonymous. Charlotte was a good test lab for Ghani’s project: the city had recently seen two police shootings; the case against one officer ended in a mistrial, and the other officer was never charged.

Since 2001, Charlotte had flagged officers for review based on certain criteria, like if the cop had used physical force against a suspect three or more times over the past 90 days. Once an officer was flagged, an internal affairs team would decide whether to issue a warning or to notify his supervisor. But the criteria were built on “a gut feeling,” explains Chief Kerr Putney. “It was an educated guess, but it was a guess nonetheless. We didn’t have any science behind it.” When Ghani’s team interviewed cops and supervisors, almost everyone said the system failed to account for factors like what neighborhoods the officers patrolled or which shifts they worked.
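A threshold rule like the one Charlotte used can be expressed in a few lines of code. The sketch below is purely illustrative: the log layout, field names, and dates are invented for the example, not drawn from the department’s actual early-intervention software.

```python
from datetime import date, timedelta

# Hypothetical use-of-force log: (officer_id, date of incident).
# The structure and names here are assumptions for illustration only.
use_of_force_log = [
    ("officer_17", date(2015, 2, 3)),
    ("officer_17", date(2015, 3, 1)),
    ("officer_17", date(2015, 3, 20)),
    ("officer_42", date(2015, 1, 15)),
]

def flag_officers(log, as_of, window_days=90, threshold=3):
    """Flag any officer with `threshold` or more uses of force in the window."""
    cutoff = as_of - timedelta(days=window_days)
    counts = {}
    for officer, incident_date in log:
        if cutoff <= incident_date <= as_of:
            counts[officer] = counts.get(officer, 0) + 1
    return [officer for officer, n in counts.items() if n >= threshold]

print(flag_officers(use_of_force_log, as_of=date(2015, 3, 31)))
# -> ['officer_17']
```

A rule this blunt treats every use of force the same, regardless of context, which is exactly the complaint the officers and supervisors raised.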

The system also created a lot of false positives, dinging more than 1,100 cops out of a 2,000-person force. “The officers felt like we were accusing them when they didn’t do anything wrong,” Putney says. Out on the street, cops were concerned that accidents or even justified uses of force might be seen as foul play. When Ghani’s team dove into the data, they discovered that nearly 90 percent of the officers who had been flagged were false positives. “It was a huge eureka moment,” Putney says.

Identifying who was truly a problem cop was an obvious priority, but Ghani also wanted to predict who was most likely to misbehave in the future. So his team started to mine more data—any available information on the stops, searches, and arrests made by every Charlotte officer since 2000. In the end they analyzed 300 data points, trying to find which ones could best predict an officer’s chances of acting badly.
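The reporting doesn’t say which statistical technique Ghani’s team settled on, but the general shape of the approach can be sketched as a supervised classifier trained on per-officer feature vectors. Everything in the snippet below is an assumption made for illustration (synthetic data, an arbitrary choice of random forest, feature counts that merely echo the article’s figures), not the team’s actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: in practice each row would be one officer over
# some period, with hundreds of features drawn from stops, searches,
# arrests, shift assignments, recent stressful calls, and so on.
rng = np.random.default_rng(0)
n_officers, n_features = 2000, 300
X = rng.normal(size=(n_officers, n_features))
# Label: 1 if the officer later faced a complaint or internal
# investigation, 0 otherwise (randomly generated here).
y = rng.integers(0, 2, size=n_officers)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A generic supervised classifier; the real system could be any model
# that produces a risk score per officer.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Rank officers by predicted risk so supervisors can review the top of the list.
risk_scores = model.predict_proba(X_test)[:, 1]
top_risk = np.argsort(risk_scores)[::-1][:20]
```

The point of a setup like this is the ranking, not a binary verdict: instead of a hard rule that flags half the force, supervisors get a short list ordered by estimated risk.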

Ghani’s first set of predictions was shaky; it still incorrectly flagged about 875 officers, though it did correctly identify 157 officers who wound up facing a complaint or internal investigation within the following year—making it 30 percent more accurate than Charlotte’s previous model.
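Taking those figures at face value, the new model’s hit rate can be worked out directly. The snippet below is just that arithmetic; the old system’s implied rate is an assumption based on the “nearly 90 percent” false-positive figure quoted above, not a number reported in the article.

```python
# New model, per the reporting: about 875 officers flagged incorrectly,
# 157 flagged correctly (faced a complaint or investigation within a year).
new_true, new_false = 157, 875
new_precision = new_true / (new_true + new_false)

# Old system (assumption): more than 1,100 flagged, of which nearly 90
# percent were false positives, implying a hit rate of roughly 10 percent.
old_precision = 0.10

print(f"new model hit rate: {new_precision:.0%}")   # ~15%
print(f"old system hit rate (assumed): {old_precision:.0%}")
```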


It came as no surprise that Ghani’s team eventually found that one of the best predictors of future problems was a history of past problems—such as using unjustified force or getting into car accidents. But the team also confirmed something many experts and officers had long suspected but could never demonstrate: Officers subjected to concentrated bouts of on-the-job stress—handling multiple domestic-violence or suicide calls, or cases involving young children in danger, for example—were much more likely to have complaints lodged against them by community members. “That’s something we’ve known anecdotally, but we’ve never seen empirical evidence before,” explains Geoffrey Alpert, a criminologist at the University of South Carolina.

Ghani’s research is already spurring changes in Charlotte. His team found that when three or more officers responded to a domestic-violence call, they were much less likely to use force than when only two officers were called to the scene. Putney says that realization has led his department to rethink how it handles emotionally charged incidents. He is eager to see what Ghani’s research says about shift rotations as well. Often, the youngest and least experienced cops get stuck on night shifts, which tend to be the most stressful and violent, and “where they can become desensitized and calloused,” he says. Putney also hopes to use Ghani’s research as a guide for traits to look for when hiring new officers. He is circumspect, though, about the ability to accurately foresee a police officer’s behavior. Some variables will always be unpredictable, he says, like when things go wrong at 3 a.m. But with 300 data points, he adds, “maybe there’s some science behind this after all.”

Ghani agrees there are limitations to his big-data approach. Even the most accurate predictions won’t eliminate bad cops. Preventing abuses may require a wider look at how officers are recruited, trained, counseled, and disciplined—as well as addressing personal and systemic biases. Without that layer of human intervention and analysis, personnel decisions based on predictive data alone could ricochet through a police department, harming morale and possibly making things worse.

“This is the first step,” Alpert says. “It may not be a panacea, but we’ve got to start thinking differently.” Eventually, Ghani says, data from dashboard and body cameras will factor into his calculations, and his system will help dispatchers quickly decide which officer is best suited to respond to a certain type of call at any given moment. He hopes most large police departments will adopt prediction models in the next five years. Most of the police officials at that White House meeting have said they’d like to work with him, and his team is negotiating with the Los Angeles County sheriff and the police chief of Knoxville, Tennessee. “I don’t know if this will work at every department,” he says. “But it’s going to be better than what it is now.”
