We all know humans are imperfect. We’re subject to biases and stereotypes, and when these come into play in the criminal justice system, the most disadvantaged communities end up suffering. It’s easy to imagine that there’s a better way, that one day we’ll find a tool that can make neutral, dispassionate decisions about policing and punishment.
Some think that day has already arrived.
Around the country, police departments and courtrooms are turning to artificial intelligence algorithms to help them decide everything from where to deploy patrol officers to whether to release defendants on bail.
Supporters believe the technology will lead to fairer outcomes, ultimately creating safer communities. Others, however, say that the data fed into these algorithms is encoded with human bias, meaning the tech will simply reinforce historical disparities.
Learn more about the ways in which communities, police, and courts across the U.S. are using these algorithms to make decisions about public safety and people’s lives.