Biased Algorithms Are a Racial Justice Issue

Increasingly, life-or-death decisions made by our government are being automated

Dave Gershgorn
Momentum


The Atlanta Police Department displays a city map through PredPol, a predictive policing algorithm used to identify hot spots for potential crime, at the Operation Shield Video Integration Center on January 15, 2015, in Atlanta, Georgia. Photo: Christian Science Monitor/Getty Images

Where to send police patrol cars, which foster parents to investigate, and who gets released on bail before trial: these are some of the most important, life-or-death decisions our government makes.

And, increasingly, those decisions are being automated.

The last eight years have seen an explosion in the capability of artificial intelligence, which is now used for everything from arranging your news feed on Facebook to identifying enemy combatants for the U.S. military. The automated decisions that affect us the most are somewhere in the middle.

A.I.’s core capability is essentially pattern matching. The easiest way to understand it is by thinking about pictures of dogs. If you show an image-recognition algorithm hundreds or thousands of images of dogs, it will start to draw conclusions about what a dog is based on the similarities across those images. That could be the shape of pointed ears, fur, paws, a snout, or a nose. This analysis of data is called “training.” Once trained, the algorithm can look at a new picture and tell whether anything in it matches its idea of a dog.
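To make that training-then-matching loop concrete, here is a minimal sketch in Python. It uses scikit-learn and synthetic stand-in data rather than real dog photos; the features, labels, and choice of model are illustrative assumptions, not how any production image recognizer is actually built.

```python
# A minimal sketch of the "training" described above, using
# scikit-learn and synthetic data in place of real dog photos.
# The features, labels, and model here are illustrative
# assumptions, not any real system's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each "image" has already been reduced to a vector of
# numeric features (ear shape, fur texture, snout length, ...).
n_samples, n_features = 1000, 16
X = rng.normal(size=(n_samples, n_features))
y = (X[:, :4].sum(axis=1) > 0).astype(int)  # 1 = "dog", 0 = "not a dog"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Training": the model studies labeled examples and learns which
# feature patterns tend to co-occur with the "dog" label.
model = LogisticRegression().fit(X_train, y_train)

# After training, the model scores new, unseen "images" against
# the pattern it has learned.
print("accuracy on unseen examples:", model.score(X_test, y_test))
```

The key point for bias is visible even in this toy version: the model’s idea of a “dog” comes entirely from the examples it was trained on, so whatever skew exists in that data becomes the pattern it enforces.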
