Justice in the Age of Big Data

Tucker W
2 min read · Mar 22, 2021

In Weapons of Math Destruction, Cathy O’Neil devotes chapter 6 to how justice is affected by Weapons of Math Destruction, or WMDs. O’Neil introduces the topic of skewed justice with PredPol, a WMD that uses an algorithm to predict hotspots for future crime from historical data. Initially, PredPol was viewed as a savior by many small police departments because it made their patrol routes significantly more efficient. First impressions of a technology like this suggested that crime would plummet wherever a police force adopted it. However, that is not exactly the case. The issue with predictive technology is that it can become biased by the data it is given. While out on patrol, police submit many data points for very minor crimes, like possession of marijuana or underage drinking. These data points skew the predictive algorithm, pushing it to focus more heavily on the areas where those negligible reports originated. These areas mainly consist of minorities and lower-income individuals. The algorithm then transforms from a crime-prevention algorithm into a class-policing algorithm. As O’Neil says, it makes being poor a crime. The algorithm will not target white-collar crime, since it contains hardly any data on it; it will only target lower-class areas, where so many data points for minor crime have been entered into the system.
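To make the feedback loop concrete, here is a minimal sketch in Python. This is not PredPol’s actual model; the neighborhoods, rates, and patrol rule are all invented for illustration. It just shows how patrolling wherever past reports cluster generates new reports in that same place, even when two areas have identical true crime rates.

```python
# Toy simulation of the feedback loop O'Neil describes: patrols go where the
# model predicts crime, patrols generate minor-crime reports there, and those
# reports feed back into the model. All values here are hypothetical.
import random

random.seed(0)

# Two neighborhoods with the SAME true rate of minor offenses.
true_minor_crime_rate = {"A": 0.3, "B": 0.3}

# Historical data starts only slightly skewed toward neighborhood A.
reports = {"A": 12, "B": 10}

for day in range(200):
    # The "algorithm": patrol wherever the most past reports came from.
    patrolled = max(reports, key=reports.get)
    # Police can only record minor crimes where they actually patrol.
    if random.random() < true_minor_crime_rate[patrolled]:
        reports[patrolled] += 1

# Neighborhood A accumulates nearly all new reports; B's count never moves,
# even though both areas have the same underlying crime rate.
print(reports)
```

A two-report head start is enough: neighborhood B is never patrolled again, so its count is frozen, and the growing gap looks to the model like confirmation that A is the high-crime area.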

O’Neil offers some solutions to the problem, though not to the WMD itself. She suggests studying the effectiveness of prisons, specifically the length of imprisonment and the psychological effects of solitary confinement. She believes that identifying crucially ignored data like this could help keep inmates from returning to prison and help end the vicious cycle of crime for an individual. The last example O’Neil used was about a police force that attempted to build trust with its community by making arrest the absolute last resort. She concluded that this seemed extremely effective; however, it would be difficult to translate into an algorithm, since trust is not exactly quantifiable right now.

My biggest takeaway from this chapter is the realization of how much trust we put into these algorithms without further testing them or gathering feedback on their results. It seems like once they appear to do the job well, we let them run free without first looking for the potential consequences they could bring.
