Companies such as Facebook have begun using algorithms and historical data to predict which of their users might commit crimes. Illustration: Noma Bar
The police have a very bright future ahead of them – and not just because they can now look up potential suspects on Google. As they embrace the latest technologies, their work is bound to become easier and more effective, raising thorny questions about privacy, civil liberties, and due process.
For one, policing is in a good position to profit from "big data". As the costs of recording devices keep falling, it's now possible to spot and react to crimes in real time. Consider Oakland, California. Like many other American cities, it is now covered with hundreds of hidden microphones and sensors, part of a system known as ShotSpotter, which not only alerts the police to the sound of gunshots but also triangulates their location. Once a human operator verifies that the noises are actual gunshots, the police are informed.
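The triangulation rests on a simple principle: a bang reaches each microphone at a slightly different moment, and those differences in arrival time pin down the source. Here is a minimal sketch of that idea; the sensor layout, the crude grid search, and all the numbers are illustrative assumptions, not the workings of the real system:

```python
# A toy time-difference-of-arrival locator, the principle behind gunshot
# detectors such as ShotSpotter. Everything here is an illustrative sketch.
import math

SPEED_OF_SOUND = 343.0  # metres per second in air at roughly 20C

def locate(sensors, arrival_times, extent=500, stride=10):
    """Return the grid point whose implied emission times agree best.

    sensors: list of (x, y) microphone positions in metres
    arrival_times: the time each microphone heard the bang, in seconds
    """
    best, best_spread = None, float("inf")
    for gx in range(0, extent, stride):
        for gy in range(0, extent, stride):
            # Each sensor implies an emission time for this candidate point;
            # at the true source, all the implied times coincide.
            implied = [t - math.dist((gx, gy), s) / SPEED_OF_SOUND
                       for s, t in zip(sensors, arrival_times)]
            spread = max(implied) - min(implied)
            if spread < best_spread:
                best, best_spread = (gx, gy), spread
    return best

# Three microphones hear a shot fired at (120, 340) at t = 10s:
sensors = [(0, 0), (400, 0), (0, 400)]
times = [10.0 + math.dist((120, 340), s) / SPEED_OF_SOUND for s in sensors]
print(locate(sensors, times))  # recovers a point at or near (120, 340)
```

Real systems replace the grid search with proper multilateration and filter out fireworks and backfiring cars first, but the geometry is the same.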
It's not hard to imagine ways to improve a system like ShotSpotter. Gunshot-detection systems are, in principle, reactive; they might help to thwart or quickly respond to crime, but they won't root it out. The decreasing costs of computing, considerable advances in sensor technology, and the ability to tap into vast online databases allow us to move from identifying crime as it happens – which is what ShotSpotter does now – to predicting it before it happens.
Instead of detecting gunshots, new and smarter systems can focus on detecting the sounds that have preceded gunshots in the past. This is where the techniques and ideologies of big data make another appearance, promising that a greater, deeper analysis of data about past crimes, combined with sophisticated algorithms, can predict – and prevent – future ones. This is a practice known as "predictive policing", and even though it's just a few years old, many tout it as a revolution in how police work is done. It's the epitome of solutionism; there is hardly a better example of how technology and big data can be put to work to solve the problem of crime by simply eliminating crime altogether. It all seems too easy and logical; who wouldn't want to prevent crime before it happens?
Police in America are particularly excited about what predictive policing – one of Time magazine's best inventions of 2011 – has to offer; Europeans are slowly catching up as well, with Britain in the lead. Take the Los Angeles Police Department (LAPD), which is using software called PredPol. The software analyses years of previously published statistics about property crimes such as burglary and automobile theft, breaks the patrol map into 500ft-by-500ft zones, calculates the historical distribution and frequency of actual crimes across them, and then tells officers which zones to police more vigorously.
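The core move – bin past incidents into fixed-size cells and rank the cells by frequency – takes only a few lines to sketch. This is an illustration of frequency-based zoning in general, not PredPol's proprietary model, and the coordinates are invented:

```python
# Illustrative sketch of ranking patrol zones by historical incident counts.
# Not PredPol's actual algorithm; cell size and data are assumptions.
from collections import Counter

CELL = 500  # feet on a side, roughly the zone size reported for PredPol

def hot_zones(incidents, top=3):
    """incidents: list of (x, y) coordinates in feet.
    Returns the most incident-dense grid cells, hottest first."""
    counts = Counter((x // CELL, y // CELL) for x, y in incidents)
    return [cell for cell, _ in counts.most_common(top)]

# Five past burglaries, clustered in two areas of the map:
burglaries = [(120, 80), (130, 90), (900, 950), (140, 60), (910, 920)]
print(hot_zones(burglaries, top=2))  # -> [(0, 0), (1, 1)]
```

The real product layers decay weights and near-repeat effects on top of raw counts, but the output is the same in kind: a ranked list of boxes for officers to drive through.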
It's much better – and potentially cheaper – to prevent a crime before it happens than to come late and investigate it. So while patrolling officers might not catch a criminal in action, their presence in the right place at the right time still helps to deter criminal activity. Occasionally, though, the police might indeed disrupt an ongoing crime. In June 2012 the Associated Press reported on an LAPD captain who wasn't so sure that sending officers into a grid zone on the edge of his coverage area – following PredPol's recommendation – was such a good idea. His officers, as the captain expected, found nothing; however, when they returned several nights later, they caught someone breaking a window. Score one for PredPol?
Trials of PredPol and similar software began too recently to speak of any conclusive results. Still, the interim results look quite impressive. In Los Angeles, five LAPD divisions that use it in patrolling territory populated by roughly 1.3m people have seen crime decline by 13%. The city of Santa Cruz, which now also uses PredPol, has seen its burglaries decline by nearly 30%. Similar uplifting statistics can be found in many other police departments across America.
Other powerful systems that are currently being built can also be easily reconfigured to suit more predictive demands. Consider the New York Police Department's latest innovation – the so-called Domain Awareness System – which syncs the city's 3,000 closed-circuit camera feeds with arrest records, 911 calls, licence plate recognition technology, and radiation detectors. It can monitor a situation in real time and draw on a lot of data to understand what's happening. The leap from here to predicting what might happen is not so great.
If PredPol's "prediction" sounds familiar, that's because its methods were inspired by those of prominent internet companies. Writing in The Police Chief magazine in 2009, a senior LAPD officer lauded Amazon's ability to "understand the unique groups in their customer base and to characterise their purchasing patterns", which allows the company "not only to anticipate but also to promote or otherwise shape future behaviour". Thus, just as Amazon's algorithms make it possible to predict what books you are likely to buy next, similar algorithms might tell the police how often – and where – certain crimes might happen again. Ever stolen a bicycle? Then you might also be interested in robbing a grocery store.
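The Amazon technique the officer was praising is, at bottom, item-to-item co-occurrence: people who did X also tended to do Y. A toy version, with invented offence histories standing in for shopping baskets, shows how mechanical the inference is:

```python
# A toy "people who did X also did Y" recommender, the flavour of technique
# the LAPD officer admired in Amazon. All names and data here are invented.
from collections import defaultdict
from itertools import combinations

def co_occurrence(histories):
    """Count how often each pair of items appears in the same history."""
    pairs = defaultdict(lambda: defaultdict(int))
    for items in histories:
        for a, b in combinations(set(items), 2):
            pairs[a][b] += 1
            pairs[b][a] += 1
    return pairs

def also_linked_to(pairs, item):
    """Return the item most often seen alongside the given one."""
    linked = pairs.get(item, {})
    return max(linked, key=linked.get) if linked else None

offender_histories = [
    ["bicycle theft", "shoplifting"],
    ["bicycle theft", "shoplifting", "burglary"],
    ["burglary", "car theft"],
]
print(also_linked_to(co_occurrence(offender_histories), "bicycle theft"))
# -> shoplifting
```

Note that the output is nothing more than a correlation in past records; the system has no idea why the two offences co-occur, which is precisely the worry the next paragraph raises.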
Here we run into the perennial problem of algorithms: their presumed objectivity and quite real lack of transparency. We can't examine Amazon's algorithms; they are completely opaque and have not been subject to outside scrutiny. Amazon claims, perhaps correctly, that secrecy allows it to stay competitive. But can the same logic be applied to policing? If no one can examine the algorithms – which is likely to be the case as predictive-policing software will be built by private companies – we won't know what biases and discriminatory practices are built into them. And algorithms increasingly dominate many other parts of our legal system; for example, they are also used to predict how likely a certain criminal, once on parole or probation, is to kill or be killed. Developed by a University of Pennsylvania professor, this algorithm has been tested in Baltimore, Philadelphia and Washington DC. Such probabilistic information can then influence sentencing recommendations and bail amounts, so it's hardly trivial.