The hazards of policing by algorithm

The 2002 science fiction action film Minority Report, based on a short story by Philip K. Dick, author of The Man in the High Castle, featured a form of policing able to predict with certainty who would commit murder. As depicted in the film, the system's deployment in Washington, DC reduces the homicide rate to zero and encourages federal officials to consider expanding it nationwide. There is only one problem: the system has been manipulated for political and criminal ends, sweeping up the inconveniently innocent along with the guilty.

It turns out that “big data” and elaborate algorithms bring us closer to Dick’s pre-crime prosecution than we imagined, and they help undermine the presumption of innocence in the name of crime prevention. Law enforcement agencies across the country are increasingly turning to large-scale data mining programs and algorithms known as intelligence-led policing (ILP), which purportedly help departments predict not only where crime might occur but also the identity of potential criminals.

ILP builds on principles first applied in the 1990s through “CompStat” and other data-driven policing practices. CompStat, credited with helping solve New York City’s crime problems under then-Mayor Rudy Giuliani, directed police resources to criminal “hot spots” so that individuals engaged in chronic property and violent crimes could be arrested in the act. The program, replicated in jurisdictions across the country, aimed to improve public safety through data-informed policing practices that disrupt criminal behavior.

ILP sounds a lot like CompStat, but it isn’t. Rather than using aggregated data to direct police to areas of high criminal activity, ILP identifies individuals who, based on their criminal record, socio-economic status, neighborhood, social media activity, and other factors, are judged to pose an increased risk to the community. These individuals, including minors, are then subjected to heightened police scrutiny that in some cases borders on, or becomes, actual harassment. This goes beyond CompStat by monitoring not for crime itself but for the threat of crime posed by specific people, as inferred from the analysis of large data sets.

What does intelligence-led policing look like in practice?

The Tampa Bay Times conducted an investigation into the Pasco County Sheriff’s Office’s use of ILP to intensify surveillance of individuals deemed to be at higher risk of committing crime. The surveillance was accompanied by frequent, usually unannounced, visits from officers and by increased fines and arrests for minor infractions, such as allowing a teenager to use nicotine at home or keeping chickens in a backyard. A number of people targeted by ILP, along with their families, found the attention so harassing that they chose to leave the county entirely to escape police surveillance. Even if the strategy works as hoped, it seems clear that the point is not to prevent crime but to move potential crime elsewhere.

The police department of Fresno, California, has gone further, embedding intelligence practices into the city’s 911 system. When an emergency call is received, operators consult a program that gives them a “threat assessment” for the address and its residents in order to prepare officers and other first responders for potential problems. Residents are almost always unaware of the threat level assigned to them, which increases the likelihood of misunderstandings and confrontations between them and the police. A Fresno city councilor who asked for his own threat assessment at an open hearing learned that while he personally was rated “green” (low risk), his home address received a “yellow” (medium-risk) rating. Department officials could not say for certain why the councilor’s address posed a higher risk, but speculated that it might have something to do with the actions of a previous resident.

How many people know that the terminology or images they use on popular sites like Facebook and Instagram may influence police attitudes and behavior towards them without their knowledge or consent?

In addition to the potential for abuse, error, and more aggressive policing practices, ILP raises fundamental questions of fairness in the way it is applied, as well as serious constitutional questions of due process and the presumption of innocence.

The first question to address is whether such programs are even understandable to the police forces that use them. As has been observed about artificial intelligence generally, it is often difficult even for the programmers of sophisticated algorithms to understand how these systems arrive at their conclusions. If the programmers themselves do not know (and they often decline to share what they do know, to protect proprietary information), the police users of the output may be unable to understand, let alone independently assess, the validity and reliability of the reports they receive.

This challenge is reminiscent of the wider debate about the use of algorithmic risk assessment programs in the criminal justice system. Civil liberties watchdogs and advocates of criminal justice reform have argued that such programs are inevitably biased by race, neighborhood, and class, leading to skewed decisions about whether an accused is incarcerated and how long the convicted are sentenced. ILP raises the stakes in this debate. Potential biases tied to individual characteristics, criminal history, and other factors are extended not to decisions about actual crimes (such as pretrial or sentencing determinations) but to crimes that have yet to be, and may never be, committed.

Criminologists sometimes describe the criminal justice system as “sticky”: once you have a criminal record, the chance increases that you will encounter the police and the courts again, whether for parole or probation violations or for minor offenses that can send you back to jail. A criminal record also makes it far more difficult to get a job or find stable housing. The criminal justice system thus becomes part of a self-fulfilling prophecy of future offending. ILP may become another strand in this increasingly dense web of law enforcement contact, contributing to recidivism and re-incarceration.

Another problem with ILP is that citizens are not informed about the breadth and volume of the data that is “scraped” from public sources and social media accounts and then used to build risk profiles. How many people know that the terminology or images they use on popular sites like Facebook and Instagram may inform police attitudes and behavior toward them, without their knowledge or consent, and without any opportunity to challenge the inferences drawn? Negative data, or inferences from that data, threaten to become a kind of extrajudicial indictment, sweeping up unsuspecting citizens and exposing them to higher-risk encounters with law enforcement in the future, all without their knowledge.

Much has been written about how Chinese Communist Party leaders use big data and video surveillance to develop and apply “social credit scores,” which gauge ideological and behavioral reliability and distribute social and economic privileges accordingly. It is worth asking whether ILP shares some characteristics of that system, subjecting individuals to a kind of predictive analysis meant to tell us who is likely to commit a crime and who is not. Such a system threatens to turn core elements of American constitutional liberty, such as the presumption of innocence and due process, on their heads: individuals with criminal records risk being treated as suspects for crimes they have not committed (and may never commit), while being denied any opportunity to understand where and how that designation arose or to challenge their risk status.

Given these potential pitfalls, it is a wonder that more questions have not been raised about ILP and its implications for liberty and justice. I suspect the answer lies in two factors. First, law enforcement occupies a privileged position in American society as the “thin blue line” standing between civilization and chaos. What law enforcement asks for, it generally receives, and lately it has been asking for resources that make it look more and more like the military, with ever-increasing levels of armament. ILP is another technique adapted from military and foreign intelligence practice that further blurs the line between a security apparatus defending the nation against foreign attackers and a police force mandated to serve and protect. It is another step down a path on which citizens come to be treated as enemies.

Second, the reality is that ILP is deployed primarily as a tool against the worst crimes, and the communities most affected by such crimes are largely low-income and minority communities. If ILP abuses occur, they are most likely occurring in areas far from the oversight and concern of the majority of Americans. This makes it yet another problem that affects “others” rather than ourselves, and one that compounds existing inequities in communities that already bear the brunt of discrimination, social disadvantage, over-policing, and excessive incarceration. If we want a better understanding of the kinds of practices that exacerbate tensions between police and communities, an investigation into the use of ILP would be a good place to start.
