Publications

Most recent publications

‘I’m Glad That Was On Camera’ by A Sandhu in Policing and Society 2017

This article presents the central findings of the Police on Camera project and offers insight into the strategic orientation many research participants expressed when asked about policing on camera.

Camera-Friendly Policing by A Sandhu in Surveillance & Society 2016

This article argues that instead of engaging in counter-surveillance, police officers allow themselves to be recorded and engage in what I call “camera-friendly policing,” which involves efforts to control how they are perceived while video recorded.

Policing on Camera by A Sandhu and K Haggerty in Theoretical Criminology 2015

This article outlines the findings of one of the first studies examining how police understand and respond to cameras and photographers.

Other Publications

High-Visibility Policing by A Sandhu and K Haggerty in Oxford Handbooks Online 2015

This article analyses the situation surrounding police visibility and questions the extent to which videos of the police are producing uniformly negative outcomes for them.

Private Eyes by A Sandhu and K Haggerty in Handbook of Private Security Studies 2015

This article analyses the extent to which videos of the police are producing uniformly negative outcomes for them. As co-authors, Kevin Haggerty and I shared all duties.

The Police’s Crisis of Visibility by K Haggerty and A Sandhu in IEEE Technology & Society 2014

This article discusses the increasingly fraught relationship between police officers and the cameras that record them.

Boxing

Much needed stress relief after a very tough few months. Haven’t boxed in years. Nothing calms my soul more. Everything else – all the essays, the reviews, the data collection, the editing, the lectures, the students – somehow fades into the background. All there is, is your fists and the target. In many ways, this is what got me through my last few years as a PhD student. It was one of the few ways I remembered to give my mind a rest.

Amnesty International’s Tanya O’Carroll on the ‘nothing to hide, nothing to fear’ argument

I recently interviewed Tanya O’Carroll, a Technology and Human Rights advisor at Amnesty International, to discuss government surveillance and its impact. I framed our discussion around the most common response researchers studying surveillance receive from the public: the “nothing to hide” argument. The nothing to hide argument alleges that government surveillance programs serve a security purpose and should not be opposed by innocent people. This blog outlines O’Carroll’s thoughts about the nothing to hide argument and its flaws, the importance of privacy rights, and the ‘encryption mentality’ that she thinks should replace the nothing to hide argument.

Explaining the Nothing to Hide Perspective

The tone of O’Carroll’s work changed radically in June 2013, when Edward Snowden revealed the extent of the US and UK governments’ surveillance program. “I remember it literally almost like a war room straight afterwards,” O’Carroll recalled, “It was time to massively beef up our work, to ask ‘what are we doing to tackle technology as a human rights issue.’” Since then, O’Carroll’s work has attempted to raise awareness about the “dark side” of technology by examining how digital and online technologies in particular are used to conduct bulk surveillance.

Unfortunately, O’Carroll faces a significant barrier: the public have dismissed arguments exposing the harms of bulk surveillance, stating that they have “nothing to hide” and therefore “nothing to fear” from government monitoring. “That’s the first thing that most people say to us,” explained O’Carroll, “all of the comments over and over again are saying ‘yeah, okay, fine, it might be an abstract violation of this abstract right called the right to privacy, but, if I’m not doing anything wrong, why do I really care who’s looking at my completely boring and mundane text messages and emails.’”

O’Carroll speculates that the nothing to hide perspective is born of a lack of information about how surveillance data is used to control us. Most of us do not realise, she explained, that the employment opportunities we receive, the insurance prices we pay, and the treatment we receive from police are increasingly the result of decisions made by algorithmic analyses of surveillance data. Most of us also do not realise, O’Carroll added, that these decisions can facilitate discriminatory hiring practices, exploitative pricing, and invasive police monitoring (see Weapons of Math Destruction by Cathy O’Neil). Even the political information we receive can be determined by algorithms which personalise the political news that appears on our websites. The result “…doesn’t look like propaganda,” O’Carroll clarified, “it doesn’t look like a big democrat or republican or Brexit campaign poster on the side of the street. It looks like a post on your Facebook feed that is directly appealing to your tastes and interests.” Despite their seemingly harmless look, O’Carroll continued, these Facebook posts can have a significant influence on our political leanings and voting behaviour by creating “filter bubbles” that reinforce pre-existing biases and political polarisation.

Unfortunately, O’Carroll admitted, few consider such issues. There is a lot of “terrorism theatre,” she explained, which tells the public that regulations limiting bulk surveillance can undermine their safety and security. As a result, the public can be quite passive in the face of government surveillance programs. This may also be a consequence of “…a failing of some of us digital and privacy advocates,” O’Carroll added, “we’ve been so stuck in the sort of abstract or theoretical debate about privacy that we’ve failed to communicate its importance to people.”

The Importance of Privacy

When considering the value of privacy, O’Carroll added, it is important to remember the circumstances of those who face a disproportionate amount of surveillance. “The big eye in the sky is not aimed equally at everyone,” she explained, “I say this to my friends and my family every time the debate comes up. I defend [privacy rights] not just for myself. I defend [privacy rights] because there are other individuals who are unfairly treated as suspicious by the state.” The value of privacy, then, can be found in how it serves those who are the most disadvantaged among us, those most likely to be targeted by intrusive surveillance. To argue that surveillance is harmless and should be tolerated is a privileged position which ignores the experiences of the disadvantaged. “I think it is the time to put a battle cry out for privacy again,” O’Carroll concluded, “[…] it is the time for us to really stand up for the right to privacy.”

Encryption Mentalities

Standing up for the right to privacy can involve changing how we vote, joining pro-privacy protests, and/or writing to our local political representatives. However, it need not be so formal. Standing up for privacy rights can also involve changing our everyday behaviour by obstructing government surveillance. According to O’Carroll, this means developing a new mentality to replace the nothing to hide perspective.

To illustrate this, O’Carroll reflected on the mentality, which she later called an “encryption mentality,” that she’s developed since the Snowden revelations. She started by offering an analogy concerning how our attitudes about the safety of seatbelts have developed over time. “We’ve evolved to understand that if you walk out in front of a moving large piece of metal, also known as a bus, it is bad. So, you don’t want to do that. We didn’t necessarily evolve the mentality that when we are in a car, a seatbelt helps protect us. It’s a more abstract idea, because you can sit in a car and feel quite comfortable not wearing a seatbelt. We have to hammer it into our head, combined with law, that we have a better chance of surviving if we are wearing a seatbelt.” Over time, O’Carroll explained, “people develop a feeling that when you get into a car, if you don’t wear your seatbelt, you feel exposed.”

Similarly, O’Carroll continued, she has developed a feeling of being exposed when she does not encrypt her emails or texts. “I have reached a point where I feel like I’m not wearing my seatbelt when I’m not encrypting things. I think I probably use end-to-end encryption for 20 to 30 percent of all of my communications now. It’s second nature for me to encrypt.” This is the mentality that she thinks should replace the nothing to hide argument. “I think that is where we need to get to as a society, so it becomes second nature, like wearing a helmet on a bike, or a seatbelt in your car. You don’t have to do it all the time, but you start to want to and you feel safer when you do.”

To help hammer home her message about the harms of surveillance and the importance of encryption, O’Carroll is working on a project which challenges governments’ claims that surveillance is not a human rights concern. “We are set to show that there is harm and that it is not just that the right to privacy is violated…it can also be discrimination. Not everybody is equal in the eyes of surveillance, and it disproportionately impacts certain communities. So, we are looking specifically at the impacts of mass surveillance programs in the police and security sectors and the impacts that those powers […] on already over-policed communities.”

You can find out more about Tanya O’Carroll’s work about the human rights concerns raised by surveillance and big data by following her on Twitter.

“People just don’t get it”: an interview with Kade Crockford of the ACLU of Massachusetts about why surveillance issues aren’t getting the attention they deserve

The precarious state of privacy often fails to stir public attention. For example, the Investigatory Powers Act (IPA), a piece of legislation granting police and intelligence agencies sweeping surveillance powers in the UK, is said to have passed into law “with barely a whimper.” What explains this lukewarm response? How does the US install bulk surveillance programs like Total Information Awareness (TIA), or the UK pass privacy-threatening bills like the IPA (sometimes called the “snooper’s charter”), without receiving the level of attention that one might expect from a society which claims to value privacy rights?

To help answer this question, I spoke to Kade Crockford, the director of the Technology for Liberty Program at the American Civil Liberties Union of Massachusetts (ACLUM). I spoke to Crockford because of her expert knowledge on issues related to privacy, security, and surveillance as well as her recent experience leading a campaign against the Boston Police Department’s plan to buy social media spying software. Crockford played a central role in the pro-privacy advocacy which likely encouraged the Boston PD to scrap their plans. I thought that Crockford could offer insights into why surveillance practices aren’t earning a critical response and how to reverse this trend.

Early in our conversation, Crockford acknowledged that the public is quite passive in their response to surveillance bills and privacy violations. According to Crockford, this passivity comes in two parts: (1) fears of crime and terrorism and (2) a failure to realise the role that surveillance plays in our daily experiences and entitlements.

  1. Fear of crime and terrorism

When I asked her why the public doesn’t seem especially concerned about threats to their privacy, Crockford suggested that I compare those threats with other more “visceral” threats such as crime and terrorism. “It is really obvious to people that they don’t want to be murdered, because you can think about yourself dead on the ground, or you don’t want to be robbed because you like your stuff, and it’s tangible and right in front of you.” These fears, Crockford argued, keep citizens preoccupied with crime and terrorism, and tolerant of surveillance, which is often thought of as a tool used to keep them safe. “Nobody is going to object to [surveillance],” Crockford explained, “because [nobody] wants to be killed.”

Crockford was, unsurprisingly, critical of this fearful point of view. “The reality is that terrorism is exceedingly rare to the point where it is non-existent in the vast majority of places in this country […]. It is just an infinitesimally small risk of dying in a terrorist attack.” According to Crockford, this suggests that the effect, and perhaps purpose, of state surveillance has less to do with anti-terrorism and more to do with “what [law enforcement agencies] have always done, which is to harass and incarcerate black and brown people typically because of drug crimes.”

  2. The impact of surveillance on daily life

The second reason threats to privacy receive only a lukewarm response concerns a lack of awareness about the power and impact of knowledge-production through surveillance. “Most people just go about their ordinary lives not really thinking about how much knowledge is power. It’s cliché, we say knowledge is power… but people don’t actually understand what that means… they don’t actually know what can happen when you amass [sensitive knowledge] about other people.”

To illustrate her point, Crockford referred to the passive attitude most people take to data about their online shopping. “People often say I don’t really care if Google monitors me because, yeah it’s a little weird when I go to this one website and I look for a pair of shoes and then I go to another website and the shoes have followed me all around the internet. Yeah, that’s a little weird, but it’s just shoes. What people don’t realise is that that same process that is used to collect information about what kinds of shoes you want and then target you with advertising, can also be used to narrow, constrict, and effectively control choices that you would be much more freaked out about.”

“For example,” Crockford continued, “political choices that you are making, choices that are determining what University you are going to, or if you are going to go to University instead of going into vocational training, choices related to what kinds of credit score you are going to get or whether or not you will be approved for a home mortgage loan. These are the kinds of things that people simply do not understand. These decisions are made by other people for them, based on information that is collected about them.” Crockford was referring to the way that surveillance data is used to categorise people, and grant or deny them access to opportunities ranging from the educational and occupational to the economic and political. For instance, our access to political information is increasingly shaped by algorithms which collect data about our online history and then influence our newsfeeds, which may have a serious impact on political opinion and democratic elections.

Despite the powerful role that data can play in integral parts of daily life, Crockford explained that most of us aren’t aware that any of this is going on: surveillance is conducted without its traditional, obvious hallmarks, such as binoculars or cameras. Instead, our information is collected from a distance, without consent, and through digital processes. All we see is “what’s in front of you on Google.com.”

Crockford confirms many of the concerns that surveillance scholars have raised in recent work about the developing surveillance societies around the world: the loss of privacy is becoming a norm, and a lack of awareness about the resulting impacts means citizens are not voicing much concern. Education about the benefits and risks of surveillance is much needed.

See: https://hrcessex.wordpress.com/2017/07/13/people-just-dont-get-it-an-interview-with-kade-crockford-of-the-aclu-of-massachusetts-about-why-surveillance-issues-arent-getting-the-attention-they-deserve/#more-1517

 

The Police’s Data Visibility Part 2: The Limitations of Data Visibility for Predicting Police Misconduct

In part 1 of this blog, I suggested that raising the police’s data visibility may improve opportunities to analyse and predict fatal force incidents. In doing so, data visibility may offer a solution to problems related to the high number of fatal force incidents in the US. However, data visibility is not without limitations. Problems including the (un)willingness of data scientists and police organisations to cooperate, and the (un)willingness of police organisations to institute changes based on the findings of data scientists’ work, must be considered before optimistically declaring data visibility a solution to problems related to fatal force. In this blog, I discuss two additional limitations of data visibility: low-quality data and low-quality responses to early intervention programs. Both are problems related to the prediction and intervention stages of using data to reduce fatal force incidents. Future blogs can discuss issues related to the earlier stages of using data to reduce fatal force incidents, such as the collection and storage of data about police work.

Low-quality data: As they are still relatively new technologies, it is hard to assess algorithms designed to predict fatal force. However, we can learn about the limitations of these algorithms by drawing on research about the police’s attempts to use algorithms to predict and pre-empt crime. “Predictive policing” adapts digital data about previous crimes to predict where crime is most likely to occur in the future and who is most likely to engage in criminal behaviour. Despite police departments’ recent and rapid adoption of predictive policing software such as PredPol, the effectiveness of predictive policing has been subject to critique for several reasons. Among these reasons are concerns about the low accuracy of the data fed to predictive policing software. This “input data” has been described as inaccurate and incomplete due to systemic biases in police work. For example, police officers’ tendency to focus on certain types of crime, certain types of spaces, and certain social groups, while leaving other crimes and other spaces unaddressed, creates unrepresentative data suggesting problematic correlations between impoverished spaces, racial groups, and crime. When analysed by predictive software, this biased data is likely to produce predictions which contain both false positives and false negatives. Accordingly, if high-quantity but low-quality input data is used to predict fatal force incidents, similar problems may arise. For example, input data which is inaccurate, incomplete, or skewed (the result of police organisations’ failure to accurately document police work, especially use of force incidents) may produce inaccurate calculations from predictive software. These inaccurate predictions may then lead early intervention programs to target low-risk officers while neglecting high-risk officers.
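To make the data-quality problem concrete, here is a minimal sketch of how under-documented use of force incidents can produce false negatives in a naive early-warning flag. The officer records, field names, and threshold below are invented for illustration and are not drawn from any real department’s data or system.

```python
# Illustrative only: invented records showing how incomplete documentation of
# use-of-force (UOF) incidents produces false negatives in a naive risk flag.

RISK_THRESHOLD = 3  # hypothetical cut-off: 3+ UOF incidents in a year

officers = [
    # (officer_id, true UOF incidents, UOF incidents actually documented)
    ("A101", 5, 2),   # under-documented: only 2 of 5 incidents recorded
    ("B202", 1, 1),   # fully documented
    ("C303", 4, 4),   # fully documented
]

def flag_high_risk(documented_uof: int) -> bool:
    """Naive 'early warning' rule that only ever sees the documented data."""
    return documented_uof >= RISK_THRESHOLD

for officer_id, true_uof, documented_uof in officers:
    flagged = flag_high_risk(documented_uof)
    should_flag = true_uof >= RISK_THRESHOLD
    if should_flag and not flagged:
        print(f"{officer_id}: FALSE NEGATIVE (true={true_uof}, recorded={documented_uof})")
    else:
        print(f"{officer_id}: flagged={flagged} (true={true_uof}, recorded={documented_uof})")
```

In this toy example, the officer with the most true incidents is never flagged because most of those incidents were never recorded; more sophisticated predictive models inherit the same blind spot from the same incomplete records.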

Low-quality response: In addition to concerns about their ability to accurately identify high-risk officers, there are several concerns about the practicalities of early intervention programs. For instance, there are reasons to believe that, even if high-risk officers are accurately identified by data scientists, interventions will not be taken seriously. A police department in New Orleans, for example, faced difficulties persuading its officers to take interventions seriously after they began to mock data collection efforts and even treated being flagged as high-risk by an algorithm as a “badge of honour.” Some officers began to refer to interventions such as re-training programs as “bad boy school” and saw inclusion as a matter of pride rather than something to be taken seriously. The problems with getting officers to take interventions seriously suggest that even if data scientists can construct an algorithm which accurately flags high-risk officers, there is no guarantee that ensuing attempts to improve police behaviour will be effective, especially if police are unwilling to accept interventions. Furthermore, even if officers do not ridicule interventions, there is no guarantee that interventions will receive the support from police organisations that may be required. For example, studies show that interventions often suffer from administrative neglect and delays, and can be error-ridden and sloppy, leading to a failure to transform the organisational and social culture of a police department.

Conclusion

By raising police officers’ data visibility, police organisations, with the help of data scientists, can engage in comprehensive analysis of fatal force incidents and produce programs designed to identify high-risk officers and successfully intervene through re-training, counselling, or substantive changes to use of force policy. However, several unknowns play a key role in determining the implications of data visibility and predictive analytics, including the inclusion/exclusion of data, false positives/negatives, and the social forces which determine whether interventions will be taken seriously by officers. Each of these unknowns requires detailed study before trying to walk a logical pathway from data visibility to a reduction in fatal force incidents.

Originally posted via the HRBDT Blog. 

The Police’s Data Visibility Part 1: How Data Can Be Used to Monitor Police Work and How It Could Be Used to Predict Fatal Force Incidents

The Counted, Fatal Force, and Mapping Police Violence websites each collect, store, and display data about people killed by police in the United States. These websites are just a few of the emerging platforms designed to address the significant gap in information left by US police organisations’ failure to create, maintain, and publicly disclose data about “fatal force” incidents. When visiting any of the three websites mentioned above, visitors can access in-depth statistics, charts, graphs, and maps, which provide details about the number of fatal force incidents that have occurred, their locations, the identity of officers involved, and the demographics of victims. The availability of this information has prompted questions about if and how digital data can address persistent problems related to a lack of transparency and accountability in policing, and the lack of information about fatal force incidents:

  • Can data enable new opportunities to scrutinize fatal force incidents?
  • Can data provide an opportunity to discover trends associated with fatal force incidents?
  • Can data analysis provide the police with the knowledge required to reduce fatal force incidents?

This two-part blog focuses on the last question by considering the opportunities and limitations of using digital data to monitor police work, document fatal force incidents, and create intervention programs designed to reduce fatal force incidents. 

Police Visibility and Dataveillance  

Recent literature suggests that police officers are among the most extensively monitored subjects in today’s surveillance society. To understand the nature of the surveillance of police officers, scholars have built a taxonomy which splits “police visibility” into three subcategories: primary, secondary, and new visibility. Unfortunately, this taxonomy focuses exclusively on the visual contributions to police visibility made by cameras. To understand the contributions which digital data make to police visibility, I propose expanding the taxonomy of police visibility by exploring how non-visual forms of surveillance are used to monitor police. Examples of non-visual surveillance targeting police include “dataveillance” mechanisms which collect data about police officers’ emails, radio communications, internet browsing history, and movements, as well as information from police databases documenting traffic stops, fines, arrests, use of force incidents, and more. Because of the variety of ways that dataveillance can be used to monitor police officers, officers can be described as having a high “data visibility,” referring to the perceptibility of police officers and police work resulting from the production and brokering of associated data.

To nuance this blog’s conversation about the police’s data visibility, I focus on the impact of dataveillance mechanisms which document fatal force. More specifically, I focus on if and how dataveillance can be used to analyse and reduce fatal force incidents. Fatal force is given particular attention due to high rates of fatal force incidents in the US (documented in this Amnesty International report), as well as my interest in human rights concerns relating to the balance between police powers and the right to life, liberty & security of person, privacy, and the prohibition of discrimination.

How Can Digital Data Reduce Fatal Force Incidents? 

Digital data can be used to reduce fatal force incidents in at least two ways. The first method involves using digital data to study fatal force incidents and make related changes to policy and practice. Studying fatal force is made possible once brokers (including police organisations, journalists, and “citizen journalists”) collect, share, and display data about fatal force incidents, creating opportunities for long-form analysis, including the search for patterns in the data. For instance, by enabling the filtering of data, websites like Fatal Force, The Counted, and Mapping Police Violence allow users to discover patterns such as the consistency with which incidents of police shootings involve males, the high number of incidents in which mental illness plays a role, or the consistency with which the deceased were carrying firearms. Knowledge about patterns in fatal force incidents can then be used to inform changes to use of force policies, and/or to improve training concerning interactions with persons who are mentally ill or interactions with individuals who are armed.
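As a rough illustration of the kind of filtering and pattern-searching these platforms enable, the short sketch below uses pandas on a handful of invented records; the records and column names are placeholders, though the real sites expose broadly similar fields such as gender, armed status, and signs of mental illness.

```python
# A minimal sketch of filtering fatal force records to surface simple patterns.
# All records and column names are invented for illustration.
import pandas as pd

incidents = pd.DataFrame([
    {"state": "TX", "gender": "M", "armed": "gun", "mental_illness": False},
    {"state": "CA", "gender": "M", "armed": "unarmed", "mental_illness": True},
    {"state": "CA", "gender": "F", "armed": "knife", "mental_illness": True},
    {"state": "NY", "gender": "M", "armed": "gun", "mental_illness": False},
])

# Share of incidents in which signs of mental illness were reported
share = incidents["mental_illness"].mean() * 100
print(f"{share:.0f}% of incidents involved signs of mental illness")

# Filter: incidents involving unarmed individuals, grouped by state
unarmed_by_state = incidents[incidents["armed"] == "unarmed"].groupby("state").size()
print(unarmed_by_state)
```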

The second and more proactive method of using digital data to reduce fatal force incidents involves predictive analysis and the pre-emption of fatal force. For example, based on the analysis of fatal force data, data scientists may discover early warning signs (such as poor performance during training or high numbers of complaints related to use of force) which consistently seem to precede fatal force incidents. If data scientists’ findings are reported to police organisations, those organisations can search for officers who display early warning signs and flag them as high-risk. Police organisations can then initiate early intervention programs which may include re-training or counselling high-risk officers, or perhaps temporarily reassigning high-risk officers to positions in which interactions with citizens are unlikely. Unlike primary, secondary, and new visibility, the police’s data visibility may, therefore, provide the opportunity not only to document fatal force, but to predict and pre-empt it. Note that early intervention programs would not imply that fatal force is always egregious. Rather, intervention programs would be an effort to reduce fatal force incidents as much as possible while also protecting fatal force interventions that are deemed necessary and unavoidable.
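A heavily simplified sketch of what such an early-warning classifier might look like follows. The features, training data, and review threshold are invented for illustration; this is not the method used by any actual police department or research team.

```python
# A toy early-warning classifier. Features, data, and threshold are invented
# for illustration and do not represent any real department's model.
from sklearn.linear_model import LogisticRegression

# Features per officer: [use-of-force complaints, missed training sessions]
X_train = [[0, 0], [1, 0], [5, 2], [7, 3], [2, 1], [6, 1]]
# 1 = officer was later involved in an adverse incident, 0 = was not
y_train = [0, 0, 1, 1, 0, 1]

model = LogisticRegression().fit(X_train, y_train)

# Score current officers and flag those above a review threshold
current_officers = {"officer_X": [4, 2], "officer_Y": [1, 0]}
for officer, features in current_officers.items():
    risk = model.predict_proba([features])[0][1]
    if risk > 0.5:
        print(f"{officer}: flag for early intervention (risk={risk:.2f})")
    else:
        print(f"{officer}: no flag (risk={risk:.2f})")
```

In practice, the usefulness of any such model depends entirely on the quality and completeness of the underlying records, which is the focus of part 2 of this blog.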

While the predictive potential of data about police work may sound fantastical to some, recalling images of Minority Report, data scientists are already testing algorithms designed to predict use of force incidents in the US. For example, data scientists at the University of Chicago’s Centre for Data Science and Public Policy have developed a prototype algorithm, based on data provided by the Charlotte-Mecklenburg Police Department (CMPD), to identify early warning signs that may predict adverse interactions between police and the public. Early tests show that the Centre’s algorithm has successfully identified officers who were later involved in adverse interactions with the public. Based on successful tests, data scientists have already translated their research into Flag Analytics, a company designed to commercialise the Centre’s algorithm and make it available to police organisations across the United States. Similar studies are exploring how data can be used to document and predict the effective use of stop-and-search powers to limit discriminatory decision making.

Conclusion

The above discussion demonstrates that there may be reason to be optimistic about the implications of the police’s growing data visibility for analysing and reducing fatal force incidents. However, as part 2 of this blog will suggest, there are also several limitations to be aware of before declaring data visibility a panacea for problems related to high rates of fatal force incidents.

 

Originally published via the Human Rights Centre Blog

Data Driven Policing: Highlighting Some Risks Associated with Predicting Crime

 

Workstream 2 of the Human Rights, Big Data and Technology Project (HRBDT) recently submitted a piece to the Home Affairs Committee in support of the ‘Policing for the future: changing demands and new challenges’ inquiry. The submission summarized the opportunities and risks associated with the police’s adoption of ‘data driven technologies,’ which aid the bulk collection, storage, and analysis of data about persons, places, and practices. This blog recaps the conclusion of our submission by highlighting the risks of adopting data driven technologies used to predict crimes.

Data Driven Technologies and Predicting Crime

Data driven technologies are central to daily interactions with legal, corporate, and social institutions. For example, online retailers rely on data driven technologies to collect data on consumer habits, analyse this data to discover consumer trends, and use knowledge of these trends to make predictions about future consumer behaviour. Retailers capitalise on these predictions in the form of purchase recommendations and personalised ads.

When adopted by police, data driven technologies serve a similar function by collecting crime data, analysing this data to determine crime trends, and using knowledge of these trends to make predictions about future crimes, hence the term “predictive policing.” Predictions are subsequently used to inform the allocation of police resources.

There are many types of predictive policing.[1] For example, some predictive technologies attempt to predict victims (those most likely to be victims of crime), others predict offenders (those most likely to commit crime in the future), and others predict the locations where crime is likely to occur in the future, also known as “hot spots.” A contemporary example of the latter is the Los Angeles Police Department’s use of PredPol software,[2] which uses three data points (the time, place, and type of recent crimes) to identify hot spots.
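To illustrate the general logic of place-based prediction, the sketch below bins recent incidents into grid cells and treats the highest-count cells as candidate hot spots. This is a deliberately simplified illustration, not PredPol’s proprietary model, and the incident coordinates are invented.

```python
# Simplified illustration of place-based "hot spot" prediction: bin recent
# incidents by grid cell and rank cells by count. Not PredPol's algorithm;
# coordinates are invented for illustration.
from collections import Counter

# (x, y) coordinates of recent reported incidents, already snapped to a grid
recent_incidents = [(1, 2), (1, 2), (4, 0), (1, 2), (3, 3), (4, 0)]

cell_counts = Counter(recent_incidents)

# Treat the top-ranked cells as provisional hot spots for additional patrols
for cell, count in cell_counts.most_common(2):
    print(f"grid cell {cell}: {count} recent incidents -> candidate hot spot")
```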

PredPol is now used by several police departments in the US and UK, including Kent Police. Early studies suggest PredPol may have contributed to a 6% reduction in crime rates in North Kent.[3] However, research also identifies significant risks that must be addressed when considering the adoption of predictive software.

Three Risks of Data Driven Policing

1.     Data Quality: The effectiveness of predictive software relies on the quality of input data. If input data is inaccurate, incomplete, or skewed, this will significantly affect the quality of the outputs made by predictive software. For example, deficient input data may result in false positives (no crime in alleged “hot spots”) or false negatives (crimes in areas identified as low-risk).[4] There are many reasons that data may be deficient. For example, some crimes are consistently under-reported, meaning crime data is incomplete and cannot provide algorithms with an accurate representation of crime rates.[5] Furthermore, police discretion introduces a level of subjectivity into the production of crime data, which may affect the quality of that data in terms of its depiction of crime rates.[6] The resulting crime data will present a skewed portrayal of the prevalence of crimes, and predictive software will be unable to accurately locate hot spots.[7]

2.     Discriminatory Capacities: The use of predictive software can result in discriminatory outcomes. Evidence suggests that some police officers continue to target members of marginalised groups and impoverished neighbourhoods.[8] As a result, crime data falsely suggests that crime rates are particularly high in impoverished neighbourhoods, not because they are the locations where crime is most common, but because they are the locations where police focus their patrols, overlooking similar crimes occurring in areas where there is limited police presence. Once this skewed data is introduced to predictive software, the software will notice that the data suggests most crime takes place in impoverished neighbourhoods, and predict that future crimes will follow this pattern.[9] Such predictions will funnel more police into already over-policed spaces, resulting in a self-fulfilling cycle in which the criminality of impoverished spaces is given priority by police, and crime data continues to suggest a correlation between impoverished spaces and crime. Evidence of these issues can be found in the Human Rights Data Analysis Group’s[10] study of algorithms trained on biased data, which found that such algorithms tend to make predictions encouraging the police to focus on impoverished spaces.[11] A simple simulation of this feedback loop is sketched after this list.

3.     Privacy Harms: The use of data driven technologies requires the collection of large quantities of data, raising questions about the police’s contributions to mass surveillance. Such surveillance poses significant risks, including violations of privacy rights. Without privacy, citizens are left without a retreat from which to carry out unmonitored communication and self-expression. This may result in a “chilling effect,” based on the premise that monitored citizens inhibit behaviour (related to sexuality, for example) and conceal information (related to one’s health, for example) that, if publicised, could result in social exclusion and the denial of opportunities.[12]
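The toy simulation below illustrates the feedback loop described in point 2: two neighbourhoods with identical true crime rates diverge in the recorded data once patrols are allocated in proportion to recorded crime and crimes are only recorded where police are present. All numbers are invented for illustration and the model is far cruder than any real predictive policing system.

```python
# Toy simulation of a predictive policing feedback loop. Two neighbourhoods
# have identical true crime rates, but patrols follow *recorded* crime, and
# crime is only recorded where police patrol. All numbers are invented.

true_crime_rate = {"neighbourhood_A": 10, "neighbourhood_B": 10}  # true crimes per week
recorded = {"neighbourhood_A": 5, "neighbourhood_B": 1}           # initial recording skew
patrols = 10                                                      # patrol units per week

for week in range(5):
    total_recorded = sum(recorded.values())
    for area in recorded:
        # Allocate patrols in proportion to recorded crime (the "prediction")
        share = recorded[area] / total_recorded
        area_patrols = round(patrols * share)
        # Crimes are only recorded where police are present to observe them
        observed = min(true_crime_rate[area], area_patrols * 2)
        recorded[area] += observed
    print(f"week {week + 1}: recorded crime = {recorded}")
```

Despite identical underlying crime rates, the area that starts with more recorded crime keeps attracting more patrols and therefore keeps generating more records, mirroring the self-reinforcing pattern described in the Human Rights Data Analysis Group study cited above.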

Regulation and Oversight

Addressing these socio-legal concerns will require the adoption of regulations facilitating independent assessment of crime data. It will also require that the police create oversight mechanisms which ensure that the use of data driven technologies complies with human rights law and addresses concerns about the intrusive nature of mass surveillance. Accordingly, the HRBDT recommends the creation of regulations that pay particular attention to if and how police address the limitations of crime data, and to the discriminatory capacities and privacy harms related to the use of data driven technologies.

Notes

*This blog post is based on an associated submission to the Home Affairs Committee’s Policing for the Future Inquiry, available here: link

[1] Walter L. Perry et al. ‘Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations.’ RAND Corporation. 2013 Available at: www.rand.org/content/dam/rand/pubs/research_reports/RR200/RR233/RAND_RR233.pdf

[2] PredPol.com. Available at: www.predpol.com

[3] PredPol.com Blog. 15 August 2013. Available at: www.predpol.com/predpol-slashes-crime-in-north-kent/

[4] Aaron Shapiro. ‘Reform predictive policing.’ Nature. 25 January 2017. Available at: www.nature.com/news/reform-predictive-policing-1.21338

[5] UKCrimeStats. Available at: www.ukcrimestats.com/AboutData/

[6] A. Keith Bottomley and Clive Coleman. Understanding Crime Rates: Police and Public Roles in the Production of Official Statistics. 1981.

[7] Alan Travis. ‘Police crime figures losing official status over claims of fiddling.’ The Guardian. 15 January 2014. Available at:  www.theguardian.com/uk-news/2014/jan/15/police-crime-figures-status-claims-fiddling

[8] Laurel Eckhouse. ‘Big data may be reinforcing racial bias in the criminal justice system.’ The Washington Post. 10 February 2017. Available at: www.washingtonpost.com/opinions/big-data-may-be-reinforcing-racial-bias-in-the-criminal-justice-system/2017/02/10/d63de518-ee3a-11e6-9973-c5efb7ccfb0d_story.html?utm_term=.d7933b3b28da

[9] Matt Stroud. ‘The minority report: Chicago’s new police computer predicts crimes, but is it racist?’ The Verge. 19 February 2014. Available at: www.theverge.com/2014/2/19/5419854/the-minority-report-this-computer-predicts-crime-but-is-it-racist

[10] Kristian Lum. ‘Predictive Policing Reinforces Police Bias.’ Human Rights Data Analysis Group. 10 October 2016. Available at: https://hrdag.org/2016/10/10/predictive-policing-reinforces-police-bias/

[11] Kristian Lum and William Isaac. ‘To predict and serve?’ Royal Statistical Society. 7 October 2016. Available at: onlinelibrary.wiley.com/doi/10.1111/j.1740-9713.2016.00960.x/full

[12] Daniel J. Solove. ‘Nothing to Hide: The False Tradeoff between Privacy and Security.’ London: Yale University Press. 2011.