Photography in Notting Hill

I am incredibly impressed with London. The city is larger and more diverse than I realised – and I am not just talking about ethnicity and culture. London’s architecture shifts and sways, giving each area a unique feel. Not every place feels comfortable, and you aren’t likely to leave every one with a smile (especially spaces that scream gentrification or ego) – but most places will make you feel like aiming a camera. I went to Notting Hill today which, despite being a bit pompous, is absolutely gorgeous. Here are some pictures:

‘Doxing’ and White Nationalists’ Right to Privacy

Authors: Ajay Sandhu and Danial Marciniak 

Introduction

The recent clash in Charlottesville, Virginia between rival protestors over a statue memorialising a general in the Confederate Army has raised long-debated questions about the extent to which members of hate groups – in this case white nationalists – can expect the protections of certain rights and freedoms. The most recognisable of these questions is “to what extent does the freedom of speech protect racist comments publicly stated by white nationalists?” This question has made its way to the US Supreme Court several times, including earlier this year when the justices denied the possibility of a “hate speech exception” to the First Amendment. As white nationalists have found a safe haven in the freedom of speech and support from free speech advocates, anti-racist movements have found alternative methods of trying to silence expressions of racism.

In an era of social media, these alternative methods have included “doxing”, which refers to the online collection and exposure of private and/or identifying information about white nationalists, often in an effort to critique racism, stigmatise white nationalists, and deter further expressions of white nationalism. As doxing requires the exposure of personal information online, new questions about the rights and freedoms of white nationalists have emerged: to what extent can members of hate groups expect their privacy to be respected, and what are the present and future consequences of denying white nationalists their privacy? This blog expands on these questions by considering the potential impact of anti-racist doxing campaigns, including the risky consequences of denying white nationalists their privacy.

Doxing white nationalists

In a recent article published by Broadly, Keegan Hankes of the Southern Poverty Law Center’s Intelligence Project says that many white nationalists, including some who marched in Charlottesville, are “terrified” of the consequences of expressing their views online. As Hankes puts it, white nationalists recognise that “it’s hard to get a job, hard to make a living, hard to have a normal social life when all your friends and family know you believe in ethnic cleansing.”

In an era of social media, keeping such beliefs a secret is increasingly difficult, as Facebook likes, Twitter posts, and Instagram pictures make it easy for anyone to review and scrutinise white nationalists’ beliefs, preferences, and associations. The consequences are potentially life altering, as an association with white nationalism can be costly for employment opportunities, social status, and even family connections. There are many telling examples to draw from, many of which have been reported in the aftermath of the events in Charlottesville:

  • August Cole lost his job after being identified as a member of a white nationalist group that marched in Charlottesville;
  • Peter Tefft was publicly rejected by his family (which earned favourable responses from social media users who celebrate the family’s “bravery”);
  • Nicholas Fuentes received several threats from peers for participating in the white nationalist march in Charlottesville, and subsequently left his university;
  • Christopher Cantwell, who has become infamous for his starring role in VICE media’s documentary on the white nationalist marches in Charlottesville, has been banned from a dating website.

White nationalists may have been able to stave off some of these consequences in the past by carefully managing their online profiles. According to Hankes, white nationalists are known for scrubbing images and creating alternative social media accounts in an effort to avoid the offline consequences of online racism. However, this attempt at privacy is growing more difficult as anti-racists and digital activists have launched doxing campaigns intended to expose white nationalists by publishing their personal details online. Twitter users such as @YesYoureRacist have led #MaskOff doxing campaigns by linking photos of individuals participating in white nationalist protests to their social media accounts in an effort to name, shame, and, presumably, deter expressions of white nationalism. There is little if any consideration of white nationalists’ privacy.

It is not our intention to sympathise with white nationalists or to argue that their message deserves respect in any way. This blog is our attempt to raise questions about doxing as a new method of exposing racism and the long-term implications for privacy and free expression. We argue that doxing is essentially a form of surveillance with the potential to have a chilling effect on expressions of dissent including but not limited to white nationalism. Accordingly, we suggest a deep consideration of several related questions about the impact of doxing.

Does an individual’s privacy depend on their political views?  

It seems as if the standards of privacy do not apply when doxing white nationalists, whose hateful ideology is treated as a sufficient reason to collect and expose their personal information. Some might defend this approach by arguing that those who spread hateful ideologies give up their privacy, especially when their views encourage discrimination and violence. The violence of white nationalists at the Charlottesville protests may lend support to such an argument. On the other hand, it is unlikely that the majority of white nationalists encourage violence. Accordingly, doxing non-violent white nationalists implies that privacy can be ignored when targeting individuals whose politics a doxer disapproves of. We ask whether this is a suitable standard of privacy. Does an individual’s privacy depend on their political views? If so, who are the doxers in charge of approving or disapproving of particular political views? Furthermore, how do those doxers make their decisions? While the unethical nature of white nationalism is an easy assessment in a society that values equality, such moral judgements may not be as easy when doxing individuals supporting more ethically complex politics. Accordingly, if we enable doxing against white nationalists, should it be acceptable when targeting those voicing their support for abortion, or doctors who support a patient’s right to die? How do we determine when doxing is acceptable and when it is not?

Is the anti-racist goal achieved by ignoring white nationalists’ privacy?

Doxing intends to silence its victims, either by enabling a constant stream of threats that eradicates someone’s feeling of safety, or by shaming and creating negative consequences for voicing an opinion that is widely denounced. In the latter case, doxing operates as a form of deterrence: the subjects of doxing lose their social status, and those who hold similar opinions are given an opportunity to see what awaits them if they express themselves. This form of doxing seems to be more easily available to doxers targeting the alt-right, given established campaigns which name and shame white nationalists online. Public shaming may sound like an ideal method of deterring hate speech; however, “terrifying” racists into silence may have unexpected consequences. By censoring white nationalists, doxers may inadvertently add to white nationalists’ narratives about being unfairly silenced while simultaneously reducing opportunities to publicly debate and challenge racist ideas. That is, instead of directly challenging and discrediting racist ideas, doxing has the potential to push explicit racism into hiding in less popular online forums. Racist ideas will then only be expressed when they can be sufficiently coded so as to avoid critique. We ask if this is an effective means of combating racism, or if it is better to publicly debate and challenge freely expressed racist ideas.

Does the ineffectiveness of doxing give us reason to respect white nationalists’ privacy?

Doxing is often the work of online “mercenaries” who are prone to errors such as mistaken identity. For example, Kyle Quinn of the University of Arkansas’ Engineering Research Center recently received a frenzy of online hatred after he was mistakenly identified as one of the white nationalist protestors in Charlottesville, who had been photographed wearing an “Arkansas Engineering” shirt. We ask whether the unregulated and error-prone nature of doxing gives us reason to respect the privacy of white nationalists. Further, if we support doxing, how can we ensure that cases of mistaken identity will be less likely?

How will doxing affect protests in the future?   

The three questions above need to be addressed with an eye towards emerging facial recognition technologies. These technologies are already grabbing the attention of privacy advocates, with news of a Russian app that is able to match photographs of random people on the streets with their social media profile pictures. Employing facial recognition technologies, doxers will be able to avoid all the tedious manual labour of searching for their targets’ personal information. Instead, the faces in images from white nationalist marches could be searched within seconds and published online together with reliability scores of the matches. Advanced apps could algorithmically identify an individual’s social network, either to locate more white nationalists or perhaps to warn a white nationalist’s network of their peer’s racist politics. Thus, facial recognition technologies have the potential to facilitate quicker and more efficient naming and shaming. Accordingly, we encourage a close examination of automated doxing and its implications for privacy in public spaces. We ask whether anyone should be able to find out which protests or demonstrations others have been a part of, even years later. Would this make people hesitate to take part in protests? In addition, does doxing have the potential to undermine participation in protests and expressions of free speech? If so, should we be standing against doxing and standing for white nationalists’ privacy – if for no other reason than to defend the ideal of open protests and free expression?

Publications

Most recent publications

‘I’m Glad That Was On Camera’ by A Sandhu in Policing and Society 2017

This article presents the central findings of the Police on Camera project and offers insight into the strategic orientation many research participants expressed when asked about policing on camera.

Camera-Friendly Policing by A Sandhu in Surveillance & Society 2016

This article argues that instead of engaging in counter-surveillance, police officers allow themselves to be recorded and engage in what I call “camera-friendly policing,” which involves efforts to control how they are perceived while video recorded.

Policing on Camera by A Sandhu and K Haggerty in Theoretical Criminology 2015

This article outlines the findings of one of the first studies examining how police understand and respond to cameras and photographers.

Other Publications

High-Visibility Policing by A Sandhu and K Haggerty in Oxford Handbooks Online 2015

This article analyses the situation surrounding police visibility and questions the extent to which videos of the police are producing uniformly negative outcomes for them.

Private Eyes by A Sandhu and K Haggerty in Handbook of Private Security Studies 2015

This article analyses the extent to which videos of the police are producing uniformly negative outcomes for them. As co-authors, Kevin Haggerty and I shared all duties.

The Police’s Crisis of Visibility by K Haggerty and A Sandhu in IEEE Technology & Society 2014

This article discusses the increasingly fraught relationship between the police and the cameras that now record their work.

Boxing

Much needed stress relief after a very tough few months. Haven’t boxed in years. Nothing calms my soul more. Everything else – all the essays, the reviews, the data collection, the editing, the lectures, the students – somehow fades into the background. All there is, is your fists and the target. In many ways, this is what got me through my last few years as a PhD student. It was one of the few ways I remembered to give my mind a rest.

Amnesty International’s Tanya O’Carroll on the ‘nothing to hide, nothing to fear’ argument

I recently interviewed Tanya O’Carroll, a Technology and Human Rights advisor at Amnesty International, to discuss government surveillance and its impact. I framed our discussion around the most common response researchers studying surveillance receive from the public: the “nothing to hide” argument. The nothing to hide argument alleges that government surveillance programs serve a security purpose and should not be opposed by innocent people. This blog outlines O’Carroll’s thoughts about the nothing to hide argument and its flaws, the importance of privacy rights, and the ‘encryption mentality’ that she thinks should replace the nothing to hide argument.

Explaining the Nothing to Hide Perspective

The tone of O’Carroll’s work changed radically in June 2013, when Edward Snowden revealed the extent of the US and UK governments’ surveillance program. “I remember it literally almost like a war room straight afterwards,” O’Carroll recalled, “It was time to massively beef up our work, to ask ‘what are we doing to tackle technology as a human rights issue.’” Since then, O’Carroll’s work has attempted to raise awareness about the “dark side” of technology by examining how digital and online technologies in particular are used to conduct bulk surveillance.

Unfortunately, O’Carroll faces a significant barrier: the public have dismissed arguments exposing the harms of bulk surveillance, stating that they have “nothing to hide” and therefore “nothing to fear” from government monitoring. “That’s the first thing that most people say to us,” explained O’Carroll, “all of the comments over and over again are saying ‘yeah, okay, fine, it might be an abstract violation of this abstract right called the right to privacy, but, if I’m not doing anything wrong, why do I really care who’s looking at my completely boring and mundane text messages and emails.’”

O’Carroll speculates that the nothing to hide perspective is born of a lack of information about how surveillance data is used to control our lives. Most of us do not realise, she explained, that the employment opportunities we receive, the insurance prices we pay, and the treatment we receive from police are increasingly the result of decisions made by algorithmic analyses of surveillance data. Most of us also do not realise, O’Carroll added, that these decisions can facilitate discriminatory hiring practices, exploitative pricing, and invasive police monitoring (see Weapons of Math Destruction by Cathy O’Neil). Even the political information we receive can be determined by algorithms which personalise the political news that appears on our websites. The result, “…doesn’t look like propaganda,” O’Carroll clarified, “it doesn’t look like a big Democrat or Republican or Brexit campaign poster on the side of the street. It looks like a post on your Facebook feed that is directly appealing to your tastes and interests.” Despite their seemingly harmless look, O’Carroll continued, these Facebook posts can have a significant influence on our political leanings and voting behaviour by creating “filter bubbles” that reinforce pre-existing biases and political polarisation.

Unfortunately, O’Carroll admitted, few consider such issues. There is a lot of “terrorism theatre,” she explained, which tells the public that regulations limiting bulk surveillance can undermine their safety and security. As a result, the public can be quite passive in the face of government surveillance programs. This may also be a consequence of “…a failing of some of us digital and privacy advocates,” O’Carroll added, “we’ve been so stuck in the sort of abstract or theoretical debate about privacy that we’ve failed to communicate its importance to people.”

The Importance of Privacy

Furthermore, when considering the value of privacy, O’Carroll added, it is important to remember the circumstances of those who face a disproportionate amount of surveillance. “The big eye in the sky is not aimed equally at everyone,” she explained, “I say this to my friends and my family every time the debate comes up. I defend [privacy rights] not just for myself. I defend [privacy rights] because there are other individuals who are unfairly treated as suspicious by the state.” The value of privacy, then, can be found in how it serves those who are the most disadvantaged among us, those most likely to be targeted by intrusive surveillance. To argue that surveillance is harmless and should be tolerated is a privileged position which ignores the experiences of the disadvantaged. “I think it is the time to put a battle cry out for privacy again,” O’Carroll concluded, “[…] it is the time for us to really stand up for the right to privacy.”

Encryption Mentalities

Standing up for the right to privacy can involve changing how we vote, joining pro-privacy protests, and/or writing to our local political representatives. However, it need not be so formal. Standing up for privacy rights can also involve changing our everyday behaviour by obstructing government surveillance. According to O’Carroll, this means developing a new mentality to replace the nothing to hide perspective.

To illustrate this, O’Carroll reflected on the mentality, which she later called an “encryption mentality,” that she has developed since the Snowden revelations. She started by offering an analogy concerning how our attitudes about the safety of seatbelts have developed over time. “We’ve evolved to understand that if you walk out in front of a moving large piece of metal, also known as a bus, it is bad. So, you don’t want to do that. We didn’t necessarily evolve the mentality that when we are in a car, a seatbelt helps protect us. It’s a more abstract idea, because you can sit in a car and feel quite comfortable not wearing a seatbelt. We have to hammer it into our head, combined with law, that we have a better chance of surviving if we are wearing a seatbelt.” Over time, O’Carroll explained, “people develop a feeling that when you get into a car, if you don’t wear your seatbelt, you feel exposed.”

Similarly, O’Carroll continued, she has developed a feeling of being exposed when she does not encrypt her emails or texts. “I have reached a point where I feel like I’m not wearing my seatbelt when I’m not encrypting things. I think I probably use end-to-end encryption for 20 to 30 percent of all of my communications now. It’s second nature for me to encrypt.” This is the mentality that she thinks should replace the nothing to hide argument. “I think that is where we need to get to as a society, so it becomes second nature, like wearing a helmet on a bike, or a seatbelt in your car. You don’t have to do it all the time, but you start to want to and you feel safer when you do.”

To help hammer home her message about the harms of surveillance and the importance of encryption, O’Carroll is working on a project which challenges governments’ claims that surveillance is not a human rights concern. “We are set to show that there is harm and that it is not just that the right to privacy is violated…it can also be discrimination. Not everybody is equal in the eyes of surveillance, and it disproportionately impacts certain communities. So, we are looking specifically at the impacts of mass surveillance programs in the police and security sectors and the impacts that those powers […] on already over-policed communities.”

You can find out more about Tanya O’Carroll’s work on the human rights concerns raised by surveillance and big data by following her on Twitter.

“People just don’t get it”: an interview with Kade Crockford of the ACLU of Massachusetts about why surveillance issues aren’t getting the attention they deserve

The precarious state of privacy often fails to stir public attention. For example, the Investigatory Powers Act (IPA), a piece of legislation granting police and intelligence agencies sweeping surveillance powers in the UK, is said to have passed into law “with barely a whimper.” What explains this lukewarm response? How does the US install bulk surveillance programs like Total Information Awareness (TIA), or the UK pass privacy-threatening bills like the IPA (sometimes called the “snooper’s charter”), without receiving the level of attention that one might expect from a society which claims to value privacy rights?

To help answer this question, I spoke to Kade Crockford, the director of the Technology for Liberty Program at the American Civil Liberties Union of Massachusetts (ACLUM). I spoke to Crockford because of her expert knowledge on issues related to privacy, security, and surveillance, as well as her recent experience leading a campaign against the Boston Police Department’s plan to buy social media spying software. Crockford played a central role in the pro-privacy advocacy which likely encouraged the Boston PD to scrap their plans. I thought that Crockford could offer insights into why surveillance practices aren’t earning a critical response and how to reverse this trend.

Early in our conversation, Crockford acknowledged that the public is quite passive in their response to surveillance bills and privacy violations. According to Crockford, this passivity comes in two parts: (1) fears of crime and terrorism and (2) a failure to realise the role that surveillance plays in our daily experiences and entitlements.

  1. Fear of crime and terrorism

When I asked her why the public doesn’t seem especially concerned about threats to their privacy, Crockford suggested that I compare those threats with other, more “visceral” threats such as crime and terrorism. “It is really obvious to people that they don’t want to be murdered, because you can think about yourself dead on the ground, or you don’t want to be robbed because you like your stuff, and it’s tangible and right in front of you.” These fears, Crockford argued, keep citizens preoccupied with crime and terrorism, and tolerant of surveillance, which is often thought of as a tool used to keep them safe. “Nobody is going to object to [surveillance],” Crockford explained, “because [nobody] wants to be killed.”

Crockford was, unsurprisingly, critical of this fearful point of view. “The reality is that terrorism is exceedingly rare to the point where it is non-existent in the vast majority of places in this country […]. It is just an infinitesimally small risk of dying in a terrorist attack.” According to Crockford, this suggests that the effect, and perhaps purpose, of state surveillance has less to do with anti-terrorism and more to do with “what [law enforcement agencies] have always done, which is to harass and incarcerate black and brown people typically because of drug crimes.”

  2. The impact of surveillance on daily life

The second reason threats to privacy receive only a lukewarm response concerns a lack of awareness about the power and impact of knowledge-production through surveillance. “Most people who just go about their ordinary lives not really thinking about how much knowledge is power. It’s cliché, we say knowledge is power… but people don’t actually understand what that means… they don’t actually know what can happen when you amass [sensitive knowledge] about other people.”

To illustrate her point, Crockford referred to the passive attitude most people take to data about their online shopping. “People often say I don’t really care if Google monitors me because, yeah it’s a little weird when I go to this one website and I look for a pair of shoes and then I go to another website and the shoes have followed me all around the internet. Yeah, that’s a little weird, but it’s just shoes. What people don’t realise is that that same process that is used to collect information about what kinds of shoes you want and then target you with advertising, can also be used to narrow, constrict, and effectively control choices that you would be much more freaked out about.”

“For example,” Crockford continued, “political choices that you are making, choices that are determining what University you are going to, or if you are going to go to University instead of going into vocational training, choices related to what kinds of credit score you are going to get or whether or not you will be approved for a home mortgage loan. These are the kinds of things that people simply do not understand. These decisions are made by other people for them, based on information that is collected about them.” Crockford was referring to the way that surveillance data is used to categorise people, and grant or deny them access to opportunities ranging from the educational and occupational to the economic and political. For instance, our access to political information is increasingly shaped by algorithms which collect data about our online history and then influence our newsfeeds, which may have a serious impact on political opinion and democratic elections.

Despite the powerful role that data can play in integral parts of daily life, Crockford explains that most of us aren’t aware that any of this is going on: surveillance is conducted without the traditional, obvious trademarks of binoculars or cameras. Instead, our information is collected from a distance, without consent, and through digital processes. All we see is “what’s in front of you on Google.com.”

Crockford confirms many of the concerns that surveillance scholars have discussed in recent work about the developing surveillance societies around the world: the loss of privacy is becoming a norm, and a lack of awareness about the resulting impacts means citizens are not voicing much concern. Education about the benefits and risks of surveillance is much needed.

See: https://hrcessex.wordpress.com/2017/07/13/people-just-dont-get-it-an-interview-with-kade-crockford-of-the-aclu-of-massachusetts-about-why-surveillance-issues-arent-getting-the-attention-they-deserve/#more-1517

 

The Police’s Data Visibility Part 2: The Limitations of Data Visibility for Predicting Police Misconduct

In part 1 of this blog, I suggested that raising the police’s data visibility may improve opportunities to analyse and predict fatal force incidents. In doing so, data visibility may offer a solution to problems related to high numbers of fatal force incidents in the US. However, data visibility is not without limitations. Problems including the (un)willingness of data scientists and police organisations to cooperate, and the (un)willingness of police organisations to institute changes based on the findings of data scientists’ work, must be considered before optimistically declaring data visibility a solution to problems related to fatal force. In this blog, I discuss two additional limitations of data visibility: low-quality data and low-quality responses to early intervention programs. Both are problems related to the prediction and intervention stages of using data to reduce fatal force incidents. Future blogs can discuss issues related to the earlier stages of using data to reduce fatal force incidents, such as the collection and storage of data about police work.

Low-quality data: As they are still relatively new technologies, it is hard to assess algorithms designed to predict fatal force. However, we can learn about the limitations of these algorithms by drawing on research about the police’s attempts to use algorithms to predict and pre-empt crime. “Predictive policing” uses digital data about previous crimes to predict where crime is most likely to occur in the future and who is most likely to engage in criminal behaviour. Despite police departments’ recent and rapid adoption of predictive policing software such as PredPol, the effectiveness of predictive policing has been subject to critique for several reasons. Among these reasons are concerns about the low accuracy of the data fed to predictive policing software. This “input data” has been described as inaccurate and incomplete due to systemic biases in police work. For example, police officers’ tendency to focus on certain types of crime, certain types of spaces, and certain social groups, while leaving other crime and other spaces unaddressed, creates unrepresentative data suggesting problematic correlations between impoverished spaces, racial groups, and crime. When analysed by predictive software, this biased data is likely to produce predictions which contain both false positives and false negatives. Accordingly, if high-quantity but low-quality input data is used to predict fatal force incidents, similar problems may arise. For example, input data which is inaccurate, incomplete, or skewed (the result of police organisations’ failure to accurately document police work, especially use of force incidents) may produce inaccurate calculations from predictive software. These inaccurate predictions may then lead early intervention programs to target low-risk officers while neglecting high-risk officers.
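To make the concern about skewed input data concrete, below is a minimal, hypothetical simulation (not drawn from any study cited in this post; the area names, patrol shares, and incident rate are invented for illustration). It sketches how recorded-crime data can come to reflect where officers patrol rather than where crime actually occurs, so that a predictor trained on such data simply reproduces the initial allocation instead of correcting it.

```python
# Hypothetical sketch: two areas with IDENTICAL true incident rates, but one is
# patrolled more heavily, so more incidents are *recorded* there. A "predictive"
# step that allocates next week's patrols according to recorded counts then
# perpetuates the initial bias.

import random

random.seed(42)

TRUE_RATE = 0.3                                   # identical underlying rate in both areas
patrol_share = {"Area A": 0.7, "Area B": 0.3}     # biased initial patrol allocation
recorded = {"Area A": 0, "Area B": 0}

for week in range(52):
    for area, share in patrol_share.items():
        # Incidents are only recorded when officers are present to observe them,
        # so recorded counts scale with patrol intensity, not with the true rate.
        patrols = int(share * 100)
        recorded[area] += sum(random.random() < TRUE_RATE for _ in range(patrols))

    # "Predictive" step: next week's patrols follow the cumulative recorded counts.
    total = sum(recorded.values())
    patrol_share = {area: count / total for area, count in recorded.items()}

print("True incident rate (both areas):", TRUE_RATE)
print("Recorded incidents after one year:", recorded)
print("Final patrol allocation:", {a: round(s, 2) for a, s in patrol_share.items()})
# Despite identical true rates, recorded incidents concentrate in Area A, and the
# prediction sustains the original patrol bias rather than correcting it.
```

A similar dynamic could apply to use of force records: if some officers’ conduct is documented more thoroughly than others’, risk scores produced from that data will partly track documentation practices rather than actual behaviour.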

Low-quality response: In addition to concerns about their ability to accurately identify high-risk officers, there are several concerns about the practicalities of early intervention programs. For instance, there are reasons to believe that, even if high-risk officers are accurately identified by data scientists, interventions will not be taken seriously. A police department in New Orleans, for example, faced difficulties persuading its officers to take interventions seriously after they began to mock data collection efforts, and even considered being flagged as high-risk by an algorithm a “badge of honour.” Some officers began to refer to interventions such as re-training programs as “bad boy school” and saw inclusion as a matter of pride rather than something to be taken seriously. The difficulty of getting officers to take interventions seriously suggests that even if data scientists can construct an algorithm which accurately flags high-risk officers, there is no guarantee that ensuing attempts to improve police behaviour will be effective, especially if police are unwilling to accept interventions. Furthermore, even if officers do not ridicule interventions, there is no guarantee that interventions will receive the support from police organisations that may be required. For example, studies show that interventions often suffer from administrative neglect and delays, and can be error-ridden and sloppy, leading to a failure to transform the organisational and social culture of a police department.

Conclusion

By raising police officers’ data visibility, police organisations, with the help of data scientists, can engage in comprehensive analysis of fatal force incidents and produce programs designed to identify high-risk officers and successfully intervene through re-training, counselling, or substantive changes to use of force policy. However, several unknowns play a key role in determining the implications of data visibility and predictive analytics, including the inclusion/exclusion of data, false positives/negatives, and the social forces which determine whether interventions will be taken seriously by officers. Each of these unknowns requires detailed study before trying to walk a logical pathway from data visibility to a reduction in fatal force incidents.

Originally posted via the HRBDT Blog.