Title: From citoyen to techno-social hybrid – surveillance in modern societies

Speaker: Reinhard Kreissl

Keynote: GERN workshop “Usages des nouvelles technologies dans les domaines de la sécurité et de la justice pénale” (Uses of new technologies in the fields of security and criminal justice)

Date: December 2016


What I would like to do in my introductory remarks is look at a number of areas or sites where different aspects of the concept of surveillance can be investigated, and take a brief look, in a broader context, at the social, economic and cultural processes that provide the basis for the growth of different forms and types of surveillance in contemporary societies. I will remain at the conceptual level, using exemplary cases to demonstrate my point.

Surveillance is a rather vague term, as are security and privacy. These two terms are frequently introduced to make sense of surveillance, and they support conflicting interpretations of, and normative positions on, surveillance practices. We are all familiar with the debate over security vs. privacy, which gains momentum every time new threats to security are identified. Such new threats are used to justify new and more surveillance, and we are told that the price for keeping up security is giving up privacy. This trade-off kind of thinking, sacrificing or swapping privacy for security, is a dead end.

I think it is important to look at surveillance from a number of different angles in different areas, highlighting different aspects of this fuzzy concept. Surveillance as we see it today, as a pervasive feature, is not only about “security” in the narrow sense of law enforcement and policing. It comes in different shapes and sizes, has different drivers and produces different dynamics and different effects – and the unintended effects often outweigh the intended ones. Surveillance can also be analysed from different positions. Surveillance from the point of view of the watcher is different from the perspective held by the watched. It can be seen as a unidirectional or a bi-directional process (as between cyber attackers and their counterparts). Surveillance can be tied into and reinforce existing social hierarchies and power relations, but it can also occur in egalitarian social arrangements. So we should not only consider surveillance, but also sousveillance and interveillance (uploading YouTube videos of police brutality; spouses checking the e-mails or mobile phone records of their partners).

What all these different forms have in common, and what I think justifies the use of the umbrella term surveillance, is the fact that they are linked to projects of governance and control, albeit in quite different ways.

1.

Simple and prominent cases of surveillance as a targeted strategy can be found in the realm of law enforcement. Here one might look at the well-rehearsed example of CCTV cameras in public space, the electronic eye, as Clive Norris called it. This is surveillance in the tradition of the Panopticon: the world is watched, surveilled and monitored from the centre of control, based on the fantasy of a god’s-eye perspective. The questions then are: who is surveilled, how, and for what purpose? Is surveillance targeting a specific individual in real time, or is the object of surveillance a space, a location? And what is the operational logic of the practice?

Targeting and tracking a defined person, or screening a setting, a space, a process for anomalies and deviations creates different surveillance arrangements.

Observing and watching from a remote position to gain information about an object or person looks like a simple and straightforward approach to surveillance. But this approach produces massive amounts of data, and these data have to be processed in real time for the surveillance to be effective.

Much research money is spent on finding ways to transfer this processing task to machines. The idea driving this research is to use pattern recognition or NLP technologies, e.g. searching for catchwords in electronic communication, or for deviations from standardized visual patterns and frequencies, to automatically detect relevant events deserving the attention of the central authorities of control. The promise of panoptic surveillance is to alert law enforcement authorities to critical situations and human suspects.
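
To make this operational logic concrete, here is a minimal, purely illustrative sketch (in Python) of catchword-based flagging of electronic communication; the watchlist terms and the messages are invented for this example and are not drawn from any real system.

    # Illustrative sketch: catchword-based flagging of messages for human review.
    # The watchlist and the messages are invented examples, not real data.
    WATCHLIST = {"detonator", "safehouse", "wire transfer"}

    def is_flagged(message: str) -> bool:
        """Return True if the message contains any watchlist term."""
        text = message.lower()
        return any(term in text for term in WATCHLIST)

    messages = [
        "the wire transfer for the rent arrived yesterday",  # innocuous, but flagged
        "see you at the cinema at eight",
    ]
    flagged = [m for m in messages if is_flagged(m)]
    print(len(flagged), "of", len(messages), "messages flagged for human review")

Even this toy example shows that automated detection shifts rather than removes the burden: every flagged item, including the harmless ones, still has to be reviewed and acted upon by a human operator.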

Now surveillance technology may dramatically expand the realm of what can be seen, heard, tracked or identified, and in the future automated detection may turn out to be a solution for the pressing problem of data overload. But these surveillance technologies do not provide new means for intervening in a given situation. Take again the case of CCTV in public space. While relevant events may be captured on camera and perhaps even identified as such in time by the remote observer, the camera cannot intervene, and despite entertaining the fantasy of taking a god’s-eye perspective, the capabilities to act on the basis of what is seen or heard are definitely not divine.

This problem is now widely acknowledged, also among the supporters of surveillance, and so they have shifted the focus of their arguments from prevention and intervention in real time to prosecution, pointing out that surveillance produces evidence for police and prosecutors after the event. As a practical strategy for intervening in real time, this approach is limited by very trivial resource problems. Surveillance may help to detect more situations requiring intervention, but available resources limit the number of situations that can be addressed at a given time. The urban eye lacks the enforcing urban feet and hands. (This problem also emerges when law enforcement agencies try to activate citizens in the co-creation of surveillance, e.g. in so-called community policing programmes.)

1.b

When surveillance data are put to investigative use, there are some success stories: suspects have been identified by law enforcement screening, for example, video footage from a crime scene. But the more elaborate forms of using surveillance data, e.g. in dragnet operations, produce a different type of problem, and new limitations emerge. Here we encounter the problem of the needle in the haystack.

Dragnets introduce a strategy of surveillance that is different from the simple version. In the simple version, the starting point typically is an explicitly identified individual (a terrorist suspect), and surveillance technology is then applied to track and observe the actions and movements of such individuals in space (tracking their phones, their use of credit cards, etc.). Or, in the other variant, surveillance helps to identify in real time a person in an environment under observation who deserves special interest and closer scrutiny, e.g. when a camera installed in a train captures an act of vandalism and the police are dispatched to arrest the perpetrator.

The logic of the dragnet as a surveillance practice is different. The starting point here is not an identified individual but a set of hypotheses and assumptions defining a group or a type of person with specific features. Searching surveillance data for individuals displaying these features on a nationwide basis is a common practice, but in many cases it produces a significant number of false positives. The NSA, for example, applied search procedures to identify supporters of Islamist terrorist movements living in the US. They looked at travel patterns, financial flows, religious affiliations, residential status, communication meta-data and a number of other features, only to find out that several hundred thousand persons living in the US would qualify as potential terrorist supporters. The operation, processing massive amounts of data, not only violated basic principles of privacy and data protection but also proved not very successful. Keeping this group under close supervision and surveillance or putting them on no-fly lists was simply beyond the practical capabilities of the security forces.
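
The needle-in-the-haystack problem can be made concrete with a back-of-the-envelope calculation. All numbers below are invented for illustration and bear no relation to the actual NSA operation; the point is the base-rate logic: when the targeted group is a tiny fraction of the screened population, even a fairly accurate profile produces vastly more false positives than true hits.

    # Illustrative base-rate calculation with invented numbers.
    population = 300_000_000     # persons screened
    prevalence = 1 / 1_000_000   # assume one in a million is an actual supporter
    sensitivity = 0.95           # probability that a real supporter matches the profile
    false_positive_rate = 0.001  # probability that an uninvolved person matches it

    true_targets = population * prevalence
    true_positives = true_targets * sensitivity
    false_positives = (population - true_targets) * false_positive_rate

    print(f"persons flagged: {true_positives + false_positives:,.0f}")
    print(f"of whom actual targets: {true_positives:,.0f}")
    # With these invented numbers roughly 300,000 persons are flagged,
    # of whom fewer than 300 are actual targets.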

Looking at the publicly available evidence, the results of this more elaborate kind of surveillance, despite its techno-sci-fi appeal, are in general rather sobering. Problems have to be addressed not only upstream, when analysing the data, but also downstream, when the results of such analyses are put to operational use in a law enforcement context. It is also difficult to assess the effectiveness of these strategies systematically, since the security institutions are very reluctant to release information about their operations. But there are some indicative cases demonstrating the type of problems occurring downstream. Recently it was revealed that intelligence about the Paris attackers had been available to the Belgian police but was not used. In Germany, the most famous case is the kidnapping of the industrialist Hanns Martin Schleyer by an RAF commando in September 1977. A dragnet operation was launched to narrow down the number of places where the hostage might be kept by the kidnappers. Horst Herold, at that time the president of the Bundeskriminalamt (BKA), the German federal criminal police office, believed in modern data processing as the gold standard of policing, and so he ordered the screening of different public databases to locate the kidnappers, who intended to free their comrades from prison in exchange for Schleyer. The plan failed and Schleyer was killed by the RAF. As it turned out after the event, the dragnet operation had in fact been successful and had produced a hit: the apartment where the RAF kept their hostage was on a list produced in the dragnet operation. But since the whole security apparatus was in turmoil during this crisis and chaos ruled, the information sat on the desk of one of the police officers and remained unused.

A somewhat different situation emerged after the 9/11 attacks. American security services were heavily criticised for not having stepped in to prevent the attack. I think it was Madeleine Albright or Richard Allen who told this story. At the daily morning briefing with the President, the different security agencies present an updated threat assessment. They compile intelligence gathered around the globe into a five-minute policy brief. This highly condensed synthesis of surveillance data from different sources is supposed to inform the President about upcoming threats. In hindsight, it turned out that information about the attackers had been buried somewhere in the haystack of intelligence, but had not been flagged properly and in time. Upset about the critique of their performance, the heads of the security services suggested sending a couple of computer disks with daily updated, comprehensive intelligence to the White House every morning, and leaving it to the National Security advisers to crawl through gigabytes of information to decide on the course of action.

Where does this leave us? Panoptic surveillance – the real-time, technology-based, data-gathering watching, listening, detecting and tracking of objects and persons – can widen the gaze of a central observer, but the more it spreads, the more the panoptic centre falls short in acting on the intelligence produced. Screening the large data sets produced by this kind of surveillance for the preventive identification of threats and potential wrongdoers runs into the problems of finding the proper algorithms for selection and applying the right filters.

Although massive panoptic surveillance is in most cases inefficient, it is of course not without effects on individuals, on society and on the internal workings of the intelligence and law enforcement apparatus. (Security theatre!)

2.

A different approach to surveillance, found mostly outside law enforcement contexts, can be analysed by looking at surveillance strategies designed to govern different types of flows and socio-cultural routines. This form of surveillance-based governance can be performed in different ways.

The range of choices among different paths of action a person has in a given situation can be shaped and narrowed down through remote or automated control, backed by surveillance loops. The classical, often cited and simplest case is the one-to-one interaction between a client and an ATM, where the machine governs the process by which a human gains access to cash. The process starts with the insertion of a cash card; the machine compares the information on this card with information stored in a remote data centre to identify the customer as a legitimate user. Following a sequence of predefined steps, e.g. punching in the proper PIN code and pressing the number keys to specify the amount of cash requested, triggers an automated check of the available funds in the bank’s computer. When this check yields a positive result, the machine releases the requested bills. No negotiation or deviation is foreseen in this routine procedure. It is governed at a distance by a process that creates a myriad of data about the user, whose range of action is determined by the technological arrangement of the ATM providing the interface to a central financial hub. (Other examples: swipe cards as door openers, governing movement in space.)
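
The rigidity of this arrangement can be caricatured in a few lines of code. The sketch below is schematic and does not describe any real banking protocol; the account record and the audit log simply stand in for the bank's central systems. The point is that every step is a predefined check against a remote record, each step leaves a data trace, and there is no room for negotiation.

    # Schematic sketch: an ATM withdrawal as a fixed sequence of remote checks.
    ACCOUNTS = {"card-123": {"pin": "4321", "balance": 500}}
    AUDIT_LOG = []  # every step leaves a data trace in the central system

    def withdraw(card_id: str, pin: str, amount: int) -> str:
        AUDIT_LOG.append((card_id, "card_inserted"))
        account = ACCOUNTS.get(card_id)
        if account is None:
            return "card rejected"
        if pin != account["pin"]:
            AUDIT_LOG.append((card_id, "wrong_pin"))
            return "PIN rejected"
        if amount > account["balance"]:
            AUDIT_LOG.append((card_id, "insufficient_funds"))
            return "amount refused"
        account["balance"] -= amount
        AUDIT_LOG.append((card_id, f"dispensed_{amount}"))
        return "cash dispensed"

    print(withdraw("card-123", "4321", 100))  # cash dispensed
    print(AUDIT_LOG)                          # the myriad of data mentioned above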

2.b

Another strategy, following a different logic, in which surveillance is applied to govern flows and human actions at a distance, is the algorithmic management and allocation of dispersed resources in a market environment. ICT-based work environments provide good examples here. In its most advanced and scary version, this surveillance-based management of flows can be studied in a new form of work that has been labelled the “gig economy”. Companies like Uber or Amazon offer transportation or delivery services to customers, executed by a work force consisting of formally self-employed individuals. Using a complex algorithmic regime, such business models create revenue by linking demand and supply and collecting a fee every time they connect a client to a service. The staff working for companies like Uber are legally self-employed and offer the company their service as independent drivers. To manage the business relation between these two formally independent parties, communication via apps and mobile phones is used.

Comparing this system with the governance of traditional taxi services in urban areas, the differences are easily spotted. Taxis are either hailed by customers on the street, or the driver connects to a radio system operated from a central office sending out requests. When a client stops a taxi, no external actor is involved.

When a request is transmitted via the radio system, it is addressed to all cabs cruising the city. Each driver can respond and the dispatcher then assigns the ride to the closest cab.

Under the new Uber-type regime, drivers have to wait to be individually addressed by the central dispatcher system, on a one-to-one basis, to be assigned a job. Their cars are not visibly marked as a taxi service, so no immediate contact with potential customers is foreseen. The communication coordinating the service is limited to targeted messages the central system sends out to individual drivers. This system uses an algorithmic ranking based on undisclosed performance indicators, so that some drivers get preferential treatment when jobs come up. By applying such a regime of selective assignment and using different thresholds for individual workers, the companies, despite a high turnover of drivers, build up and maintain a work force meeting standards of performance defined by the management. Companies like Uber or Amazon provide mobility in the widest sense. They manage flows of individuals and goods on the basis of a highly fluid infrastructure that utilises surveillance practices to govern a large number of individuals. These individuals work for the companies under computer-based surveillance to satisfy volatile needs emerging in modern, mobile consumer societies.
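
A hypothetical sketch of such a selective assignment regime is given below. The driver records, scoring weights and deactivation threshold are all invented; real platforms such as Uber do not disclose their ranking criteria, which is precisely the point made above.

    # Hypothetical sketch: one-to-one dispatch based on an opaque performance score.
    drivers = [
        {"id": "d1", "distance_km": 1.2, "acceptance_rate": 0.95, "rating": 4.9, "active": True},
        {"id": "d2", "distance_km": 0.8, "acceptance_rate": 0.60, "rating": 4.4, "active": True},
        {"id": "d3", "distance_km": 2.5, "acceptance_rate": 0.90, "rating": 4.8, "active": True},
    ]

    def score(driver: dict) -> float:
        # Undisclosed performance indicators, weighted in a way drivers never see.
        return (driver["acceptance_rate"] * 2.0
                + driver["rating"] * 1.0
                - driver["distance_km"] * 0.5)

    def enforce_threshold(pool: list, minimum: float = 5.3) -> None:
        for d in pool:
            if score(d) < minimum:
                d["active"] = False  # silently dropped from the assignment pool

    def dispatch(pool: list) -> str:
        eligible = [d for d in pool if d["active"]]
        best = max(eligible, key=score)
        return best["id"]  # only this driver is contacted, one-to-one

    enforce_threshold(drivers)                   # "d2" falls below the invented threshold
    print("job assigned to", dispatch(drivers))  # "d1"

The worker, of course, sees only the assignment or the silence, never the score.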

Panoptic surveillance is designed to identify, track or locate specific individuals. The surveillance strategies supporting new forms of exploitation like the gig economy use surveillance to constantly monitor the performance of a work force and to manage the flow or circulation of persons according to volatile, market-driven and company-defined needs. The primary objective of this form of surveillance is not exclusion or prevention but the maintenance of a dissipative equilibrium state through a myriad of micro-decisions governed by algorithms (processing surveillance data). While this form of surveillance of course has massive exclusionary effects, exclusion is not its main purpose.

3.

A third way of looking at surveillance focuses on processes of social sorting and could be addressed under the heading of surveillance as a side effect. Prominent examples of social sorting are credit scoring or strategies of so-called customer relationship management and targeted marketing. There is a certain overlap with panoptic surveillance, since this surveillance strategy targets the whole of society and also operates with crisp categories, sorting individuals based on different predefined features. But the overall objective is not exclusion based on threat and danger; it is to assign individuals to categories of consumption and to relate to them in specifically targeted ways. Social sorting began to gain momentum with the exponential growth of person-related data, facilitated by the introduction of advanced data-processing technology and the rise of electronic consumerism under the regime of electronically mediated communication.

Citizens under this regime are seen as leaking data containers, leaving continuous traces of information. With every step and move they make, they contribute to the production of comprehensive data doubles. Such data traces are collected by a large number of public and private organisations, which exchange, trade and process these data, in most cases without the knowledge of the data subject, using undisclosed algorithmic processes to adapt and refine organisational and marketing strategies.

In principle, legal provisions require individual consent for such data collection and processing. National and European law makers are working hard to provide adequate protection of citizens’ private sphere in a regulatory framework compliant with fundamental rights. (Kosinski / Stillwell)

But the problem is: as a member of this society you are trapped. In order to manage daily life in contemporary societies, citizens are forced to link up to a myriad of data streams. Using computers linked to the Internet, mobile phones, electronic cash, or travelling by plane, train and car produces information about the user. And before entering into their first transaction with Google, Facebook, Amazon, mobile phone providers, car rental services, airlines, booking websites or credit card providers, users have to accept the terms and conditions set by these companies. And who has ever read (or understood) these regulations? Everyone accepts by ticking the box. And when we don’t click the “agree” button, access to services and information is denied – no mobile phone, credit card or holiday trip!

Add to this the medical service providers and local and state bureaucracies collecting person-related information, and the idea of a data double, cloning an individual in the virtual sphere, seems to be a valid description of the present state of affairs.

Surveillance, from this angle, can be considered a side effect, since the practices and strategies applied are based on the analysis of person-related data collected for a wide variety of very different purposes in a variety of contexts unrelated to surveillance.

Now the interesting point is this: compared to the public debates and concern about surveillance practices implemented in the context of law enforcement, the general public or typical users often see no obvious disadvantages here; no serious concerns about privacy and security are raised. One important motive for the wide acceptance of the risk that person-related data can be used for massive, invasive surveillance seems to be the convenience that comes with electronic consumerism: the seduction of easy access to services, one-stop shopping on the Internet, or cash bonuses on loyalty cards outweighs any critical second thoughts. Although, according to recent surveys, privacy and security concerns are growing among customers in the area of electronic consumerism, the spread of new gadgets that allow for new areas of data collection and remote control – the so-called Internet of Things – is still tremendous, showing significant growth rates.

The rationale of this form of surveillance is best captured in the image of the data double. Citizens who provide personal data, constantly leak information about their movements and even have their medical status documented in a machine-readable format actively and more or less deliberately contribute to the construction of a virtual twin. This twin is defined by properties relevant for consumer marketing: personal habits and preferences, financial status and credit history, political, religious and sexual orientation, general psychological profile, etc.

Two interrelated strategies are pursued here. On the one hand, an individual person may be addressed using the information read off the virtual twin. This creates a strongly asymmetrical situation, where the targeted individual has no information, but the person or agency approaching this individual may know a lot about this specific person. (Gay marriage and election campaign) On the other hand, by analysing and comparing such data doubles on a large scale, using a kind of survey approach, new categories are developed to be applied to the grouping of real-world persons. The general logic of surveillance operative here helps the company or institution decide what kind of services or goods will be provided or refused to an individual.

(Walmart! electronic cards for food stamps)

Looking at the societal effects of this form of surveillance, two long-term developments should be emphasized. On the one hand, under the regime of governing social action, a growing number of daily routines and decisions are transferred into the virtual sphere or executed by machines. This affects the range of human agency and the mastery and acquisition of cultural competences. Leaving shopping decisions to a smart kitchen or washing machine, relying on GPS for navigation in an unfamiliar environment, or activating the auto-correct and spelling function in a text program may seem convenient, but it also entails a form of disempowerment of the human actor. Just as surveillance technologies devalue human capabilities, acquired in the course of human evolution, to identify and react to threats and dangerous situations, the transfer of routines to machines and algorithms that constantly monitor the human agent devalues the repertoire of culturally transmitted knowledge and competence.

On the other hand, basic elements of citizenship and the social contract are eroding as a consequence of comprehensive social sorting strategies. A good example to demonstrate the latter point is the recent trend of insurance companies offering individually tailored contractual arrangements to their customers. The premiums for car insurance are then recalculated individually every three months, using data collected by a GPS-powered device installed in the client’s car. This device collects data about the use of the vehicle, the distances and routes driven, and the average speed. Processing these data, a computer calculates the individual insurance rate, based on risk assessments developed for different groups of socially sorted clients. Similar trends can be observed in health insurance, where risk categories, based on medical information increasingly collected through the remote monitoring of all kinds of health and fitness gadgets, are used for the social sorting of individuals. These processes erode the basic principle of insurance, which pools risk across all individuals, who pay the same premium to secure mutual protection.
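
As a schematic illustration of how such individualised pricing works, the sketch below maps invented telematics features onto a quarterly premium. The base premium, risk weights and thresholds are purely hypothetical; actual insurers do not publish their models.

    # Hypothetical sketch: usage-based car insurance pricing from telematics data.
    BASE_QUARTERLY_PREMIUM = 120.0  # invented base rate

    def risk_multiplier(km_driven: float, night_share: float, speeding_events: int) -> float:
        multiplier = 1.0
        if km_driven > 5000:
            multiplier += 0.15          # high mileage
        if night_share > 0.25:
            multiplier += 0.20          # frequent night driving
        multiplier += 0.05 * speeding_events
        return multiplier

    def quarterly_premium(km_driven: float, night_share: float, speeding_events: int) -> float:
        return round(BASE_QUARTERLY_PREMIUM * risk_multiplier(km_driven, night_share, speeding_events), 2)

    # Two clients with identical contracts, sorted into different prices by their data.
    print(quarterly_premium(3000, 0.05, 0))  # 120.0
    print(quarterly_premium(6500, 0.40, 3))  # 180.0

The mutualised premium disappears: each client pays a price calculated from their own data double.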

In a broader sense, massive social sorting can also threaten the foundations of a democratic polity and the public sphere. A significant part of public discourse and political deliberation has moved into the virtual sphere. Early optimistic scenarios conceived of the Internet as a new way to spread knowledge widely and to provide the basis for new forms of decentralized, networked political action. But since electronically mediated communication flows are centrally governed by global companies, users may be exposed selectively to targeted information, strategically shaping their view of a specific politically relevant problem or their overall world view. (Echo chambers; more than 50 % of Internet traffic is produced by bots.)

4.

Now all three approaches to surveillance – panoptic control, the governing of flows and human action in a market environment, and social sorting – combine and overlap in modern societies. And for each there are substantial economic drivers pushing the trend towards an extension and expansion of surveillance. Companies producing new technologies applicable for surveillance purposes, but also research and development focusing on new security technologies, are booming sectors in science and business (20% growth rate/FP7). The clients here come primarily from the public sector, where expenditures in the security sector – budgets for intelligence services, police and the fight against the newly discovered cyber threats to critical infrastructures – have soared.

In the private sector, the increase in commercially available person-related data fuels the hype about Big Data and related new approaches like Deep Learning (AI). The emerging Internet of Things opens new profitable opportunities for providers and producers of software and hardware. All these areas where surveillance comes into play are considered hot fields for investors looking for high returns. Data, so the popular slogan goes, are the new gold, producing a gold rush in the virtual sphere.

While the economic drivers and the different underlying business models have to be analysed and considered to understand the impact, spread and development of surveillance in contemporary societies, each type displays its own logic.

Panoptic surveillance, as I have tried to argue, is still spreading and extending, cutting deeper into all areas of society, but it does not produce the intended effects. Police, national intelligence agencies and security services operate in the frame of a disciplinary society and still have to come to an understanding of their role and proper strategy in control societies. The idea of a technologically upgraded central scrutinizer controlling law-abiding behaviour, preventing criminal acts and identifying law breakers still guides law enforcement. Expanding panoptic surveillance based on technology to produce more data about individuals is a self-defeating strategy. The paradoxical effect of this strategy is that more, and presumably better, data yield less intelligence. While the smarter people in the intelligence community are starting to understand this problem, the operational strategies and policies of the institutions are lagging behind.

Substantial effects can be found when looking at surveillance as a functional and important element in strategies for governing flows, or for guiding and moulding human action. Surveillance based on new technologies enables advanced forms of governing economic processes emerging in the growing field of mobility or circulation services. As the gig economy example demonstrates, this form of technology-based surveillance supports new forms of exploitation and control of the work force in traditional and advanced business models (workplace surveillance and KPIs). At the level of mundane, everyday routine behaviour, surveillance is an important element in what could be called the governance of the normal. Human behaviour and movement are monitored and assessed against a standard of (system-defined) normality, using technological assemblages combining surveillance and material technology, software and hardware. The relational structure linking the parties in these surveillance arrangements is centralized, allowing for governance from local or global hubs. Governance by remote control of dispersed individual agents is performed by the central processing unit on the basis of a one-to-one exchange. Horizontal communication and exchange is neither foreseen nor necessary, neither supported nor enforced.

The most interesting and, as regards its effects, probably most pervasive and complex case is social sorting. Here surveillance emerges as a side effect of the massive generation of digital person-related data feeding into machine-based analysis and paving the way to a society where citizens are constantly and comprehensively monitored and assigned a specific status based on social sorting. This status defines access to, and exclusion from, services, information and places. Decisions about this status are based on algorithmic calculation and remain opaque to the affected individual. What can be seen and heard, what is believed and desired, increasingly depends on social sorting procedures.

Human agency in this context is undergoing a change towards increased dependency on abstract systems, expert knowledge and electronically mediated forms of exchange. And access to such systems and knowledge can be controlled and selectively enabled by information providers.

Citizens become techno-social hybrids, hooked into a networked system, and this also affects human capacities to relate to the world. Assuming this is a valid diagnosis, one might ask whether and how categories of social analysis and critique are still applicable to understand contemporary western societies (individual/private/public/truth, etc.). I have briefly touched upon the widely discussed effects of social media on democratic procedures and institutions, and on normative ideas and ideals of citizens conceived as equal and equally informed, to show how the social contract can be undermined by surveillance-based governance.

Surveillance-based governance not only affects the market and the capabilities of the state but also has an impact on the very infrastructure shaping the societal relations of communication, which constitute the basis for social coordination and interaction. This infrastructure develops and changes over time in co-evolution with technologies of communication, from Gutenberg to Zuckerberg. And it shapes and forms individual and collective reasoning and action alike. At the individual level, a trend towards devaluation and disempowerment can be observed. With regard to the societal level, the core question is: how can a society or a group develop a collective representation through deliberation in the age of Facebook? And how are such deliberative processes organized, and how susceptible are they to targeted manipulation?

Privacy scholars have observed a chilling effect of surveillance, destroying the dialectical dynamic of private and public. Citizens who fear being constantly monitored will lose the courage to engage in activities that are recorded by a central authority and could be used against them. But I think there is a more fundamental problem here: the increasing fragmentation of society into isolated bubbles and communities, no longer reproducing through a shared social or communal praxis but loosely connected and reproducing primarily via centrally governed social media networks, destroys the very fabric of this society.

So, in a way, one might conclude that surveillance has become ubiquitous and has spread due to the evolution of technologies that are beginning to reshape the realm of the social in fundamental ways. Whether this should be seen as a dystopian trajectory or not remains an open question and is subject to further inquiry. I think we have not yet found a good and forward-looking answer to the question of what it means to live in a surveillance society.