Black Box by Algorithmic Governance Research Network


Technology is not neutral; it is political. How do we understand the algorithmic restructuring of relations of power, governance, organization, and the ordering of social life? Join Tereza Østbø Kuldova in a series of conversations with prominent scholars of the algorithmic world, discussing topics such as work and labour rights, security, democracy and justice, as well as the consequences of the datafication of knowledge, and beyond.
Society & Culture

Episodes

Episode 2: Conversation with Simon Egbert and Matthias Leese on Criminal Futures: Predictive Policing and Everyday Police Work
09-04-2021
Joining me today are Simon Egbert, Postdoctoral Fellow at Bielefeld University on the ERC research project The Future of Prediction, and Matthias Leese, Senior Researcher at the Center for Security Studies (CSS) in Zürich, to discuss their recent book Criminal Futures: Predictive Policing and Everyday Police Work, published in 2021 with Routledge. The book is available to download open access here.

Today we discuss predictive policing and the ways in which it is transforming police work. Police departments across the globe are embracing algorithmic techniques to support decision-making through risk assessments and predictions based on big data and real-time analytics, utilizing tools such as facial recognition. Silicon Valley's 'technological solutionism', to use Evgeny Morozov's concept, has been making its way into law enforcement agencies worldwide, promising to smoothly, efficiently, and effortlessly anticipate, predict, and control (future) criminal behaviour and deviance.

But predictive policing has met with resistance from civil society and academics alike. Even though data-driven predictions and algorithmic risk assessments are sold by tech developers as 'neutral' and 'objective' forms of 'evidence' and 'intelligence' – because technological – as something 'solid' and 'hard' in 'liquid times', critical social scientists tend to know better. What counts as data and how it is collected, what is included and what is excluded: all of this reflects historical, representational, cultural, gender, and other inequalities and biases. Prejudices about the criminality of certain groups can be built into crime data, reinforcing those prejudices rather than dispelling them. We increasingly read about systems trained on biased and 'dirty' data, about 'rogue algorithms' and 'algorithmic injustice', and about violations of human rights and civil liberties. As Cathy O'Neil put it, algorithms can create 'a pernicious feedback loop', where 'policing itself spawns new data, which justifies more policing' (O'Neil 2016: 87) – a dynamic illustrated in the sketch below. Last year, acting on these insights, the city of Santa Cruz, California, one of the earliest adopters of predictive policing, became the first US city to ban the use of predictive technologies in policing.

Calls for ethical, transparent, and explainable AI are emerging from within computer science, law, and the social sciences, as well as from policymakers and civil society. It is clear that neither the development nor the adoption of these technologies happens in a cultural, political, or economic vacuum. In many countries, for instance, police forces are facing financial cuts and increasing pressure to outsource certain tasks to private actors, often accompanied by organizational reform. Demands on response time, results, performance, and efficiency are rising while resources may be shrinking, structurally creating a market for a wide range of optimization tools for police work.

Simon Egbert and Matthias Leese have studied predictive policing, the datafication of security, and the transformation of police work ethnographically in Germany and Switzerland. In this podcast, we discuss in detail the reality behind the sleek commercials for predictive policing software tools that promise to forecast crime and control futures. Are we headed towards a dystopian society of total surveillance, social sorting, and control, or a utopia of a perfectly optimized police force? What futures lie ahead for predictive policing, and what will the police force of the future look like?
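To make the feedback loop concrete, here is a minimal toy simulation. It is our own illustrative sketch, not taken from the book or from any real predictive policing system: the district names, crime rates, detection rate, and hotspot-targeting rule are all invented for illustration. Two districts have identical true crime rates, but one starts with more recorded crime in the historical data; patrols follow the records, and the records in turn follow the patrols.

```python
# Toy simulation of O'Neil's 'pernicious feedback loop' (illustrative
# assumptions only). Both districts have the SAME true crime rate, but
# district A starts with more recorded crime. Each period the system
# flags the district with the higher count as the hotspot and
# concentrates patrols there; since crime is mostly recorded where
# police patrol, the initial bias feeds on itself.

TRUE_OFFENCES = 100    # actual offences per district per period (identical)
DETECTION_RATE = 0.08  # fraction of offences recorded per patrol unit
TOTAL_PATROLS = 10     # patrol units allocated each period

records = {"A": 60, "B": 40}  # biased historical data

for period in range(1, 9):
    # Hotspot targeting: 80% of patrols go to the flagged district.
    hotspot = max(records, key=records.get)
    patrols = {d: (0.8 if d == hotspot else 0.2) * TOTAL_PATROLS
               for d in records}
    for d in records:
        # New records reflect patrol presence, not true crime differences.
        records[d] += round(TRUE_OFFENCES * DETECTION_RATE * patrols[d])
    print(f"period {period}: hotspot={hotspot}, records={records}")

# District A's recorded crime pulls further ahead every period, which in
# turn 'justifies more policing' there, even though true crime is equal.
```

Running the sketch, district A accumulates four times as many new records per period as district B, so the gap in the data widens indefinitely despite identical underlying offending: the system's own output becomes its future evidence.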
Text © Tereza Østbø Kuldova, 2021 Produced with the financial support of The Research Council of Norway under project no. 313626 – Algorithmic Governance and Cultures of Policing: Comparative Perspectives from Norway, India, Brazil, Russia, and South Africa (AGOPOL).
Episode 1: Conversation with Ignas Kalpokas on Algorithmic Governance and the Futures of Politics and Law in the Post-Human Era
31-03-2021
Joining me today is Ignas Kalpokas, Associate Professor at the Department of Public Communication at Vytautas Magnus University in Lithuania, to discuss his recent book Algorithmic Governance: Politics and Law in the Post-Human Era, published in 2019 with Palgrave Macmillan.

Algorithms govern our everyday lives in a myriad of ways: from determining what information and news we see and which commercials target us, to deciding whether we are creditworthy, whether we get a chance to be hired, or, as in the case of many platform workers, whether we get fired. We are rated, and we rate others; we build behavioural nudges into our technologies to optimize the Other; and we keep tracking and optimizing ourselves. Big data, with their promise of data-driven and real-time decision-making, are transforming the ways in which we govern our societies and ourselves. Surveillance, monitoring, and self-policing have become ubiquitous. Shoshana Zuboff speaks of the age of surveillance capitalism, where data is the new oil and futures are monetized. Cathy O'Neil has been warning of the dangers of these 'weapons of math destruction', built on 'dirty data' and driven by concrete political and economic interests, pointing to the consequences of their mass deployment: the emergence of new forms of inequality, the reinforcement of old ones, and the proliferation of new forms of algorithmic injustice.

The COVID-19 pandemic has been managed by governments worldwide through often extreme, ad hoc, and unpredictable measures, further revealing the underlying logic of algorithmic governance, where techno-optimism merges with technocracy. Relying on real-time data, on statistical modelling of possible futures, and on the seductive aesthetics of purity and simplicity of endless graphs, numbers, and risk predictions, new regulations and prohibitions are imposed, and others withdrawn, on a continual basis. As I see it, this form of governance profoundly unsettles our traditional conceptions of law and legal frameworks. It results in law that is personalized – much like a targeted commercial – granting access to certain goods for some while restricting it for others, and in a legal landscape that is dynamic, and hence to a far greater degree unpredictable. This creates further existential insecurities and anxieties in a world that is, rather paradoxically, hyper-focused on security and the securitization of all there is, including health. In my view, the result is a short-term, technocratic, evidence-based, and expert-driven politics – or rather the absence of politics proper.

Ignas, your book unpacks precisely this underlying logic of algorithmic governance, analyzing the meanings of politics and law in what you deem the post-human era. In this conversation, Ignas and I discuss the logic of algorithms and the ways in which they are transforming politics and law, resulting in a hybridization of governance.

"Algorithmic governance—the increasingly prevalent form of governance in this digital world—is characterised by its tackling of problems through 'their effects rather than their causation': Instead of disentangling the multiplicity of causal relationships and getting to the root of every matter, this form of governance is intent on collecting as much data as possible in order to establish robust correlations; in other words, instead of decoding underlying essences, this mode of governance works by way of establishing connections, patterns, and, no less crucially, predictions.
… This attitude that prides itself on replacing causes with trends also has the effect of altering the place of human persons, effectively objectifying and commodifying them, turning them into data generators where the data footprint is all that matters and is taken for the person." (Kalpokas 2019: 2)

Kalpokas, Ignas. 2019. Algorithmic Governance: Politics and Law in the Post-Human Era. Cham: Palgrave Macmillan.

Text © Tereza Østbø Kuldova, 2021 Produced with the financial support of The Research Council of Norway under project no. 313626 – Algorithmic Governance and Cultures of Policing: Comparative Perspectives from Norway, India, Brazil, Russia, and South Africa (AGOPOL).