
Fairness, Biases and Transparency in Algorithms

  • Technical University Vienna

As Bruno Latour reminds us, tools are not apolitical. Engineers have always run the risk of reproducing societal biases and inequalities through their inventions. For example, early color film by Kodak was optimized for lighter skin tones and was inherently incapable of capturing darker skin tones.

What has changed in recent years is that we as a society have come to rely on ever more complex algorithms not only to classify data but to act on the results gained from it. Algorithms now routinely write news articles, trade on the stock market, judge loan applications, and assess the risk of re-offending of defendants in criminal trials. Algorithmic decision-making has become enmeshed in daily life.

While it is relatively easy to see that the results of taking pictures with the Kodak film in the example above were racially biased, assessing fairness becomes much harder when dealing with complex machine learning systems, because biases are most often an emergent property of the algorithm rather than a conscious choice by its programmers. Naively excluding sensitive attributes from the data used to train the model may even make the problem worse, by making biases harder to detect.
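To illustrate this last point, here is a minimal sketch in Python. The data and all names are hypothetical: a "neighborhood" feature acts as a proxy for a sensitive attribute, so a model trained without the sensitive attribute still reproduces the historical bias, while the information needed to detect that bias has been discarded.

```python
# Illustrative sketch only: synthetic data in which a "neighborhood" feature
# acts as a proxy for a (hypothetical) sensitive group attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Sensitive attribute (e.g., group membership); never shown to the model.
group = rng.integers(0, 2, size=n)

# "neighborhood" correlates strongly with group: a proxy variable.
neighborhood = group + rng.normal(0, 0.3, size=n)
income = rng.normal(50, 10, size=n)

# Historical labels encode a bias against group 1.
y = (income + rng.normal(0, 5, size=n) - 8 * group > 46).astype(int)

# "Fairness through unawareness": train only on non-sensitive features.
X = np.column_stack([neighborhood, income])
model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# The bias survives via the proxy: positive rates still differ by group,
# and without the group attribute this disparity is harder to audit.
for g in (0, 1):
    print(f"group {g}: positive rate = {pred[group == g].mean():.2f}")
```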

Moreover, those building the algorithms are usually not trained in law or the social sciences, while experts in discrimination law do not know how to audit modern machine learning algorithms. Further complicating matters, even experts in computer science and mathematics often struggle to interpret the output of many modern machine learning algorithms. Unsurprisingly, assessing and guaranteeing fairness and transparency in machine learning is a wide-open research field.

This raises the question: what can we do today to better understand algorithmic fairness and transparency? What can we do to build fairer algorithms with the knowledge we have now? What policies and legal frameworks are needed to ensure equitable outcomes when machine learning is applied?

The workshop will focus on specific domains in which algorithms are applied in institutional settings, discussing issues of fairness, bias, and transparency as they emerge along the value chain from development to implementation, from the perspectives of both social and computer science. The goal is to raise awareness among technical experts of the external effects of their work and to help social science scholars better understand the complexities and constraints of algorithm development.

Individual contributions should take the state of (critical) discussion in their fields as a starting point and move towards identifying shared problem descriptions, common ground for future debate, and research questions to be addressed.

We think that problems such as the ownership and quality of (training and testing) data, the transparency of computational processes, testing procedures for bias, and conflicts between intellectual property rights (IPR) and the GDPR are of academic and political relevance to social and computer scientists alike. Approaching such problems across disciplinary boundaries will hopefully yield a better shared understanding for both disciplines.
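As one concrete anchor for the "testing procedures for bias" point above, the following sketch computes two simple, widely used audit statistics: the demographic parity difference and the disparate-impact ratio (the "four-fifths rule" from US employment-discrimination practice). The function name, the toy data, and the 0.8 threshold are illustrative assumptions, not a prescription of the workshop.

```python
# A minimal bias-test sketch: compare positive-prediction rates between
# two groups (coded 0 and 1). All inputs here are toy examples.
import numpy as np

def bias_report(predictions: np.ndarray, group: np.ndarray) -> dict:
    """Report group-wise positive rates and two common disparity measures."""
    rate0 = predictions[group == 0].mean()
    rate1 = predictions[group == 1].mean()
    return {
        "rate_group_0": rate0,
        "rate_group_1": rate1,
        "parity_difference": rate1 - rate0,
        "disparate_impact_ratio": min(rate0, rate1) / max(rate0, rate1),
    }

# Example: a toy set of model decisions and group labels.
preds = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0, 1])
grp = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

for name, value in bias_report(preds, grp).items():
    print(f"{name}: {value:.2f}")
# A ratio below ~0.8 is often treated as a red flag for disparate impact.
```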

The conference program will follow here soon.