Paper Session 13, 3/9/21, 14:00–15:45 UTC (FAccT Hub)
Track One

Paper Session 13

Session Chair:
Luke Stark



Leveraging Administrative Data for Bias Audits: Assessing Disparate Coverage with Mobility Data for COVID-19 Policy

Amanda Coston, Neel Guha, Derek Ouyang, Lisa Lu, Alexandra Chouldechova, Daniel E. Ho
View Paper


Anonymized smartphone-based mobility data has been widely adopted in devising and evaluating COVID-19 response strategies such as the targeting of public health resources. Yet little attention has been paid to measurement validity and demographic bias, due in part to the lack of documentation about which users are represented as well as the challenge of obtaining ground truth data on unique visits and demographics. We illustrate how linking large-scale administrative data can enable auditing mobility data for bias in the absence of demographic information and ground truth labels. More precisely, we show that linking voter roll data, which contains individual-level voter turnout for specific voting locations along with race and age, can facilitate the construction of rigorous bias and reliability tests. Using data from North Carolina's 2018 general election, these tests illuminate a sampling bias that is particularly noteworthy in the pandemic context: older and non-white voters are less likely to be captured by mobility data. We show that allocating public health resources based on such mobility data could disproportionately harm high-risk elderly and minority groups.
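The audit idea described in the abstract, comparing visits observed in mobility data against ground-truth turnout from voter rolls per demographic group, can be sketched as a simple coverage calculation. This is a minimal illustration only; the group names and counts below are hypothetical and are not the authors' data or code.

```python
# Hedged sketch of a per-group coverage audit: how completely does the
# mobility data capture visits that we know occurred (from voter rolls)?
# All numbers are invented for illustration.

def coverage_ratio(mobility_visits: int, true_visits: int) -> float:
    """Estimated fraction of ground-truth visits captured by mobility data."""
    return mobility_visits / true_visits

# Hypothetical counts at one voting location, split by age group:
# ground-truth turnout (voter rolls) vs. visits observed in mobility data.
groups = {
    "age_18_40": {"turnout": 500, "mobility": 150},
    "age_65_up": {"turnout": 400, "mobility": 60},
}

ratios = {g: coverage_ratio(v["mobility"], v["turnout"]) for g, v in groups.items()}

# A systematically lower ratio for one group indicates the kind of
# sampling bias the paper reports (e.g., older voters under-captured).
print(ratios)
```

Comparing these ratios across groups is the essence of the bias test: if coverage were demographically uniform, the ratios would be roughly equal.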

Designing Accountable Systems

Severin Kacianka and Alexander Pretschner
View Paper


Accountability is an often-called-for property of technical systems. It is a requirement for algorithmic decision systems, autonomous cyber-physical systems, and software systems in general. As a concept, accountability goes back to the early history of liberalism, where it was proposed as a tool to limit the use of power. This long history has also produced many, often slightly differing, definitions of accountability. The problem software developers now face is understanding what accountability means for their systems and how to reflect it in a system's design. To enable the rigorous study of accountability in a system, we need models suitable for capturing such a varied concept. In this paper, we present a method to express and compare different definitions of accountability using Structural Causal Models. We show how these models can be used to evaluate a system's design and present a small use case based on an autonomous car.
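A structural causal model of the kind the abstract mentions can be represented as a set of deterministic structural equations and evaluated under actual or counterfactual inputs. The toy autonomous-car variables below are illustrative assumptions, not the model from the paper.

```python
# Hedged sketch: a toy structural causal model as deterministic
# structural equations. Variable names are invented for illustration.

# Each endogenous variable is a function of previously computed values.
scm = {
    "sensor_ok": lambda v: v["exogenous_sensor"],
    "brake":     lambda v: v["sensor_ok"] and v["obstacle"],
    "collision": lambda v: v["obstacle"] and not v["brake"],
}

def evaluate(model: dict, exogenous: dict) -> dict:
    """Evaluate structural equations in (topological) insertion order."""
    values = dict(exogenous)
    for var, fn in model.items():
        values[var] = fn(values)
    return values

# Factual run: the sensor failed and there was an obstacle.
actual = evaluate(scm, {"exogenous_sensor": False, "obstacle": True})

# Counterfactual-style query: had the sensor worked, would the car
# have braked and avoided the collision?
counterfactual = evaluate(scm, {"exogenous_sensor": True, "obstacle": True})

print(actual["collision"], counterfactual["collision"])  # True False
```

Queries like this one, comparing outcomes under altered structural inputs, are what make causal models a plausible substrate for comparing accountability definitions: they let one ask who or what, had it behaved differently, would have changed the outcome.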

TILT: A GDPR-Aligned Transparency Information Language and Toolkit for Practical Privacy Engineering

Elias Grünewald, Frank Pallas
View Paper


In this paper, we present TILT, a transparency information language and toolkit explicitly designed to represent and process transparency information in line with the requirements of the GDPR, allowing for a more automated and adaptive use of such information than established, legalese data protection policies do. We provide a detailed analysis of transparency obligations from the GDPR to identify the expressiveness required for a formal transparency language intended to meet the respective legal requirements. In addition, we identify a set of further, non-functional requirements that must be met to foster practical adoption in real-world (web) information systems engineering. On this basis, we specify our formal language and present a fully implemented toolkit around it. We then evaluate the practical applicability of our language and toolkit and demonstrate the additional prospects they unlock through two different use cases: a) the inter-organizational analysis of personal data-related practices, allowing, for instance, data sharing networks to be uncovered based on explicitly announced transparency information, and b) the presentation of formally represented transparency information to users through novel, more comprehensible, and potentially adaptive user interfaces, heightening data subjects' actual informedness about data-related practices and, thus, their sovereignty. Altogether, our transparency information language and toolkit allow, differently from previous work, transparency information to be expressed in line with actual legal requirements and the practices of modern (web) information systems engineering, and thereby pave the way for a multitude of novel possibilities to heighten transparency and user sovereignty in practice.
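To make the general idea concrete, a machine-readable transparency record might look like the structured document below. The field names here are invented for illustration and do not reproduce the actual TILT schema; the point is only that structured records, unlike legalese policy text, can be processed automatically.

```python
import json

# Hedged sketch: a hypothetical structured transparency record.
# Field names are illustrative only, NOT the real TILT schema.
record = {
    "controller": {"name": "Example Corp", "country": "DE"},
    "data_categories": ["email_address", "usage_statistics"],
    "purposes": ["service_provision", "analytics"],
    "storage_duration": "P2Y",  # ISO 8601 duration: two years
    "third_party_recipients": ["Analytics Provider Ltd"],
}

# Because the record is structured, simple queries become possible,
# e.g. detecting cross-organizational data sharing automatically.
serialized = json.dumps(record, indent=2)
shares_data = bool(record["third_party_recipients"])
print(shares_data)
```

Queries of this kind, run across the published records of many controllers, are what enable the inter-organizational analysis (e.g., uncovering data sharing networks) described in the abstract.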

Live Q&A Recording

This live session has not been uploaded yet. Check back soon, or join the session while it is streaming live.