Paper
March 8, 2021, 14:00–15:45 UTC
Track Two

Paper Session 4

Session Chair: Sanmi Koyejo

Fifty Shades of Grey: In Praise of a Nuanced Approach Towards Trustworthy Design

Lauren Thornton, Bran Knowles, Gordon Blair
View Paper

Abstract

Environmental data science is uniquely placed to respond to essentially complex and fantastically worthy challenges related to arresting planetary destruction. Trust is needed for facilitating collaboration between scientists who may share datasets and algorithms, and for crafting appropriate science-based policies. Achieving this trust is particularly challenging because of the numerous complexities, multi-scale variables, interdependencies and multi-level uncertainties inherent in environmental data science. Virtual Labs—easily accessible online environments provisioning access to datasets, analysis and visualisations—are socio-technical systems which, if carefully designed, might address these challenges and promote trust in a variety of ways. In addition to various system properties that can be utilised in support of effective collaboration, certain features which are commonly seen to benefit trust—transparency and provenance in particular—appear applicable to promoting trust in and through Virtual Labs. Attempting to realise these features in their design reveals, however, that their implementation is more nuanced and complex than it would appear. Using the lens of affordances, we argue for the need to carefully articulate these features, with consideration of multiple stakeholder needs on balance, so that these Virtual Labs do in fact promote trust. We argue that these features not be conceived as widgets that can be imported into a given context to promote trust; rather, whether they promote trust is a function of how systematically designers consider various (potentially conflicting) stakeholder trust needs.

The Sanction of Authority: Promoting Public Trust in AI

Bran Knowles, John T. Richards
View Paper

Abstract

Trusted AI literature to date has focused on the trust needs of users who knowingly interact with discrete AIs. Conspicuously absent from the literature is a rigorous treatment of public trust in AI. We argue that public distrust of AI originates from the under-development of a regulatory ecosystem that would guarantee the trustworthiness of the AIs that pervade society. Drawing from structuration theory and literature on institutional trust, we offer a model of public trust in AI that differs starkly from models driving Trusted AI efforts. This model provides a theoretical scaffolding for Trusted AI research which underscores the need to develop nothing less than a comprehensive and visibly functioning regulatory ecosystem. We elaborate the pivotal role of externally auditable AI documentation within this model and the work to be done to ensure it is effective, and outline a number of actions that would promote public trust in AI. We discuss how existing efforts to develop AI documentation within organizations—both to inform potential adopters of AI components and support the deliberations of risk and ethics review boards—are necessary but insufficient assurance of the trustworthiness of AI. We argue that being accountable to the public in ways that earn their trust, through elaborating rules for AI and developing resources for enforcing these rules, is what will ultimately make AI trustworthy enough to be woven into the fabric of our society.

Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI

Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg
View Paper

Abstract

Trust is a central component of the interaction between people and AI, in that 'incorrect' levels of trust may cause misuse, abuse or disuse of the technology. But what, precisely, is the nature of trust in AI? What are the prerequisites and goals of the cognitive mechanism of trust, and how can we promote them, or assess whether they are being satisfied in a given interaction? This work aims to answer these questions. We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people). This model rests on two key properties: the vulnerability of the user and the ability to anticipate the impact of the AI model's decisions. We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold, and a formalization of 'trustworthiness' (which detaches from the notion of trustworthiness in sociology), and with it concepts of 'warranted' and 'unwarranted' trust. We then present the possible causes of warranted trust as intrinsic reasoning and extrinsic behavior, and discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted. Finally, we elucidate the connection between trust and XAI using our formalization.

Live Q&A Recording

This live session has not been uploaded yet. Check back soon or check out the live session.