Tutorial
March 4, 2021, 16:00 - 17:30 UTC
Join Meeting
Add to Calendar
Practice Track

Responsible AI in Industry: Lessons Learned in Practice

Krishnaram Kenthapadi, Ben Packer, Mehrnoosh Sameki, Nashlie Sephus
Join the Conversation
Check Out Our Tutorial Page

Abstract

Artificial Intelligence (AI) plays an increasingly integral role in determining our day-to-day experiences. Its applications are no longer limited to search and recommendation systems, such as web search and movie and product recommendations; AI is also being used in decisions and processes that are critical for individuals, businesses, and society. With AI-based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the personal and professional implications of AI are far-reaching. Because many factors shape the development and deployment of AI systems, these systems can exhibit different, and sometimes harmful, behaviors. For example, training data often comes from society and the real world, and thus may reflect society's biases and discrimination toward minorities and disadvantaged groups. Minorities are known to face higher arrest rates than the majority population for similar behaviors, so building an AI system without compensating for this is likely to exacerbate that prejudice. These concerns highlight the need for regulations, best practices, and practical tools to help data scientists and ML developers build AI systems that are secure, privacy-preserving, transparent, explainable, fair, and accountable, and to avoid unintended consequences and compliance challenges that can be harmful to individuals, businesses, and society.

In this tutorial, we will first motivate the need for responsible AI, highlighting model explainability, fairness, and privacy from societal, legal, customer/end-user, and model-developer perspectives. We will then focus on real-world applications of these areas and tools, presenting practical challenges and guidelines for using such techniques effectively, along with lessons learned from deploying models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning application domains such as search and recommendation systems, hiring, computer vision, cognition tasks including machine translation, lending, and analytics.
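
To make the fairness theme concrete, the short Python sketch below (illustrative only, not part of the tutorial materials) measures a demographic parity gap: the difference in positive-prediction rates between groups defined by a sensitive attribute. The model outputs and group labels are hypothetical.

import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    # Gap between the highest and lowest positive-prediction rate
    # across the groups defined by the sensitive attribute.
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

# Hypothetical binary decisions (e.g., loan approvals) and a sensitive
# attribute recorded only for auditing purposes.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(demographic_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5

A gap of zero would mean both groups receive positive decisions at the same rate; production fairness toolkits compute this and related metrics in a similar way, alongside the explainability and privacy checks discussed in the session.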

Recorded Live Session

The recording of this live session has not been uploaded yet. Check back soon, or join the session live.