Explainable AI and Trust Issues

The Metaculus Journal

Oct 2, 2022

https://www.metaculus.com/notebooks/9613/explainable-ai-and-trust-issues/

AI researchers exploring ways to increase trust in AI recognize that one barrier to trust is often a lack of explanation. This recognition has led to the development of the field of Explainable Artificial Intelligence (XAI). In their paper "Formalizing Trust in Artificial Intelligence," Jacovi et al. define an AI system as trustworthy with respect to a contract if it is capable of maintaining that contract: a recommender algorithm might be trusted to make good recommendations, and a classification algorithm might be trusted to classify things appropriately. When a classification algorithm makes grossly inappropriate classifications, we feel betrayed, and the algorithm loses our trust. (Of course, a system may be untrustworthy even as we continue to place trust in it.) This essay explores current legal implementations of XAI as they relate to explanation, trust, and human data subjects (e.g., users of Google or Facebook), and forecasts outcomes relevant to XAI. A minimal sketch of the contract framing follows below.
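To make the contract framing concrete, here is a minimal Python sketch. It is an illustration of the idea, not code from Jacovi et al.: trust attaches to an explicit contract (here, a hypothetical "classify with at least 95% accuracy" requirement), and a gross violation of that contract is the "betrayal" described above. All names and the threshold are assumptions chosen for the example.

```python
# Illustrative sketch of trust-as-contract (not an API from Jacovi et al.).
# The contract here is: "classify with at least 95% accuracy on observed data."

from dataclasses import dataclass
from typing import Sequence


@dataclass
class TrustContract:
    description: str
    threshold: float  # minimum acceptable performance (hypothetical value)

    def is_maintained(self, predictions: Sequence[int],
                      labels: Sequence[int]) -> bool:
        """The contract holds if observed accuracy meets the threshold."""
        correct = sum(p == y for p, y in zip(predictions, labels))
        return correct / len(labels) >= self.threshold


contract = TrustContract("make appropriate classifications", threshold=0.95)
predictions, labels = [1, 0, 1, 1], [1, 0, 1, 0]

if contract.is_maintained(predictions, labels):
    print("Contract maintained: the system is trustworthy to this contract.")
else:
    # A violation is the 'betrayal' the essay describes: trust is withdrawn.
    print("Contract violated: trust in the system is undermined.")
```

The point of the sketch is that trust is relative to a stated contract: the same classifier could simultaneously maintain one contract (say, speed) while violating another (accuracy), and it is the violated contract that determines where trust is lost.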
