Can AI explanations skew our causal intuitions about the world? If so, can we correct for that?

From International Center for Computational Logic

Talk by Marko Tesic
Abstract: Explainable Artificial Intelligence provides methods for bringing transparency into black-box artificial intelligence (AI) systems. These methods produce explanations of AI systems’ predictions that aim to increase our understanding of the systems’ behavior and help us calibrate our trust in these systems appropriately. In this talk, I will explore some of the potential undesirable effects of providing explanations of AI systems to human users, and ways to mitigate those effects. I start from the observation that most AI systems capture correlations and associations in data, not causal relationships. Explanations of an AI system’s predictions make those correlations more transparent; they do not, however, make the explained relationships causal. In four experiments, I show how providing counterfactual explanations of AI systems’ predictions unjustifiably changes people’s beliefs about causal relationships in the real world. I also show how such belief change might be prevented, and I hope to open the door to further exploration of the psychological effects of AI explanations on their human recipients.

The talk will take place online via the following link:

https://bbb.tu-dresden.de/b/pio-zwt-smp-aus