Can A.I. Provably Explain Itself? A Gentle Introduction to Description Logics
From the International Center for Computational Logic
Talk by Alisa Kovtunova
- Venue: APB 3027
- Start: November 14, 2019, 13:00
- End: November 14, 2019, 14:30
- Event series: KBS Seminar
The emergence of intelligent systems in self-driving cars, planes, medical diagnosis, insurance, and financial services, among other areas, has shown that when decisions are made or suggested by automated systems, it is essential that an explanation can be provided. The disconnect between how humans make decisions and how machines make them, together with the fact that machines are making more and more decisions for us, has given a new push for transparency in A.I. However, the inner workings of machine learning algorithms remain difficult to understand, and methods for making these models explainable still require expensive human evaluation.
On the other hand, knowledge representation based on description logics allows for describing the environment, specifying constraints on system states, detecting inconsistencies, integrating information from heterogeneous (and possibly incomplete) data sources, and reasoning about the knowledge of an application domain. Because of this conceptual difference from machine learning algorithms, the description logic formalism is much closer to human reasoning and can be adapted to supply the user with the explanations needed for a decision. Additionally, to model dynamic systems, description logics have extensions that enable temporal and probabilistic reasoning.
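To make this concrete, here is a minimal sketch of how a description logic knowledge base supports provable explanations; all concept, role, and individual names below are invented for illustration and are not taken from the talk. A reasoner that derives a conclusion from such axioms can also return a justification: a minimal subset of the axioms from which the conclusion already follows.

```latex
% Toy ALC knowledge base (all names are illustrative).
% TBox axiom: every self-driving car is a vehicle equipped with some sensor.
\[ \mathit{SelfDrivingCar} \sqsubseteq \mathit{Vehicle} \sqcap \exists \mathit{hasSensor}.\mathit{Sensor} \]
% ABox assertion: the individual car_1 is a self-driving car.
\[ \mathit{SelfDrivingCar}(\mathit{car}_1) \]
% Entailed conclusion: car_1 is a vehicle.
\[ \mathcal{KB} \models \mathit{Vehicle}(\mathit{car}_1) \]
% Justification (minimal explanation): both axioms above are needed;
% removing either one breaks the entailment, so this set can be
% presented to the user as a proof-backed explanation.
```

Inconsistency detection works the same way: if the knowledge base entails a contradiction, a justification pinpoints a minimal set of axioms responsible for it, which can then be shown to the user or repaired.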