Title: Model-Based Reasoning for Explainable AI as a Service
As AI systems are increasingly adopted into application solutions, the challenge of providing explanations and supporting interaction with humans is becoming crucial. This is partly to support integrated working styles, in which humans and intelligent systems cooperate in problem-solving, but it is also a necessary step in building trust as humans delegate greater responsibility to such systems.
In this talk we discuss progress made in Explainable Planning, particularly in scenarios involving human-AI teaming, and we present recent advances in using model-based reasoning for designing Explainable AI as a Service.
Daniele Magazzeni is Associate Professor in the Department of Informatics at King’s College London, where he leads the Trusted Autonomous Systems hub and is Co-Director of the Centre for Doctoral Training on Safe and Trusted AI. Dan’s research interests are in Safe, Trusted and Explainable AI, with a particular focus on AI planning for robotics and autonomous systems, and on human-AI teaming. Dan is the President-Elect of the ICAPS Executive Council. He was Conference Chair of ICAPS 2016 and Workshop Chair of IJCAI 2017. He is Co-Chair of the IJCAI-19 Workshop on XAI and Co-Chair of the ICAPS-19 Workshop on Explainable Planning.
Title: Some Shades of Grey! – Interpretability and explanatory capacity of deep neural networks
Thanks to the growing availability of data and the corresponding computing capacity, more and more cognitive tasks can be transferred to computers, which learn independently to improve our understanding, increase our problem-solving capacity, or simply help us remember connections. Deep neural networks in particular clearly outperform traditional AI methods and are therefore finding more and more areas of application in which they support decision-making or even make decisions independently. In many areas, such as autonomous driving or credit allocation, the use of such networks is highly critical and risky due to their “black box” character, since it is difficult to interpret how or why the models arrive at certain results. This talk discusses and presents various approaches that attempt to understand and explain decision-making in deep neural networks.
Andreas is a member of the Management Board of the German Research Center for Artificial Intelligence (DFKI) in Kaiserslautern and Scientific Director of the Smart Data & Knowledge Services research area at DFKI. Since 1993 he has also held the chair for Knowledge-Based Systems in the Department of Computer Science at TU Kaiserslautern. In recent years, Andreas has founded and initiated around a dozen AI start-up companies. He is also a Fellow of the International Association for Pattern Recognition (IAPR) and Chairman of the Flexible Factory Partner Alliance (FFPA). He advises academic institutions, research programs and ministries in Germany and abroad. During his career, he has received many awards, including a Distinguished Honorary Professorship (tokubetsu eiyo kyoju) at Osaka Prefecture University, an honor bestowed on only five scientists in 135 years. Before joining DFKI and TU Kaiserslautern, he worked at IBM, Siemens Research, Xerox PARC and the University of Stuttgart.
Belén Díaz Agudo
Title: Mapping the challenges and opportunities of CBR for eXplainable AI
The problem of explainability in Artificial Intelligence is not new, but the rise of autonomous intelligent systems has increased the need to understand how an intelligent system achieves its solution, makes a prediction or recommendation, or reasons to support a decision, in order to increase transparency and users’ trust in these systems. The CBR research community has a great opportunity to provide general methods of self-understanding and introspection for other AI systems, not necessarily case-based ones. CBR provides a methodology for reusing experiences in interactive explanations and can exploit memory-based techniques to generate explanations for different AI techniques and application domains. This talk will review the state of the art in XCBR and the synergies with the XAI community, and will examine underlying issues such as confidence, transparency, justification, interfaces, personalization and the evaluation of explanations. It will include a review of the lessons learnt at the XCBR workshop and of the challenges and promising research lines for CBR research related to the explanation of intelligent systems.
Belén Díaz Agudo is a Professor of Computer Science at the Complutense University of Madrid, Spain, where she obtained her PhD in Computer Science in 2002. She has developed her teaching and research in the Department of Software Engineering and Artificial Intelligence of the UCM. Her research in the Group for Artificial Intelligence Applications (GAIA) focuses on case-based reasoning (CBR), recommender systems and explanations for intelligent systems. More information on publications and projects at: http://gaia.fdi.ucm.es/people/belen/