XCBR: Case-Based Reasoning for the Explanation of Intelligent Systems

After the success of the first XCBR workshop at ICCBR 2018 (http://gaia.fdi.ucm.es/events/xcbr/), we would like to propose a second workshop for ICCBR 2019. The success of intelligent systems has led to an explosion of new autonomous systems with capabilities such as perception, reasoning, decision support and self-actioning. Despite the tremendous benefits of these systems, they operate as black boxes, and their effectiveness is limited by their inability to explain their decisions and actions to human users. The problem of explainability in Artificial Intelligence is not new, but the rise of autonomous intelligent systems has created the need to understand how these systems reach a solution, make a prediction or recommendation, or reason to support a decision, in order to increase users' trust in them. Additionally, the European Union's regulation on the protection of natural persons with regard to the processing of personal data includes a directive on the need for explanations to ensure fair and transparent processing in automated decision-making systems.
The goal of Explainable Artificial Intelligence (XAI) is "to create a suite of new or modified machine learning techniques that produce explainable models that, when combined with effective explanation techniques, enable end users to understand, appropriately trust, and effectively manage the emerging generation of Artificial Intelligence (AI) systems."
The aim of the XCBR workshop is to provide a forum for discussing trends, research issues, and practical experiences in the use of Case-Based Reasoning (CBR) methods to add explanations to a variety of AI techniques through reasoning-by-example. The CBR community has long experience with interactive explanations and with exploiting memory-based techniques to generate them.

Research contributions submitted to the workshop will be related to areas that include, but are not limited to, the following:

  • AI explanation methods based on CBR
  • Visualization of case-based explanations
  • Case-based explanation of learning techniques
  • Case-based explanation of planning
  • Case-based explanation of decision-making techniques
  • Case-based explanation of the massive data obtained from sensor systems, Internet of Things, or wearables
  • Combination of existing AI models and CBR to provide explanation capabilities
  • Application of Case-based explanation capabilities to different domains
  • Lessons learned in XCBR investigations
  • Challenge tasks for XCBR systems in novel AI techniques
  • Measures for assessing case-based explanations
  • User interaction for explanations

Belén Díaz Agudo
Department of Software Engineering and Artificial Intelligence
School of Computing
Complutense University of Madrid
28040 Madrid, Spain

Juan A. Recio García
Department of Software Engineering and Artificial Intelligence
School of Computing
Complutense University of Madrid
28040 Madrid, Spain

Ian Watson
Department of Computer Science
University of Auckland
New Zealand