Our Publications
Explore our collection of white papers, research articles, and reports that highlight the progress and findings of the FIN-X project in explainable AI for the financial sector.
Identifying XAI User Needs: Gaps between Literature and Use Cases in the Financial Sector
Academic Paper
Author(s): Jenia Kim, Henry Maathuis, C.A.G.M van Montfort, Danielle Sent
Published in: Proceedings of the 2nd Workshop on Responsible Applied Artificial Intelligence
Abstract: One aspect of a responsible application of AI is ensuring that the operation and outputs of an AI system are understandable for non-technical users, who need to consider its recommendations in their decision making. The importance of explainable AI (XAI) is widely acknowledged; however, its practical implementation is not straightforward. In particular, it is still unclear what non-technical users require from explanations, i.e., what makes an explanation meaningful. In this paper, we synthesize insights about meaningful explanations from a literature study and two use cases in the financial sector. We identify 30 components of meaningfulness in the XAI literature. In addition, we report three aspects that were central to the users in our use cases, but are not prominent in the literature: actionability, coherent narratives and context. Our results highlight the importance of narrowing the gap between theoretical and applied Responsible AI.
Human-Centered Evaluation of Explainable AI Applications: a Systematic Review
Academic Paper
Author(s): Jenia Kim, Henry Maathuis, Danielle Sent
Published in: Frontiers in Artificial Intelligence
Abstract: Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there has been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of affairs in human-centered XAI evaluation. We reviewed 73 papers across various domains where XAI was evaluated with users. These studies assessed what makes an explanation “good” from a user's perspective, i.e., what makes an explanation meaningful to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation, based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach in human-centered explainability.
Checklist for Evaluating the Meaningfulness of AI System Explanations for Internal Users
White Paper
Author(s): FIN-X Research Team
Published in: AGConnect, Hogeschool Utrecht
Summary: This white paper provides a practical checklist for evaluating explainable AI systems with internal users. A publication link will follow soon.
Download English Version (PDF)
Download Dutch Version (PDF)
Guidelines for User Interface Design of Explainable AI
White Paper
Author(s): FIN-X Research Team
Published in: Hogeschool Utrecht
Summary: This guide was developed for designers and developers of AI systems, with the goal of ensuring that these systems are sufficiently explainable. Sufficient here means that the system meets the legal requirements of the AI Act and the GDPR and that users can use it properly. Explainability of decisions is an important requirement in many systems and even an important principle for AI systems [HLEG19]. In many AI systems, explainability is not self-evident, and AI researchers expect that the challenge of making AI explainable will only increase. On the one hand, this stems from the applications: AI will be used more and more often, for larger and more sensitive decisions. On the other hand, organizations are building ever better models, for example by using more diverse inputs. With more complex AI models, it is often less clear how a decision was made. Organizations that deploy AI must take users' need for explanations into account, and systems that use AI should be designed to provide the user with appropriate explanations. In this guide, we first explain the legal requirements for the explainability of AI systems, which come from the GDPR and the AI Act. Next, we describe how AI is used in the financial sector and elaborate on one problem in detail. For this problem, we then show how the user interface can be modified to make the AI explainable. These designs serve as prototypical examples that can be adapted to new problems. This guidance is based on the explainability of AI systems in the financial sector, but the advice can also be applied in other sectors.
Download English Version (PDF)
Download Dutch Version (PDF)