Our Approach
The FIN-X project followed a design science approach, focusing on creating practical tools and guidelines to make AI outcomes explainable and trustworthy for internal users in the financial sector. This approach emphasizes collaboration with industry partners to ensure that solutions meet real-world needs and are grounded in current knowledge and practice.
Design Science Methodology
Design science research is a problem-solving paradigm that involves the iterative development and refinement of artifacts (tools, guidelines, and frameworks) aimed at addressing specific practical issues. In the FIN-X project, these artifacts are designed to help internal users, such as risk underwriters and claims handlers, understand AI-driven decisions and explain them to customers effectively and confidently.
Key Phases of Our Approach
- Needs Assessment: Engaging with internal users and industry experts to understand their challenges and requirements for explainability in AI systems.
- Knowledge Integration: Reviewing existing research, guidelines, and best practices in explainable AI to integrate well-established knowledge into our approach.
- Artifact Development: Creating tools and instruments that translate complex AI outputs into comprehensible explanations for internal users (a simple sketch of such a translation follows this list).
- Prototyping and Iteration: Developing prototypes of explainability tools and refining them based on user feedback from real-world testing scenarios.
- Evaluation: Assessing the effectiveness of these tools through qualitative and quantitative studies to ensure they meet the needs of internal users and help foster trust in AI decisions.
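To make the artifact development phase concrete, below is a minimal, self-contained sketch of the kind of translation such a tool performs: turning a model score and per-feature contributions into a short plain-language explanation that an underwriter or claims handler could pass on to a customer. Everything here is illustrative; the feature names, weights, and the explain_decision helper are assumptions for this example rather than the actual FIN-X prototype, and a production system would derive the contributions from an attribution method such as SHAP applied to the deployed model.

```python
# Hypothetical sketch: render a model score and its top feature contributions
# as a plain-language explanation for an internal user (e.g. a claims handler).
# Feature names, values, and impacts below are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Contribution:
    feature: str   # human-readable feature name
    value: float   # the applicant's value for this feature
    impact: float  # signed contribution to the model score


def explain_decision(score: float, threshold: float,
                     contributions: list) -> str:
    """Turn a score and its feature contributions into a short explanation."""
    decision = "approved" if score >= threshold else "declined"
    # Rank factors by absolute impact so the most influential come first.
    top = sorted(contributions, key=lambda c: abs(c.impact), reverse=True)[:3]
    lines = [f"The application was {decision} "
             f"(score {score:.2f}, threshold {threshold:.2f})."]
    for c in top:
        direction = "raised" if c.impact > 0 else "lowered"
        lines.append(f"- {c.feature} ({c.value:g}) {direction} "
                     f"the score by {abs(c.impact):.2f}.")
    return "\n".join(lines)


if __name__ == "__main__":
    # Illustrative values; a real system would compute these impacts with an
    # attribution method (e.g. SHAP) run against the production model.
    contribs = [
        Contribution("Years of claim-free driving", 8, +0.21),
        Contribution("Annual mileage", 25000, -0.15),
        Contribution("Vehicle age", 12, -0.09),
    ]
    print(explain_decision(score=0.62, threshold=0.50, contributions=contribs))
```

A deployed tool would additionally localize the wording and log each explanation alongside the decision for auditability, but the core pattern, ranking signed contributions and verbalizing the top few, stays the same.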
Project Outcomes
The ultimate goal of the FIN-X project was to produce practical, human-centered tools that help internal users communicate AI-driven decisions transparently. These outcomes include prototypes and publications tailored to the specific needs of the financial sector, with the aim of improving customer trust in AI-powered processes such as claim assessments and loan approvals.
Continuous Collaboration
Throughout the project, our team collaborated with academic, industry, and regulatory partners to keep our approach aligned with the latest advancements and regulatory standards in AI explainability. This collaboration supported an inclusive and practical development process, in which tools were iteratively refined based on feedback from all stakeholders.