Programme
Detailed programme
Session 1 (Chair TBD)
Workshop presentation (Maëlle Moranges)
From Explainability to the Explanation of AI: A Situated Perspective Embodied by and for Professions (Ranya Bennani, Myriam Frejus and Marc-Eric Bobillier-Chaumon)
Taking FATES properties into account in MLOps: perspectives and ambitions (Mireille Blay-Fornarino, Jean-Michel Bruel, Sébastien Mosser and Frédéric Precioso)
Explainability of time series: a case study on urban security data (Matthieu Delahaye, Lina Fahed, Florent Castagnino and Philippe Lenca)
Reconciling performance and explainability in breast cancer classification: an approach based on the self-organizing map (Yasser Idris Dilmi, Mohamed Djallel Dilmi, Faten Chaieb-Chakchouk and Ahmad Tay)
Session 2 (Chair TBD)
MIIC-SR: From Complex Data to Structural Causal Models (Nadir Sella, Arefe Asadi, Myriam Tami and Louis Verny)
Perspectives for Direct Interpretability in Multi-Agent Deep Reinforcement Learning (Yoann Poupart, Aurélie Beynier and Nicolas Maudet)
Towards a partial coverage of neural network explanations by approximation networks (Mathieu Brassart and Laurent Simon)
Invited speaker: Katrien Verbert
Empowering users through interactive and hybrid explanations
Abstract
Despite the long history of work on explanations in the Machine Learning, AI, and Recommender Systems literature, current efforts face unprecedented difficulties: contemporary models are more complex and less interpretable than ever. As such models are used in many day-to-day applications, justifying their decisions to end users will only become more crucial. In addition, several researchers have voiced the need for interaction with explanations as a core requirement for user empowerment. Such interaction methods can enable users to steer models with input and feedback, and can support better model understanding. In this talk, I will present our work on interactive explanation methods tailored to the needs of end users, such as healthcare professionals and job seekers. In addition, I will present our work on combining data-centric and model-centric explanations to empower end users in refining predictive models. Our work emphasizes interactive, hybrid explanation methods that not only improve model understanding but also enhance the user’s ability to steer and improve AI models using domain knowledge.
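To make the idea of hybrid explanations concrete, here is a minimal sketch, not taken from the talk: it pairs a model-centric view (permutation feature importance) with a data-centric view (the most similar training cases) using scikit-learn. The model, dataset, and all specific choices below are illustrative assumptions, not the speaker's method.

# Minimal sketch of a hybrid explanation: a model-centric view plus a
# data-centric view for the same prediction. Illustrative only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Model-centric: which features most affect the model's accuracy overall?
imp = permutation_importance(model, X_test, y_test,
                             n_repeats=10, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1][:3]:
    print(f"{data.feature_names[i]}: {imp.importances_mean[i]:.3f}")

# Data-centric: which training cases most resemble the instance at hand?
nn = NearestNeighbors(n_neighbors=3).fit(X_train)
_, idx = nn.kneighbors(X_test[:1])
print("Prediction:", model.predict(X_test[:1])[0])
print("Labels of most similar training cases:", y_train[idx[0]])

Showing both views side by side lets a user cross-check the model's globally important features against concrete, familiar cases, and feedback on either view could in principle be used to steer the model.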
Biography
Katrien Verbert is a professor in the Augment research group at the Department of Computer Science of KU Leuven. She obtained a doctoral degree in Computer Science in 2008 at KU Leuven, Belgium, and was a postdoctoral researcher of the Research Foundation – Flanders (FWO) at KU Leuven. She was an Assistant Professor at TU Eindhoven, the Netherlands (2013–2014) and at Vrije Universiteit Brussel, Belgium (2014–2015). Her research interests include visualisation techniques, recommender systems, explainable AI, and visual analytics. She has been involved in several European and Flemish projects on these topics, including the EU ROLE, STELLAR, STELA, ABLE, LALA, PERSFO, Smart Tags and BigDataGrapes projects. She is also involved in the organisation of several conferences and workshops (program chair IUI 2025, program chair RecSys 2024, general chair IUI 2021, program chair LAK 2020, general chair EC-TEL 2017, program chair EC-TEL 2016, workshop chair EDM 2015, program chair LAK 2013, program co-chair of the EdRecSys, VISLA and XLA workshop series, DC chair IUI 2017, and DC chair LAK 2019).
https://wms.cs.kuleuven.be/cs/onderzoek/augment/katrien-verbert
Session 3 (Chair TBD)
X-Train: eXplanations for Training – Using explanations for multimodal training of Transformers (Meghna P Ayyar, Jenny Benois-Pineau and Akka Zemmari)
Attention and Beyond: Explainability Techniques for Vision Transformers (Wadie El Amrani)
Round table: slides