Francesco Bodria (Scuola Normale Superiore)
When: Oct 27th, 2021, 11:00–11:45 AM
Where: Google Meet link
Description
Black box AI systems for automated decision making, often based on machine learning over (big) data, map a user’s features into a class or a score without exposing the reasons why.
This is problematic not only because of the lack of transparency, but also because the algorithms may inherit biases from human prejudices and collection artifacts hidden in the training data, which can lead to unfair or wrong decisions.
The future of AI lies in enabling people to collaborate with machines to solve complex problems.
Like any efficient collaboration, this requires good communication, trust, clarity, and understanding.
Explainable AI addresses these challenges; different AI communities have studied the topic for years, producing different definitions, evaluation protocols, motivations, and results.
This lecture provides a reasoned introduction to the work on Explainable AI (XAI) to date and describes the challenges and current achievements of the ERC project “XAI: Science and technology for the eXplanation of AI decision making”.
It will motivate the need for XAI in real-world, large-scale applications, present state-of-the-art techniques and best practices, and discuss the many open challenges.
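To make the black-box problem concrete, below is a minimal Python sketch of one common XAI idea, a local surrogate explanation in the spirit of LIME: an opaque model assigns a score, and a simple linear model fitted on perturbed neighbors of one instance recovers locally faithful reasons. This sketch is illustrative only, not taken from the talk or the XAI project; the dataset, model, perturbation scale, and kernel width are all assumptions chosen for brevity.

# Illustrative sketch (not from the talk): explaining one prediction of a
# black-box classifier with a local linear surrogate, LIME-style.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

data = load_breast_cancer()
X, y = data.data, data.target

# The black box: maps features to a score without exposing the reasons why.
black_box = RandomForestClassifier(random_state=0).fit(X, y)
instance = X[0]
score = black_box.predict_proba(instance.reshape(1, -1))[0, 1]

# Perturb the instance and query the black box on the neighborhood.
rng = np.random.default_rng(0)
noise = rng.normal(scale=X.std(axis=0) * 0.1, size=(500, X.shape[1]))
perturbed = instance + noise
targets = black_box.predict_proba(perturbed)[:, 1]

# Weight neighbors by proximity (Gaussian kernel on standardized distance).
distances = np.linalg.norm((perturbed - instance) / X.std(axis=0), axis=1)
weights = np.exp(-(distances ** 2) / 2.0)

# Fit an interpretable surrogate that is faithful only near the instance.
surrogate = Ridge(alpha=1.0).fit(perturbed, targets, sample_weight=weights)

# The surrogate's largest coefficients act as local, human-readable reasons.
top = np.argsort(np.abs(surrogate.coef_))[::-1][:3]
for i in top:
    print(f"{data.feature_names[i]}: {surrogate.coef_[i]:+.4f}")
print(f"black-box score: {score:.3f}")

Approaches of this kind trade fidelity for interpretability: the surrogate is only trusted in the neighborhood of the chosen instance, which is one of the open issues the lecture discusses.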
Speaker bio
Francesco Bodria was born in 1995 in Parma, Italy. He earned a bachelor’s degree in Physics from the University of Parma and, in 2019, a master’s degree in Physics of Complex Systems from the University of Turin, with a thesis on Explainability Methods for Natural Language Processing: Applications in Sentiment Analysis.
He is currently a Ph.D. student working on eXplainable AI algorithms in the ERC XAI group led by Fosca Giannotti.