When: Jan 27, 2021 – 11:00–11:45 AM
Where: Google Meet link
In the foreseeable future, a fundamental challenge of Artificial Intelligence will be the need to explain, in a human-comprehensible manner, the workings of a black-box model. There are currently many different approaches that tackle the explainability problem from the feature-importance point of view; the talk reviews the mathematical framework behind the approaches most widely used in the scientific community, e.g., LIME and SHAP. We then present a novel framework that aims to bridge the gap between data-driven optimization and high-level human knowledge. The approach allows human understanding of the relevance of the input features to be incorporated into training. The basic idea is to extend the empirical loss with a regularization term derived from constraints that encode a priori knowledge about feature importance. We provide preliminary experimental results on fairness.
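The regularized objective described above might be sketched as follows. Note that the function names, the squared penalty on weight magnitudes, and the linear model are illustrative assumptions for this sketch, not the speakers' actual formulation:

```python
import numpy as np

def regularized_loss(w, X, y, importance_prior, lam=0.1):
    """Empirical loss plus an a-priori feature-importance penalty.

    Illustrative sketch only: the penalty form is an assumption,
    not the method presented in the talk.
    """
    # Empirical loss: mean squared error of a linear model.
    preds = X @ w
    empirical = np.mean((preds - y) ** 2)
    # Regularizer: push per-feature weight magnitudes toward the
    # importance scores supplied as prior knowledge (e.g., by a
    # domain expert encoding fairness constraints).
    penalty = np.sum((np.abs(w) - importance_prior) ** 2)
    return empirical + lam * penalty

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, 0.0, -1.0]) + 0.1 * rng.normal(size=50)
# Prior: features 0 and 2 are deemed relevant, feature 1 is not.
prior = np.array([1.0, 0.0, 1.0])
w = np.array([0.9, 0.05, -0.95])
print(regularized_loss(w, X, y, prior))
```

A model whose learned weights agree with the prior incurs a smaller penalty than one that relies on features the prior marks as irrelevant, which is the mechanism the regularization term exploits.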