[Mar 31st 2021] LabMeeting: Logic Explainer Networks

Gabriele Ciravegna (University of Siena)

When: Mar 31, 2021, 11:00–11:45 AM
Where: Google Meet link
Description

Despite the increasing popularity of deep learning applications, the employment of deep neural networks in real-world domains is still limited and strongly criticized due to their lack of interpretability. In particular, providing a comprehensible explanation for a certain classification task can be crucial in decision-support contexts. To this end, rule-based explanations expressed in a high-level language are particularly relevant. In this paper we consider a general framework for XAI to show how a mindful design may lead to interpretable deep learning models called Logic Explainer Networks (LENs), which only require the input and the output of the model to be semantically meaningful predicates. LENs can provide both local and global explanations of the predictions made by the LEN itself. Explanations are given as first-order logic (FOL) formulas expressed in terms of the input concepts. As a special case, we describe and discuss three out-of-the-box LEN-based models. Experimental results show that, on average, LENs not only generalize better but also provide more meaningful explanations than established white-box models such as decision trees and Bayesian rule lists.
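
To give a flavor of the kind of explanations discussed in the talk, the sketch below shows, in plain NumPy, how a local FOL rule could be read off a concept-based classifier and how local rules could be aggregated into a global, class-level one. This is a hypothetical illustration under assumed concept names, weights, and thresholds, not the actual LEN implementation.

```python
# Hypothetical sketch: extracting FOL-style explanations from input concepts.
# Not the authors' code; concept names, weights, and thresholds are made up.
import numpy as np

CONCEPTS = ["has_wings", "has_beak", "flies", "has_fur"]

def local_explanation(concept_activations, weights, threshold=0.5):
    """Conjoin the concepts that are relevant to the prediction,
    booleanizing each activation against the threshold."""
    literals = []
    for name, act, w in zip(CONCEPTS, concept_activations, weights):
        if abs(w) < 0.1:                 # concept irrelevant here: skip it
            continue
        is_true = act >= threshold       # booleanize the concept activation
        literals.append(name if is_true else f"~{name}")
    return " & ".join(literals) if literals else "True"

def global_explanation(local_rules):
    """Aggregate distinct local rules into a class-level disjunction."""
    return " | ".join(f"({r})" for r in sorted(set(local_rules)))

# Toy example: two samples classified as "bird".
acts = [np.array([0.9, 0.8, 0.7, 0.1]), np.array([0.9, 0.9, 0.2, 0.0])]
w = np.array([1.2, 0.9, 0.6, -0.05])     # per-concept relevance (assumed)
rules = [local_explanation(a, w) for a in acts]
print(global_explanation(rules))
# -> (has_wings & has_beak & flies) | (has_wings & has_beak & ~flies)
```

In the actual models the relevance of each concept is learned rather than hand-set, but the output has the same shape: human-readable FOL formulas over semantically meaningful predicates.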

Category: Seminars