Nicola Picchiotti (University of Pavia)
When: September 22nd, 2021, 11:00–11:45 AM
Where: Google Meet link
Description
The “black box” nature of deep neural network models often limits their safe application, since the reliability of the model predictions can be affected by incompleteness in the formalization of the optimization problem. To frame and systematize the topic, in the first chapter we define a set of requirements an AI model has to satisfy to be considered reliable, and we provide a checklist of practical requisites in the form of specific questions, in order to bridge the gap between abstract requirements (e.g., those reported in regulations) and practical problems. Chapter 2 reports a more theoretical study, in which we describe two novel methodologies in the explainability field that exploit, in an innovative way, the well-known concept of feature importance. Chapters 3 and 4 report the results of the analyses carried out to discover the genetic variability explaining the different degrees of severity among patients affected by COVID-19. The challenge was that, unlike Mendelian diseases, where a single variant can be responsible for the disease, complex genetic diseases such as COVID-19 are characterized by a potentially large number of both rare and common variants contributing together in a cooperative way.
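For readers unfamiliar with the feature-importance concept that the explainability chapters build on, below is a minimal sketch of the standard permutation-importance technique on synthetic data. It illustrates the general idea only, not the speaker's novel methodologies; the model, dataset, and parameters are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real feature matrix (illustrative only).
X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
baseline = accuracy_score(y_te, model.predict(X_te))

rng = np.random.default_rng(0)
for j in range(X_te.shape[1]):
    X_perm = X_te.copy()
    # Shuffling one column breaks its association with the target;
    # the resulting drop in accuracy is that feature's importance.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    drop = baseline - accuracy_score(y_te, model.predict(X_perm))
    print(f"feature {j}: importance ~ {drop:.3f}")
```

In practice the shuffle is repeated several times per feature and the drops are averaged to reduce noise; this baseline is the starting point that the talk's methodologies extend.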