Gabriele Ciravegna (DIISM, University of Siena)
Jul 3, 2019 – 11:00 AM
DIISM, Artificial Intelligence laboratory (room 201), Siena SI
In the last few years we have seen remarkable progress from the cultivation of the idea of expressing the interactions of intelligent agents with the environment through the mathematical notion of constraint. However, this progress has mostly concerned providing solutions consistent with a given set of constraints, whereas learning 'new' constraints to adapt to the environment is still an open challenge. In this paper we propose a novel approach to the learning of constraints based on information-theoretic principles. The basic idea consists in maximizing the transfer of information between the task functions and a set of learnable constraints, implemented using neural networks subject to L1 regularization. This process leads to the unsupervised development of new constraints that are fulfilled in different sub-portions of the input domain.
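As an illustration of the kind of objective described above, the following is a minimal sketch, not the paper's actual formulation: the candidate constraints are implemented as a sigmoidal layer over the task functions' outputs, and a mutual-information-style surrogate (marginal entropy of the activations minus their conditional entropy) is combined with an L1 penalty on the constraint weights. All names and the specific index are our assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def constraint_outputs(F, W, b):
    """Activations phi(F) in (0, 1) of the learnable constraints,
    for task-function outputs F of shape (n_examples, n_tasks)."""
    return sigmoid(F @ W + b)

def binary_entropy(p):
    """Average binary entropy (in nats) of activation probabilities."""
    eps = 1e-12
    return -np.mean(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))

def info_l1_objective(F, W, b, lam=0.01):
    """Hypothetical information-transfer surrogate with L1 sparsity:
    - entropy of the mean activation rewards constraints that are used
      across different sub-portions of the domain;
    - subtracting the per-example entropy rewards constraints that are
      nearly 0/1 (decisively fulfilled or not) on each example;
    - the L1 term drives most constraint weights to zero."""
    phi = constraint_outputs(F, W, b)
    h_marginal = binary_entropy(phi.mean(axis=0))
    h_conditional = binary_entropy(phi)
    return h_marginal - h_conditional - lam * np.abs(W).sum()

# Toy setup: 100 examples, 3 task functions, 2 candidate constraints.
F = rng.normal(size=(100, 3))
W = rng.normal(scale=0.1, size=(3, 2))
b = np.zeros(2)
score = info_l1_objective(F, W, b)
```

In a full implementation this score would be maximized by gradient ascent over `W` and `b` jointly with the task functions; the sketch only shows the shape of the objective.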
In addition, we define a simple procedure that explains the behaviour of the newly devised constraints in terms of First-Order Logic formulas, thus extracting novel knowledge on the relationships between the original tasks. We support the proposed approach with an experimental evaluation, in which we also explore the regularization effects introduced by the proposed information-based index.
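To give a flavour of how a learned constraint might be read back as a logic formula, here is a hypothetical sketch (not the paper's actual procedure): since L1 regularization drives most weights to zero, the few surviving weights name the tasks involved in a constraint, and their sign determines whether a task appears positively or negated in a conjunctive clause. The threshold `tol` and the clause shape are our assumptions.

```python
def explain_constraint(weights, task_names, tol=0.25):
    """Render one learned constraint's weight vector as a FOL-style
    conjunction over the task predicates. Weights with magnitude below
    `tol` are treated as pruned by the L1 penalty."""
    literals = []
    for w, name in zip(weights, task_names):
        if abs(w) > tol:
            literals.append(name if w > 0 else f"NOT {name}")
    return " AND ".join(literals) if literals else "TRUE"

# Example: a constraint tying the first task positively and the second
# negatively, with the third pruned away by sparsity.
formula = explain_constraint([1.2, -0.9, 0.01], ["A", "B", "C"])
# formula == "A AND NOT B"
```

A real extraction procedure would also have to decide where in the input domain each formula holds, since the constraints are fulfilled only on sub-portions of it.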