Integrating logic and learning

The integration of deep learning and logical reasoning is still an open research problem, and it is considered key to the development of truly intelligent agents. On the one hand, deep learning has obtained impressive results in many fields of artificial intelligence, such as computer vision and natural language processing. On the other hand, truly intelligent behavior of an agent acting in a complex environment is likely to require some form of higher-level symbolic inference.

Research directions

First-Order Logic (FOL) formulas have been shown to suitably express the knowledge available to define a given learning problem, in particular in multi-task classification problems where a set of unknown (task) functions has to be learned. In this framework, the logical formulas can be converted into differentiable functions by means of a chosen t-norm fuzzy logic. The task functions can be regarded as logical predicates and are generally implemented as (deep) multi-layer perceptrons. This allows us, on the one hand, to exploit state-of-the-art deep architectures and, on the other hand, to embed interpretable symbolic relations among the task functions into the optimization problem.
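As a concrete illustration, the fragment below is a minimal sketch (not the actual LYRICS implementation) of how a rule such as ∀x: A(x) → B(x) could be compiled into a differentiable penalty using the residuum of the product t-norm, with the predicates A and B realized as small multi-layer perceptrons. All names, the choice of t-norm, and the mean-based approximation of the universal quantifier are illustrative assumptions.

```python
# Sketch: turning the FOL rule  forall x: A(x) -> B(x)  into a differentiable
# constraint loss, with predicates implemented as small MLPs (PyTorch).
# Names and design choices here are illustrative, not taken from any framework.

import torch
import torch.nn as nn

class Predicate(nn.Module):
    """A task function viewed as a fuzzy predicate: outputs a truth degree in [0, 1]."""
    def __init__(self, in_dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def product_implication(a, b, eps=1e-7):
    # Residuum of the product t-norm: (a => b) = 1 if a <= b, else b / a.
    return torch.where(a <= b, torch.ones_like(a), b / (a + eps))

def forall(truth_degrees):
    # Universal quantifier approximated by the mean truth degree over the sample.
    return truth_degrees.mean()

A = Predicate(in_dim=4)
B = Predicate(in_dim=4)

x = torch.randn(128, 4)                    # unsupervised sample of the input domain
rule_truth = forall(product_implication(A(x), B(x)))
constraint_loss = 1.0 - rule_truth         # penalizes violations of A(x) -> B(x)

# In practice this term would be added to the usual supervised losses of A and B
# and the whole objective minimized jointly by gradient descent.
constraint_loss.backward()
```

The key point of the sketch is that the logical structure of the formula survives as an interpretable term in the loss, while the predicates themselves remain ordinary trainable networks.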

The main topics still under investigation are reported below, together with some references from the key publication list.


Talks
  • LYRICS: a unified framework for learning and inference with constraints, IDA, Czech Technical University, Prague, January 2019
  • Integrating deep learning and reasoning with First Order Fuzzy Logic, DTAI, KU Leuven, Leuven, September 2018
  • Characterization of the Convex Łukasiewicz Fragment for Learning from Constraints, AAAI 2018, New Orleans, USA, January 2018
  • Learning Łukasiewicz Logic Fragments by Quadratic Programming, ECML-PKDD 2017, Skopje, Macedonia, September 2017
  • Learning from Logical Constraints by Quadratic Optimization, Fondazione Bruno Kessler (FBK), Trento, June 2017


Key Publications