Learning From Constraints

When water moves, a mollusk opens its shell; when something touches its membrane, the shell closes. Animals coordinate perception with action, which is especially clear during hunting. In children's cognitive development, the transition from sensorimotor behavior to more abstract representations of reality follows a stage-based principle. Is that the exclusive outcome of biology, or is it dictated by general principles of optimization, so that stages are nothing but the natural way to break up the complexity of learning? Most machine learning algorithms rely on simple protocols (supervised, reinforcement, unsupervised, and semi-supervised schemes), in which a considerable number of relevant interactions are missed. What about the crucial role of the teacher? The clear identification of hierarchical labels in learning environments can be thought of as a sign of intermediate goals for the agent, and the formalization of such constraints might directly yield insights into the birth of deep architectures.

Members of the project

Former members

Research directions and work-packages
  • Learning from constraints in single-task learning: Focus on single-task learning and on the reformulation of kernel machines within the framework of learning from constraints. Classification and regression are reformulated by expressing the collection of examples as constraints. In addition, the problem in which targets are attached to infinite subsets of the domain is investigated (an illustrative sketch is given after this list). Here is a more detailed list of topics to be addressed:
    1. reformulation of learning from finite collection of examples in the framework of learning from constraints;
    2. constraints in the input domain expressed on infinite subsets (e.g., taking any input pair (x, y) and imposing that the target is “+1” for all x with 0 < x < 3);
    3. algorithmic issues
  • Learning from convex constraints: Convex constraints in multi-task learning are investigated from a general point of view, with emphasis on polytopes (an illustrative sketch of probabilistic normalization is given after this list). Here is a detailed list of topics to be addressed:
    1. the case of linear constraints A f(x) = b
    2. the case of f(x) > 0
    3. the case of general polytopes
    4. probabilistic normalization of classifiers
    5. benchmarks based on artificial examples
  • Learning from constraints by quasi-local kernels: Investigation of the role of the kernel in “learning from constraints”. In particular, the emphasis is on the difference between local and global kernels and on how to come up with mixed solutions (an illustrative kernel-mixture sketch is given after this list). Here is a detailed list of topics to be addressed:
    1. why neither local nor global kernels are fully appropriate in the framework of learning from constraints;
    2. Mixture of kernels and kernel learning issues
    3. Expansions of local kernels (e.g., expansion of a Gaussian with σ, 2σ, 4σ, …)
    4. benchmarks based on artificial examples
  • Bridging logic and kernel machines: How can kernel machines and FOL be bridged in a multi-task environment? (An illustrative t-norm translation sketch is given after this list.) Here is a detailed list of topics to be addressed:
    1. Learning with constraints by kernel machines
    2. From FOL clauses to real-valued constraints
    3. Enforcing constraints by penalties
    4. Benchmarks based on artificial examples
  • Learning from constraints and active teaching: Study of stage-based learning, which starts from induction and continues with higher-level constraint satisfaction (an illustrative continuation sketch is given after this list). Here is a list of topics to be developed:
    1. Two stage-based learning: perceptual level and abstract level
    2. Order relations over a set of constraints (the “easy” ones come first)
    3. Gradient methods
    4. Continuation methods
  • Learning of constraints: Analysis of the development of constraints, which are not necessarily given in advance but are learned just like other functions, from examples and from other given constraints (an illustrative sketch is given after this list). Here is a list of topics to be developed:
    1. General study of the case in which the constraints are just learnable functions;
    2. Learning of constraints once a parametric representation of the constraints is given
    3. The role of phases in learning constraints
    4. Learning constraints and deep architectures
  • Forcing constraints in multilayer networks: Learning from constraints based on multilayer networks instead of the variational approach that gives rise to kernel-like machines (an illustrative backprop sketch is given after this list). Here is a list of topics to be developed:
    1. Algorithmic issues – backprop-like algorithms;
    2. Experiments for problems of coherent classification from multi-view patterns
  • Coherent decision-making
  • Semantic-based regularization applied to COIL 
  • Enforcing linear constraints in portfolio asset allocation
  • Semantic-based regularization applied to Wikipedia document classification
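
As a concrete illustration of the first work-package, the following sketch (illustrative Python with made-up data, not the project's code) reformulates a small supervised problem together with a constraint on an infinite subset of the input domain: the pointwise examples and a sampled discretization of the interval constraint f(x) = +1 for 0 < x < 3 are enforced jointly through a penalty-based (regularized least squares) kernel expansion.

    import numpy as np

    def gaussian_kernel(a, b, sigma=1.0):
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))

    # Supervised examples act as pointwise constraints.
    x_sup = np.array([-2.0, -1.0, 4.0, 5.0])
    y_sup = np.array([-1.0, -1.0, -1.0, -1.0])

    # The "infinite" constraint f(x) = +1 on 0 < x < 3, discretized by sampling.
    x_reg = np.linspace(0.1, 2.9, 15)
    y_reg = np.ones_like(x_reg)

    # One kernel expansion f(x) = sum_i alpha_i k(x, x_i) over all constraint points.
    x_all = np.concatenate([x_sup, x_reg])
    y_all = np.concatenate([y_sup, y_reg])
    K = gaussian_kernel(x_all, x_all)

    # Penalty formulation of the constraints: regularized least squares.
    lam = 1e-2
    alpha = np.linalg.solve(K + lam * np.eye(len(x_all)), y_all)

    f = lambda x: gaussian_kernel(np.atleast_1d(x), x_all) @ alpha
    print(f(1.5), f(-2.0))  # close to +1 inside (0, 3) and to -1 at the examples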
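
For the convex-constraints work-package, a standard building block is the projection of the outputs of T classifiers onto the probability simplex, the polytope defined by sum_j f_j(x) = 1 and f_j(x) >= 0. The sketch below (illustrative Python using the classical Euclidean projection algorithm, not code from the project) shows this form of probabilistic normalization.

    import numpy as np

    def project_onto_simplex(v):
        """Euclidean projection of v onto {p : p >= 0, sum(p) = 1}."""
        u = np.sort(v)[::-1]                       # sort in decreasing order
        css = np.cumsum(u)
        j = np.arange(1, len(v) + 1)
        rho = np.nonzero(u - (css - 1.0) / j > 0)[0][-1]
        theta = (css[rho] - 1.0) / (rho + 1.0)
        return np.maximum(v - theta, 0.0)

    raw = np.array([0.9, -0.2, 0.5])   # unnormalized outputs of three tasks
    print(project_onto_simplex(raw))   # non-negative and sums to one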
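
For the quasi-local kernel work-package, the sketch below (illustrative Python with made-up mixing weights) builds a mixture of Gaussian kernels at widths sigma, 2*sigma, and 4*sigma: the narrow components preserve locality, while the wide components decay much more slowly, so constraint information can propagate far from the data.

    import numpy as np

    def gaussian(a, b, sigma):
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))

    def quasi_local_kernel(a, b, sigma=1.0, weights=(0.6, 0.3, 0.1)):
        # A convex combination of positive-definite kernels is positive definite.
        return sum(w * gaussian(a, b, (2 ** i) * sigma)
                   for i, w in enumerate(weights))

    x = np.array([0.0, 1.0, 10.0])
    print(gaussian(x, x, 1.0))       # the narrow kernel vanishes at distance 10
    print(quasi_local_kernel(x, x))  # the mixture still couples distant points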
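
For the logic-bridging work-package, the sketch below (illustrative Python; the product t-norm is used here, while the project also studies the Łukasiewicz fragment) converts the FOL clause “forall x: A(x) -> B(x)” into a real-valued penalty: the predicates are realized by task functions with outputs in [0, 1], the implication is evaluated by the t-norm residuum on a sample of the domain, and the constraint is enforced by penalizing its average violation.

    import numpy as np

    def implies(a, b):
        # Residuum of the product t-norm: 1 if a <= b, else b / a.
        return np.where(a <= b, 1.0, b / np.maximum(a, 1e-12))

    def clause_penalty(a_vals, b_vals):
        # 1 minus the average truth of A(x) -> B(x) on the sampled points.
        return 1.0 - implies(a_vals, b_vals).mean()

    a_vals = np.array([0.9, 0.2, 0.7])     # A(x) on three sample points
    b_vals = np.array([0.8, 0.9, 0.1])     # B(x) on the same points
    print(clause_penalty(a_vals, b_vals))  # > 0: the clause is partly violated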
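
For the active-teaching work-package, the sketch below (illustrative Python on a toy objective) exemplifies a continuation method: the agent first fits the “easy” supervised term alone, and the weight of the harder constraint penalty (here, w >= 0) is increased gradually, with each stage warm-started from the previous solution.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))
    y = X @ np.array([1.0, -0.5, 2.0])    # note: the true w violates w >= 0

    def gradient(w, lam):
        # Loss: ||Xw - y||^2 / n + lam * ||min(w, 0)||^2 (penalty for w >= 0).
        return 2 * X.T @ (X @ w - y) / len(y) + 2 * lam * np.minimum(w, 0.0)

    w = np.zeros(3)
    for lam in [0.0, 0.1, 1.0, 10.0, 100.0]:  # continuation schedule
        lr = 0.2 / (1.0 + lam)                # smaller steps as the penalty stiffens
        for _ in range(2000):                 # warm-started inner gradient descent
            w -= lr * gradient(w, lam)
        print(f"lam = {lam:6.1f}  w = {np.round(w, 3)}")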
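
For the learning-of-constraints work-package, the sketch below (illustrative Python) treats a constraint itself as a learnable object: given samples of the task outputs f(x) in R^T, an approximately satisfied linear constraint a·f(x) = b is recovered as the direction of least variance of the samples. More general parametric representations are among the topics listed above.

    import numpy as np

    rng = np.random.default_rng(1)
    F = rng.normal(size=(200, 3))   # samples of three task outputs
    F[:, 2] = 1.0 - F[:, 0] - F[:, 1] + 0.01 * rng.normal(size=200)  # hidden constraint

    mu = F.mean(axis=0)
    _, _, Vt = np.linalg.svd(F - mu)
    a = Vt[-1]            # direction of least variance
    b = a @ mu            # offset so that a . f(x) ~ b on the samples
    scale = a[np.argmax(np.abs(a))]
    print(np.round(a / scale, 2), round(float(b / scale), 2))
    # recovers f1 + f2 + f3 = 1 up to scale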
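
For the multilayer-network work-package, the sketch below (illustrative PyTorch with random stand-in data) forces a coherence constraint during backprop-based training: two view-specific networks are fitted to the supervised examples of the first view, while a penalty term pushes their outputs to agree on unlabeled multi-view pairs.

    import torch
    import torch.nn as nn

    view1 = nn.Sequential(nn.Linear(10, 16), nn.Tanh(), nn.Linear(16, 1))
    view2 = nn.Sequential(nn.Linear(20, 16), nn.Tanh(), nn.Linear(16, 1))

    x1_sup, y_sup = torch.randn(32, 10), torch.randn(32, 1).sign()
    x1_uns, x2_uns = torch.randn(64, 10), torch.randn(64, 20)  # paired views

    opt = torch.optim.Adam(list(view1.parameters()) + list(view2.parameters()),
                           lr=1e-2)
    lam = 0.5  # weight of the coherence constraint

    for step in range(200):
        opt.zero_grad()
        sup = ((view1(x1_sup) - y_sup) ** 2).mean()          # supervised fit
        coh = ((view1(x1_uns) - view2(x2_uns)) ** 2).mean()  # constraint penalty
        (sup + lam * coh).backward()                         # plain backprop
        opt.step()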

Integrating logic and learning

The integration of deep learning and logic reasoning is still an open research problem, and it is considered to be key to the development of truly intelligent agents. On one side, deep learning has obtained remarkable results in many applications, such as computer vision and natural language processing. On the other hand, a truly intelligent agent acting in a complex environment is likely to require some kind of higher-level symbolic inference.


Seminars at SAILab, Siena
  • Gabriele Ciravegna, 03 July 2019, A Constraint-based Approach to Learning and Explanation
  • Giuseppe Marra, 12 June 2019, Neural Markov logic networks
  • Francesco Giannini, 13 March 2019, Integrating Learning and Reasoning with Deep Logic Models
  • Alessandro Betti, 05 February 2019, From architectural constraints to neural subsidiary conditions
  • Marco Gori, 09 January 2019, Developmental Learning with Constraints
  • Francesco Giannini, 24 October 2018, Some Approaches to Learning of Logical Constraints
  • Alessandro Betti, 11 October 2018, Perfect Neuron Building
  • Alessandro Betti, 28 June 2018, Learning with Architectural Constraints
  • Francesco Giannini, 21 June 2018, Loss functions generation by means of fuzzy aggregators
  • Giuseppe Marra, 07 June 2018, Probabilistic Soft Logic
  • Fabrizio Riguzzi, 18 April 2018, Deep Probabilistic Logic Programming
  • Francesco Giannini, 29 March 2018, LogSCM – Logical Support Constraint Machines
  • Giuseppe Marra, 15 March 2018, CLARE: a Constrained Learning And Reasoning Environment
  • Francesco Giannini, 23 June 2017, Learning and reasoning under Łukasiewicz logic
  • Francesco Giannini, 25 May 2017, Support constraints and logical deduction
  • Francesco Giannini, 18 May 2017, The convex Łukasiewicz fragment
  • Marco Gori, 18 April 2011, Constraint verification — In this talk, I discuss how to use the theory for constraint verification. Some links are established with manifold regularization, and more general insights are given on how to handle dynamic structures. An example of model checking in logic is also shown.
  • Marco Gori, 11 April 2011, Semantic-based regularization — In this talk, I show that a kernel-based solution can also be given in the case of quadratic isoperimetric constraints and of holonomic constraints, whenever an appropriate approximation based on the knowledge of the unsupervised examples is adopted.
  • Marco Gori, 4 April 2011, Constraint quantization — In this talk, I give guidelines for approaching any problem of learning from constraints thanks to the quantization of the constraints on the set of unsupervised/supervised data. It is shown that a kernel-based solution exists and that the classic kernel machine mathematical apparatus can be fully re-used.
  • Marco Gori, 28 March 2011, Exact penalties and support domains — In this talk, I discuss the cases in which exact penalties can replace the Lagrangian approach and present the emergence of constraint domains (support vectors as a special case) and show the consequent reduction of the representer theorems.
  • Marco Gori, 21 March 2011, Representer theorems in learning from constraints — In this talk, I present the general representer theorems in both cases of (universal quantifier)-based and (existential quantifier)-based constraints using the Lagrangian approach. I show some properties of the Lagrange multipliers and discuss primal/dual solutions, including links with path-following primal-dual methods.
  • Marco Gori, 14 March 2011, An introduction to learning from constraints — In this talk, I give an introduction to learning from constraints by presenting examples in different contexts. Universal and existential constraints are introduced, and a general formulation of the learning problem is given within the framework of variational calculus. Links with learning from examples and classic constraint satisfaction – including logic – are given.
  • Marco Gori, 7 March 2011, Pseudo-differential operators, kernels, boundary conditions, and well-posedness — In this talk, I focus attention on pseudo-differential operators and on their connection with kernels, including their spectral interpretation. I discuss the role of boundary conditions for the existence and uniqueness of the solution and, finally, make some heuristic comments on the choice of the kernel. In particular, the discussion focuses on local vs. global kernels and the related literature, by showing the connection with classic results on self-adjoint operators and boundedness. Finally, I give some insights on the relaxation of the regularity assumptions on the solution by showing some interesting examples of optimality.
  • Marco Gori, 28 February 2011, Where do kernel machines come from? Not yet another model! — In this talk, I discuss the simplest problem the agent is expected to face: learning from a collection of supervised pairs. I prove that the results are strictly related to kernel machines and that, in particular, many significant links can be established with the theory of RKHS. A Bayesian interpretation of learning is also given.
  • Marco Gori, 21 February 2011, An introduction to constrained variational calculus — In this talk, I introduce the solution to classic variational problems with subsidiary conditions by using the Lagrangian approach. The subject is presented with applicative perspectives to machine learning.

Talks
  • LYRICS: a unified framework for learning and inference with constraints – IDA – Czech Technical University – Prague – January 2019
  • Integrating deep learning and reasoning with First Order Fuzzy Logic – DTAI – KU Leuven – Leuven – September 2018
  • Characterization of the Convex Łukasiewicz Fragment for Learning from Constraints, AAAI2018, New Orleans, USA, January 2018
  • Learning Łukasiewicz Logic Fragments by Quadratic Programming, ECML-PKDD2017, Skopje, Macedonia, September 2017
  • Learning from Logical Constraints by Quadratic Optimization, Fondazione Bruno Kessler FBK, Trento, June 2017
  • Learning from constraints, Academy of Sciences of the Czech Republic, Prague, June 2012
  • Learning from constraints, WU, Wien, June 2012
  • Learning from constraints, KAIST, Seoul, February 2012
  • Support constraint machines, SIMBAD2011, September 2011
  • Support constraint machines, Boston Neuro Talks MIT, September 2011
  • Support constraint machines, in “Collective Learning and Inference in Structured Data”, invited talk, ECML2011, Athens, September 2011
  • Learning from constraints (keynote speech) ECML2011, Athens, September 2011
  • Learning from constraints, Naples, June 2011
  • Natural laws of stage-based learning in humans and machines, Naturalization of mind, Siena, May 2011
  • Knowledge-based parsimonious agents, Mind Force, Siena, October 2010
  • On the puzzle of induction-deduction: Bridging perception and symbolic reasoning, Katholieke Universiteit Leuven, May 2010
  • On the puzzle of induction-deduction: Bridging perception and symbolic reasoning, Technical Univ. of Auckland, New Zealand, February 2010
  • On the puzzle of induction-deduction: Bridging perception and symbolic reasoning, Univ. of Waikato, New Zealand, February 2010
  • On the puzzle of induction-deduction: Bridging perception and symbolic reasoning, Monash Univ., Melbourne, February 2010
  • On the puzzle of induction-deduction: Bridging perception and symbolic reasoning, Trento, December 2009
  • Semantic-based regularization: insights into deep learning (b), Erice, December 2009
  • Semantic-based regularization: insights into deep learning (a), Erice, December 2009
  • On the puzzle of induction-deduction, Erice, December 2009
  • Semantic-based regularization and Piaget’s cognitive stages, FIRB Intelligent Technologies for Cultural Visits and Mobile Education, Trento, September 2009
  • Semantic-based regularization: insights into deep learning, Ulm, 13 July 2009
  • On the birth of cognitive stages: Beyond sensorimotor-based agents in machine learning, Rutgers Joint Workshop on Mind and Culture, Certosa di Pontignano, Siena, June 2009
  • Semantic-based regularization: learning from rules and examples, Modena, April 2009
  • Semantic-based regularization, Genova, February 2009
  • University of Florence, January 2009
  • Semantic-based regularization, UPMC, Paris, December 2008
  • Diffusion learning by prior and acquired links, keynote speech, Varna, September 2008

Publications

Software