ACDL Satellite Workshop on Graph Neural Networks

The satellite ACDL workshop on Graph Neural Networks (GNNs) was held at SAILab on the 22nd of July.

Program

Morning Session
09:00: Introduction – Marco Gori
09:15: GNNs for heterogeneous information – Franco Scarselli
09:45: Graph networks for learning about complex systems – Peter Battaglia
10:15: A Deep Learning based Community Detection approach – Giancarlo Sperlì
10:45: Coffee Break
11:00: Unsupervised Learning with Graph Neural Networks – Petar Veličković
11:30: Large Graphs and their applications – Pasquale Foggia
12:00: Graph Neural Networks – A constraint-based formulation – Matteo Tiezzi
12:30: Lunch Break

Afternoon Session
14:30: Logic Tensor Networks – Artur d’Avila Garcez
14:30 – 17:00: Discussions

Speakers, Abstracts and Slides

Franco Scarselli – University of Siena

Title:
GNNs for heterogeneous information
Abstract:
The talk will present some activities on Graph Neural Networks (GNNs) currently carried out at SAILab. In modern applications the information is often complex, collected from different sources, and several problems have to be solved at the same time. In this context, it may be useful to mix GNNs with different characteristics. The talk will discuss some studies whose aim is to understand the properties of two possible combinations: inductive–transductive GNNs, namely GNNs using both transductive and inductive learning, and typed GNNs, namely GNNs using a different set of parameters for each type of edge.
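
To make the typed-GNN idea concrete, the sketch below shows one state-diffusion step in which every edge type has its own parameter set. It is a minimal PyTorch illustration under assumed names and shapes, not SAILab’s implementation.

```python
import torch
import torch.nn as nn

class TypedGNNLayer(nn.Module):
    """One state-diffusion step with a separate parameter set per edge type."""

    def __init__(self, state_dim, num_edge_types):
        super().__init__()
        # One learnable state transition per edge type ("typed" GNN).
        self.transitions = nn.ModuleList(
            [nn.Linear(state_dim, state_dim) for _ in range(num_edge_types)]
        )

    def forward(self, states, adj_by_type):
        # states: (num_nodes, state_dim)
        # adj_by_type: one (num_nodes, num_nodes) adjacency matrix per edge type
        new_states = torch.zeros_like(states)
        for adj, transition in zip(adj_by_type, self.transitions):
            # Transform neighbour states with the type-specific parameters,
            # then aggregate them over the edges of this type.
            new_states = new_states + adj @ transition(states)
        return torch.tanh(new_states)
```

Iterating such a layer until the states stop changing recovers the fixed-point state computation of the classic GNN model, here with type-dependent parameters.
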
Slides

Peter Battaglia – Google DeepMind

Title:
Graph networks for learning about complex systems

Abstract:
This talk is about a general formulation of graph neural networks we’ve developed called “graph networks”. I’ll show how we use this formulation for predicting, inferring, and controlling a variety of complex systems. The specific results will cover learning physical reasoning, visual scene understanding, robotic control, multi-agent behavior, and physical construction.
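
Since the talk builds on the “graph networks” formulation, a minimal sketch of a single GN block may help fix ideas: an edge update conditioned on sender, receiver, edge and global features; a node update over the summed incident edges; and a global update over pooled edges and nodes. The PyTorch code below is a simplified rendering under assumed shapes and aggregations, not DeepMind’s implementation.

```python
import torch
import torch.nn as nn

class GNBlock(nn.Module):
    """A single graph-network block: edge update, node update, global update."""

    def __init__(self, dim):
        super().__init__()
        self.phi_e = nn.Linear(4 * dim, dim)  # sender, receiver, edge, global
        self.phi_v = nn.Linear(3 * dim, dim)  # node, aggregated edges, global
        self.phi_u = nn.Linear(3 * dim, dim)  # global, pooled edges, pooled nodes

    def forward(self, v, e, u, senders, receivers):
        # v: (N, d) node features; e: (E, d) edge features; u: (d,) global
        # feature; senders/receivers: (E,) long tensors of node indices.
        u_e = u.expand(e.size(0), -1)
        e = torch.relu(self.phi_e(torch.cat([v[senders], v[receivers], e, u_e], dim=-1)))
        # Sum each updated edge into its receiver node.
        agg = torch.zeros_like(v).index_add_(0, receivers, e)
        u_v = u.expand(v.size(0), -1)
        v = torch.relu(self.phi_v(torch.cat([v, agg, u_v], dim=-1)))
        u = torch.relu(self.phi_u(torch.cat([u, e.mean(0), v.mean(0)], dim=-1)))
        return v, e, u
```

Stacking several such blocks lets information propagate across multiple hops while edge, node and global representations are updated jointly.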

Slides

Giancarlo Sperlì – CINI – ITEM National Lab

Title:
A Deep Learning based Community Detection approach

Abstract:
Community Detection in Online Social Networks is a classic problem in networked systems, arising in fields as diverse as biology, economics, politics, and computer science. I will introduce a novel Community Detection method based on a deep learning approach, which faces the challenging problems related to the dimensions of the involved data structures and proposes a novel convolutional technique particularly suited to sparse matrices.

Slides

Petar Veličković – Google DeepMind

Title:
Unsupervised Learning with Graph Neural Networks

Abstract:
In the most general (and common) of cases, a graph encountered in the wild will not be expected to have ground-truth labels, and the task to perform on it may not even be known at “training” time. Furthermore, very often an underlying graph of relations or interactions will exist between the components of an input, but explicitly specifying this graph may be a highly challenging endeavour, sometimes without clear ground-truth cues. However, recent successes in generalising neural networks to operate on graph-structured inputs almost exclusively rely on a (semi-)supervised learning setup, with an explicitly provided graph structure. As such, the majority of existing graph neural network methodologies are not directly applicable to many real-world use cases of graph data analysis. In this talk, I will provide an overview of three significant methodologies that fall outside of this setup: VGAE [1], DGI [2] and NRI [3], and discuss their merits and shortcomings. Being representative of unsupervised learning with graph neural networks, they directly tackle the issues outlined above. I will also provide a personal assessment of where to expect further significant developments in this very exciting area.
[1] Kipf and Welling, “Variational Graph Auto-Encoders”, NIPS Bayesian Deep Learning Workshop, 2016.
[2] Veličković et al., “Deep Graph Infomax”, ICLR 2019.
[3] Kipf, Fetaya, et al., “Neural Relational Inference for Interacting Systems”, ICML 2018.
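
To give a flavour of the first of these methods, here is a minimal VGAE-style sketch [1]: a GCN-like encoder maps each node to a Gaussian latent, and an inner-product decoder scores candidate edges. The layer choices and names below are simplifying assumptions, not the authors’ code.

```python
import torch
import torch.nn as nn

class VGAESketch(nn.Module):
    """Toy VGAE: a GCN-style encoder yields a Gaussian latent per node;
    an inner-product decoder scores candidate edges."""

    def __init__(self, in_dim, hidden_dim, latent_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, x, a_norm):
        # x: (N, in_dim) node features; a_norm: (N, N) normalised adjacency
        # (with self-loops), acting as the GCN propagation matrix.
        h = a_norm @ torch.relu(a_norm @ self.lin(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterise
        # Decode: probability of edge (i, j) from the similarity of z_i and z_j.
        return torch.sigmoid(z @ z.t()), mu, logvar
```

Training maximises the likelihood of the observed edges under the decoder while a KL term pulls each latent toward a unit Gaussian prior; no node labels are required.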

Slides

Pasquale Foggia – University of Salerno (UNISA)

Title:
Large Graphs and their applications

Abstract:

Slides

Artur d’Avila Garcez – City, University of London

Title:
Logic Tensor Networks

Abstract:
Neural-symbolic computing has sought for many years to benefit from the integration of symbolic AI and neural computation. In a neural-symbolic system, neural networks offer the machinery for efficient learning and computation, while symbolic knowledge representation and reasoning offer the ability to benefit from prior knowledge, transfer learning and extrapolation, and to produce explainable neural models. Neural-symbolic computing has found application in many areas, including software specification evolution, training and assessment in simulators, and the prediction and explanation of harm in gambling. I will introduce the principles of neural-symbolic computing and exemplify its use with logic programming and with defeasible and nonmonotonic knowledge, with a specific emphasis on first-order languages and relational learning, including connectionist inductive logic programming and the combination of deep networks and full first-order logic using Logic Tensor Networks.

Slides

Matteo Tiezzi – Ph.D. Student, SAILab, University of Siena

Title:
Graph Neural Networks – A constraint-based formulation

Abstract:
GNNs exploit a set of state variables, one assigned to each graph node, and a mechanism that diffuses the states among neighbor nodes, implementing an iterative procedure that computes the fixed point of the (learnable) state transition function. We propose a novel approach to the state computation and the learning algorithm for GNNs, based on a constrained optimization task solved in the Lagrangian framework. The state convergence procedure is implicitly expressed by the constraint satisfaction mechanism and does not require a separate iterative phase for each epoch of the learning procedure. Instead, the computational structure is based on the search for saddle points of the Lagrangian in the adjoint space composed of weights, neural outputs (node states), and Lagrange multipliers.
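
The saddle-point search can be sketched in a few lines of PyTorch. Everything below (toy graph, sizes, the placeholder loss) is an illustrative assumption, not the authors’ implementation: node states and Lagrange multipliers are free variables, with gradient descent applied to weights and states and gradient ascent to the multipliers.

```python
import torch

# Toy sizes and learning rate; all values here are illustrative assumptions.
torch.manual_seed(0)
num_nodes, state_dim, lr = 5, 8, 1e-2
adj = (torch.rand(num_nodes, num_nodes) < 0.3).float()  # random toy graph

f = torch.nn.Linear(state_dim, state_dim)                    # transition f_w
x = torch.zeros(num_nodes, state_dim, requires_grad=True)    # free node states
lam = torch.zeros(num_nodes, state_dim, requires_grad=True)  # multipliers

for _ in range(200):
    # Constraint G(x) = 0: each state must equal the transition applied to
    # the aggregated states of its neighbours.
    residual = x - torch.tanh(f(adj @ x))
    task_loss = x.pow(2).mean()  # placeholder for a real supervised loss
    lagrangian = task_loss + (lam * residual).sum()
    lagrangian.backward()
    with torch.no_grad():
        for p in [*f.parameters(), x]:
            p -= lr * p.grad   # gradient descent on weights and states
            p.grad = None
        lam += lr * lam.grad   # gradient ascent on the Lagrange multipliers
        lam.grad = None
```

At a saddle point the residual vanishes, so the states satisfy the fixed-point equation without a separate inner relaxation loop at every epoch.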

Slides
