The Graph Neural Network (GNN) [SGT+09b] is a connectionist model particularly suited for problems whose domain can be represented by a set of patterns and the relationships between them.
In such problems, the prediction for a given pattern can exploit all the related information: the pattern's features, its relationships, and, in general, the whole graph that represents the domain. The distinguishing feature of the GNN is its ability to compute the output prediction by processing the input graph directly, without any preprocessing into a vectorial representation.
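As a rough illustration of this computation scheme, the sketch below (a simplification, not the actual implementation; all names, the tiny graph, and the linear-plus-tanh transition are assumptions for the example) iterates node states from neighbor states and node features until a fixed point is reached, then maps each state to a per-node prediction:

```python
import numpy as np

# Hypothetical sketch of the GNN computation on a tiny graph.
# Node states are updated from neighbor states and node features until
# they (approximately) converge; an output function then maps each
# final state to a prediction. Names are illustrative only.

rng = np.random.default_rng(0)

# Tiny directed graph: A[i, j] = 1 if there is an edge j -> i.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
labels = rng.standard_normal((4, 3))        # node features l_v
W = 0.1 * rng.standard_normal((3 + 2, 2))   # small weights keep the map contractive
V = rng.standard_normal((2, 1))             # output-function weights

def transition(states):
    # x_v <- f(l_v, sum of neighbor states); here f is linear + tanh
    agg = A @ states
    return np.tanh(np.hstack([labels, agg]) @ W)

states = np.zeros((4, 2))
for _ in range(50):                         # fixed-point iteration
    new_states = transition(states)
    if np.max(np.abs(new_states - states)) < 1e-6:
        states = new_states
        break
    states = new_states

outputs = states @ V                        # per-node predictions o_v = g(x_v)
print(outputs.shape)                        # one prediction per node
```

Because the transition is contractive (small weights, bounded tanh), the iteration settles quickly; the graph itself, not a flattened vector, drives the computation.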
GNNs have been proved to be universal approximators for a class of functions on graphs and have been applied to several tasks, including spam detection, object localization in images, and molecule classification.
The GNN was originally implemented in MATLAB, but nowadays frameworks such as TensorFlow are more popular in the machine learning community.
The proposed implementation is modular and consists of two components: a core module implementing the GNN model, and a module for the definition of the loss function, the metric function, and the sub-networks that are applied at the node level in the GNN computation.
This latter module can be customized by the user, in order to address specific tasks and to implement extensions to the basic model.
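A minimal sketch of what such a user-supplied module might look like, assuming the core simply calls plain functions for the loss and the metric (the function names and signatures here are hypothetical, not the package's actual API):

```python
import numpy as np

# Hypothetical user-customizable module: the core GNN receives plain
# callables for the loss and the evaluation metric, so adapting to a
# new task only requires swapping these definitions.

def loss(output, target):
    # mean squared error over all node outputs
    return float(np.mean((output - target) ** 2))

def metric(output, target):
    # classification accuracy against one-hot targets
    return float(np.mean(np.argmax(output, axis=1) ==
                         np.argmax(target, axis=1)))

# Example usage on two nodes with two classes:
out = np.array([[0.9, 0.1], [0.2, 0.8]])
tgt = np.array([[1.0, 0.0], [0.0, 1.0]])
print(loss(out, tgt), metric(out, tgt))   # 0.025 1.0
```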
The proposed implementation of GNNs, based on tensor algebra, is particularly efficient and easily parallelizable on modern multi-CPU and multi-GPU hardware architectures.
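To see why a tensor formulation parallelizes well, consider neighbor aggregation: instead of a Python loop over nodes, a single gather-and-scatter over the edge list updates every neighborhood at once, which is exactly the kind of bulk operation GPUs execute efficiently. The sketch below (illustrative only, using NumPy's `np.add.at` in place of a GPU scatter kernel) shows the idea:

```python
import numpy as np

# Illustrative edge-list aggregation, assumed to mirror the tensor
# formulation: gather all source states, scatter-add them into their
# target nodes in one vectorized operation, with no per-node loop.

states = np.arange(8.0).reshape(4, 2)               # 4 node states of size 2
edges = np.array([[0, 1], [1, 0], [2, 1], [3, 0]])  # (source, target) pairs

agg = np.zeros_like(states)
np.add.at(agg, edges[:, 1], states[edges[:, 0]])    # scatter-add over edges
print(agg)
```

In a framework such as TensorFlow the same pattern maps onto built-in segment/scatter operations, so the whole aggregation runs as one kernel rather than many small ones.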
The original MATLAB version was designed and written by Franco Scarselli and Gabriele Monfardini in 2011.
The Tensorflow porting was written by Alberto Rossi and Matteo Tiezzi in January 2018.
Date: Jun 11, 2019