Friendly Training

**We are excited to announce that our paper has been accepted at AAAI 2022!**

Friendly Training is a new approach for improving the performance of neural classifiers, offering an interesting perspective on learning from a cognitive point of view. It consists of altering the input data by adding an automatically estimated perturbation, with the goal of facilitating the learning process of a neural classifier.

This strategy echoes the pedagogical approach formulated by the psychologist Lev Vygotsky, according to which children learn through a progressive reduction of the so-called Zone of Proximal Development, i.e., the space between what the learner can do autonomously and what they cannot yet do, which contains tasks that the learner can accomplish if appropriately guided.

The key intuition behind this training strategy, which we named Friendly Training (FT), is that instead of exposing the network to an uncontrolled variety of data with heterogeneous properties over the input space, learning can be guided by the information that the network has learnt to process so far. The aim of this technique is to operate on noisy data, outliers and, more generally, on whatever falls into the areas of the input space that the network finds hard to handle at a certain stage of the learning process. Such data are modified to mitigate the impact of the information that is inconsistent with what has been learnt so far.

Over time, we have devised two instances of this training method for neural networks: the first is based on direct gradient-based optimization of the input data, while the second relies on an auxiliary network.

#2 (new): Being Friends Instead of Adversaries: Deep Networks Learn from Data Simplified by Other Networks

[…] In this work we revisit and extend the idea in #1, introducing a radically different and novel approach inspired by the effectiveness of neural generators in the context of Adversarial Machine Learning. We propose an auxiliary multi-layer network that is responsible for altering the input data so that they are easier for the classifier to handle at the current stage of the training procedure.
The auxiliary network is trained jointly with the neural classifier, thus intrinsically increasing the depth of the classifier, and it is expected to spot general regularities in the data alteration process.

The effect of the auxiliary network is progressively reduced as training proceeds, until it is fully dropped and the classifier is deployed for applications. We refer to this approach as Neural Friendly Training.
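
To make the idea concrete, below is a minimal PyTorch sketch of this scheme (not the implementation from the paper: the toy network sizes, the `aux` architecture and the linear decay schedule are illustrative assumptions). The auxiliary network proposes a perturbation of each example, the perturbation is scaled by a factor `alpha` that decays to zero during training, and both networks are updated with the same classification loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy networks (MNIST-sized inputs): a classifier and an auxiliary "simplifier".
classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
aux = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(),
                    nn.Linear(256, 28 * 28), nn.Tanh())

optimizer = torch.optim.Adam(
    list(classifier.parameters()) + list(aux.parameters()), lr=1e-3)

def neural_friendly_step(x, y, step, total_steps):
    """One joint training step: the auxiliary perturbation is scaled by alpha,
    which decays to zero so that the classifier eventually sees the raw data."""
    alpha = max(0.0, 1.0 - step / (0.8 * total_steps))  # assumed linear decay schedule
    delta = aux(x).view_as(x)               # perturbation proposed by the auxiliary network
    x_friendly = x + alpha * delta          # simplified input; equals x once alpha reaches 0
    loss = F.cross_entropy(classifier(x_friendly), y)
    optimizer.zero_grad()
    loss.backward()                         # gradients update both the classifier and aux
    optimizer.step()
    return loss.item()
```

Once the decay factor reaches zero the auxiliary branch has no effect on the input, so it can be discarded and only the classifier is kept at deployment time.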

An extensive experimental evaluation involving several datasets and different neural architectures shows that Neural Friendly Training outperforms the originally proposed Friendly Training technique, improving the generalization of the classifier, especially in the case of noisy data.

Sketch of the Neural Friendly Training architecture: the impact of the auxiliary model is progressively reduced, so that it improves the early stages of training and gradually turns into an autoencoder, which can be dropped at the final stage.

@misc{marullo2022friendly,
      title={Being Friends Instead of Adversaries: Deep Networks Learn from Data Simplified by Other Networks},
      author={Simone Marullo and Matteo Tiezzi and Marco Gori and Stefano Melacci},
      year={2021},
      eprint={2112.09968},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      booktitle={Thirty-Sixth AAAI Conference on Artificial Intelligence}
}

#1: Friendly Training: Neural Networks Can Adapt Data To Make Learning Easier

When focussing on the way in which the training data are provided to the learning machine, we can distinguish between the classic random selection of stochastic gradient-based optimization and more involved techniques that devise curricula to organize the data and progressively increase the complexity of the training set. In this paper, we propose a novel training procedure, named Friendly Training, that alters the training examples in order to help the model better fulfil its learning criterion. The model is allowed to simplify those examples that are too hard to classify at a certain stage of the training procedure. The data transformation is controlled by a developmental plan that progressively reduces its impact during training, until it completely vanishes. In a sense, this is the opposite of what is commonly done in Adversarial Training. Experiments on multiple datasets show that Friendly Training yields improvements, especially in deep convolutional architectures. Adapting the input data is a feasible way to stabilize learning and to improve the generalization skills of the network.
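
As a rough illustration of this first, gradient-based variant, the sketch below (a minimal reading of the procedure, assuming a PyTorch classifier; the inner-loop hyper-parameters are placeholders, not the values used in the paper) simplifies a batch by taking a few gradient-descent steps on the input itself, moving each example in the direction that decreases the classifier's loss, and then scales the resulting perturbation by a factor `eta` provided by the developmental plan.

```python
import torch
import torch.nn.functional as F

def simplify_batch(model, x, y, eta, steps=3, step_size=0.1):
    """Sketch of gradient-based Friendly Training: descend the loss w.r.t. the input.

    eta in [0, 1] comes from the developmental plan and scales the final
    perturbation; eta = 0 leaves the data untouched (end of training)."""
    x_adapted = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x_adapted), y)
        grad, = torch.autograd.grad(loss, x_adapted)
        # Opposite of adversarial training: step *down* the loss surface.
        x_adapted = (x_adapted - step_size * grad).detach().requires_grad_(True)
    delta = x_adapted.detach() - x
    return (x + eta * delta).detach()
```

In a training loop one would call something like `x_friendly = simplify_batch(classifier, x, y, eta=max(0.0, 1.0 - epoch / num_epochs))` and feed `x_friendly` to the usual optimization step, with `eta` shrinking to zero so that the final epochs are carried out on unmodified data.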

The original article can be found here.

@misc{marullo2021friendly,
      title={Friendly Training: Neural Networks Can Adapt Data To Make Learning Easier}, 
      author={Simone Marullo and Matteo Tiezzi and Marco Gori and Stefano Melacci},
      year={2021},
      eprint={2106.10974},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      booktitle={2021 International Joint Conference on Neural Networks (IJCNN)}
}