Vincenzo Laveglia (DIISM, University of Siena)
March 8, 2018 – 9:30 AM
DIISM, Artificial Intelligence laboratory (room 201), Siena SI
Target propagation in deep neural networks aims to improve the learning process by determining target outputs for the hidden layers of the network. To date, this has been accomplished by relying on autoassociative networks, applied top-to-bottom, to synthesize targets at any given layer from the targets available at the adjacent upper layer. We propose a different, error-driven approach, in which a regular feed-forward neural network is trained to estimate the relation between the targets at layer i and those at layer i − 1, given the error observed at layer i. The resulting algorithm is then combined with a pre-training phase based on backpropagation, yielding a fruitful “refinement” strategy. Results on the MNIST database validate the feasibility of the approach.
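The core idea in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the speaker's implementation: all layer sizes, the sigmoid activations, and the small "target network" that maps (hidden activation, error at the upper layer) to a hidden-layer target are assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical two-layer network: input x -> hidden h -> output y.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 3))

# Hypothetical "target network": given the activation at the hidden layer
# and the error observed at the layer above, it proposes a target for the
# hidden layer (the error-driven step described in the abstract).
Wt = rng.normal(scale=0.5, size=(8 + 3, 8))

def forward(x):
    h = sigmoid(x @ W1)
    y = sigmoid(h @ W2)
    return h, y

def propose_hidden_target(h, err):
    # Estimate the target at layer i-1 from (activation, error at layer i).
    return sigmoid(np.concatenate([h, err], axis=1) @ Wt)

x = rng.normal(size=(5, 4))
t_out = rng.uniform(size=(5, 3))           # targets for the output layer

h, y = forward(x)
err = t_out - y                            # error observed at the top layer
t_hidden = propose_hidden_target(h, err)   # synthesized hidden-layer target

# With a target available at the hidden layer, W1 can be trained locally,
# e.g. one delta-rule step toward the proposed target (learning rate 0.1).
W1 += 0.1 * (x.T @ ((t_hidden - h) * h * (1 - h)))
```

In a deeper network this target network would be applied layer by layer, each time consuming the error at the upper layer, while a backpropagation pre-training phase (as mentioned in the abstract) would supply the initial weights to be refined.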