Lapo Faggi (University of Florence)
When: Jul 14th, 2021, 11:00–11:45 AM
Where: Google Meet link
Description
Traditional machine learning techniques usually assume static input data and a neat distinction between a training phase and a test phase. The input data, entirely available at the start of the learning procedure, are processed as a whole, iterating over the training dataset multiple times to optimize performance on a given learning task. The trained models are then frozen and used for inference only, so computationally expensive re-training procedures are needed to incorporate any newly available information. This learning paradigm is clearly incompatible with what humans do in their everyday lives, continuously acquiring knowledge and adapting it to the dynamic environment in which they live. The field of machine learning that aims to reproduce this learning process in an artificial agent is known as continual (or lifelong) learning. In this seminar, some possible continual learning scenarios will be introduced, together with the different families of strategies devised to cope with what is known as catastrophic forgetting. Some representative methods will also be explained in more detail (Elastic Weight Consolidation, Memory Aware Synapses, Learning without Forgetting, Gradient Episodic Memory).
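As a concrete flavor of the regularization-based strategies covered in the talk, here is a minimal sketch (in PyTorch) of the quadratic penalty used by Elastic Weight Consolidation, which discourages parameters deemed important for a previous task from drifting away from their old values. The function name ewc_penalty and the dictionaries fisher and old_params (a diagonal Fisher information estimate and a snapshot of the previous task's parameters, both assumed to be computed after training on that task) are illustrative, not a reference implementation.

import torch

def ewc_penalty(model, fisher, old_params, lam=1.0):
    # EWC penalty: (lam / 2) * sum_i F_i * (theta_i - theta_old_i)^2,
    # where F_i weights how important parameter i was for the old task.
    # fisher and old_params map parameter names to tensors saved after
    # training on the previous task (hypothetical helper structures).
    loss = torch.tensor(0.0)
    for name, param in model.named_parameters():
        loss = loss + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return (lam / 2.0) * loss

During training on a new task, this term is simply added to the ordinary task loss, e.g. loss = task_loss + ewc_penalty(model, fisher, old_params, lam=0.4), so a single hyperparameter trades off plasticity on the new task against retention of the old one.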