When: Jan 19th, 2022 – 11:00 – 11:45 AM
Where: Google meet link
Learning invariant features from video streams: an optimal control approach
Symmetries, invariances, and conservation laws have always been an invaluable guide in science for modeling natural phenomena through simple yet effective relations. In computer vision, for instance, translation equivariance is typically a built-in property of the neural architectures used to solve visual tasks; networks whose computational layers implement this property are known as Convolutional Neural Networks (CNNs).
When dealing with video streams, common built-in equivariances can handle only a small fraction of the broad spectrum of transformations encoded in the visual stimulus; the corresponding neural architectures must therefore rely on large amounts of supervision to achieve good generalization.
During this seminar, I will first introduce the notion of motion-invariant features, which extends the classical brightness-invariance idea exploited in optical flow estimation. Having defined the unsupervised criterion to be fulfilled, I will then analyze in detail a novel approach, based on optimal control theory, to minimize the chosen functional. Unlike classical online machine learning, this method guarantees optimality of the solution over the entire time interval. Finally, I will present a case study, currently under experimental validation, that illustrates these ideas.
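For readers unfamiliar with the starting point of the talk, the classical brightness-invariance assumption of optical flow (a standard relation, added here for context rather than taken from the abstract) states that the brightness $I(x, t)$ of a moving point is conserved along its trajectory:

$$\frac{dI}{dt} = \frac{\partial I}{\partial t} + v \cdot \nabla I = 0,$$

where $v$ is the velocity field. The motion-invariant features discussed in the seminar generalize this idea: instead of the raw brightness $I$, one asks that learned features satisfy an analogous conservation condition along the motion, which yields the unsupervised criterion to be minimized.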