Andrea Panizza (Baker Hughes – General Electric Company)
May 15, 2019 – 11:00 AM
DIISM, Artificial Intelligence laboratory (room 201), Siena SI
The Variational Autoencoder (VAE) is a not-so-new-anymore Latent Variable Model (Kingma & Welling, 2014), which models the probability density function (pdf) of the inputs and can thus generate new samples from the same distribution (i.e., it is a generative model). VAEs have various applications, mostly related to data generation: for example, image generation, sound generation, text generation, and missing data imputation. In this presentation, we will see why the problem of estimating the input pdf is intractable, and how Variational Inference makes it tractable by introducing the concept of the Evidence Lower Bound (ELBO). We will then introduce an algorithm to estimate the ELBO accurately and efficiently on large data sets, the Auto-Encoding Variational Bayes (AEVB) algorithm. Finally, we will describe one possible instantiation of this algorithm, the Variational Autoencoder.
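For reference, the bound discussed in the talk can be sketched as follows (notation as in Kingma & Welling, 2014, with encoder $q_\phi(z|x)$, decoder $p_\theta(x|z)$, and prior $p(z)$):

```latex
\log p_\theta(x)
\;\ge\;
\underbrace{\mathbb{E}_{q_\phi(z|x)}\!\left[\log p_\theta(x|z)\right]}_{\text{reconstruction term}}
\;-\;
\underbrace{D_{\mathrm{KL}}\!\left(q_\phi(z|x)\,\|\,p(z)\right)}_{\text{regularization term}}
\;=\;
\mathcal{L}(\theta, \phi; x)
```

The right-hand side is the ELBO $\mathcal{L}(\theta, \phi; x)$: maximizing it jointly over $\theta$ and $\phi$ is the tractable surrogate for maximizing the intractable log-likelihood $\log p_\theta(x)$.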