Andrea Zugarini (DIISM, University of Siena)
Jun 14, 2018 – 9:30 AM
DIISM, Artificial Intelligence laboratory (room 201), Siena SI
Word and context embeddings have been of significant help in achieving state-of-the-art results in different Natural Language Processing (NLP) tasks. The success of these representations comes from the learning process, typically carried out unsupervised on large corpora, which naturally develops features transferable to different tasks. One of the main limitations of word embeddings is the need for a fixed-size word vocabulary. In this seminar we present a neural model that, by processing words as sequences of characters, jointly learns word and context embeddings. Such a solution overcomes the drawbacks of dictionary-based models; moreover, it allows morphological regularities in words to be spotted. Results on different NLP tasks prove the effectiveness of such embeddings.
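To illustrate why character-level processing removes the fixed-vocabulary constraint, here is a minimal sketch (not the seminar's actual model): word vectors are composed from per-character vectors, so any string, including rare or misspelled words, receives an embedding. The `char_vector` and `word_vector` functions, the dimensionality, and the mean-pooling composition are all illustrative assumptions; the presented model uses a learned neural composition instead.

```python
# Minimal sketch, NOT the seminar's model: compose word vectors from
# character vectors so that no fixed word vocabulary is needed.
import math

CHAR_DIM = 8  # dimensionality of each character vector (illustrative choice)

def char_vector(ch):
    """Deterministic pseudo-embedding for a single character.
    A trained model would learn these parameters; here they are derived
    from the code point just to keep the sketch runnable."""
    cp = ord(ch)
    return [math.sin(cp * (i + 1)) for i in range(CHAR_DIM)]

def word_vector(word):
    """Compose a word embedding from its characters via mean pooling.
    (A neural model would use a learned, order-sensitive composition,
    e.g. a recurrent network over the character sequence.)"""
    vecs = [char_vector(c) for c in word]
    return [sum(col) / len(word) for col in zip(*vecs)]

# No fixed word vocabulary: an unseen word still gets a vector.
v = word_vector("zugarinish")
print(len(v))  # 8
```

Note that mean pooling ignores character order ("ab" and "ba" collapse to the same vector), which is one reason the model discussed in the seminar relies on a sequential neural encoder over characters rather than simple pooling.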