Contextual generation of word embeddings for out of vocabulary words in downstream tasks

Published in the Canadian Conference on Artificial Intelligence, 2019

Recommended citation: Garneau, Nicolas, Jean-Samuel Leboeuf, and Luc Lamontagne. "Contextual generation of word embeddings for out of vocabulary words in downstream tasks." Canadian AI 2019 (2019). https://link.springer.com/chapter/10.1007/978-3-030-18305-9_60

Over the past few years, the use of pre-trained word embeddings to solve natural language processing tasks has considerably improved performance across the board. However, even though these embeddings are trained on very large corpora, their vocabulary is fixed, and numerous out-of-vocabulary (OOV) words therefore appear in specific downstream tasks. Recent studies have proposed models able to generate an embedding for an out-of-vocabulary word given its morphology and its context. These models assume that sufficient textual data is available to train them. In contrast, we specifically tackle the case where such data is no longer available and we rely only on pre-trained embeddings. As a solution, we introduce a model that predicts meaningful embeddings from the spelling of a word as well as from the context in which it appears in a downstream task, without the need for pre-training on a given corpus. We thoroughly test our model on a joint tagging task in three different languages. Results show that our model consistently helps in all three languages, outperforms other ways of handling out-of-vocabulary words, and can be integrated into any neural model to predict embeddings for out-of-vocabulary words.
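
To make the general idea concrete, the sketch below shows a hypothetical PyTorch module that predicts an embedding for an OOV word by combining a character-level encoding of its spelling with an encoding of the pre-trained embeddings of its surrounding context words. This is a minimal illustration under assumed names and dimensions (`OOVEmbeddingGenerator`, the hidden sizes, the context window), not the exact architecture from the paper.

```python
# Illustrative sketch only: a hypothetical OOV embedding generator in the
# spirit of the paper (spelling + context), not the authors' exact model.
import torch
import torch.nn as nn


class OOVEmbeddingGenerator(nn.Module):
    """Predicts an embedding for an out-of-vocabulary word from its
    characters and from the pre-trained embeddings of its context words."""

    def __init__(self, n_chars, char_dim=25, hidden_dim=100, emb_dim=300):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        # Character-level encoder reads the spelling of the OOV word.
        self.char_rnn = nn.LSTM(char_dim, hidden_dim, batch_first=True,
                                bidirectional=True)
        # Context encoder reads the pre-trained embeddings of the
        # surrounding (in-vocabulary) words.
        self.ctx_rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Projection back into the pre-trained embedding space.
        self.proj = nn.Linear(4 * hidden_dim, emb_dim)

    def forward(self, char_ids, context_embs):
        # char_ids: (batch, n_chars) character indices of the OOV word
        # context_embs: (batch, window, emb_dim) pre-trained embeddings
        _, (h_char, _) = self.char_rnn(self.char_emb(char_ids))
        _, (h_ctx, _) = self.ctx_rnn(context_embs)
        # Concatenate the final forward/backward states of both encoders.
        spelling = torch.cat([h_char[0], h_char[1]], dim=-1)
        context = torch.cat([h_ctx[0], h_ctx[1]], dim=-1)
        return self.proj(torch.cat([spelling, context], dim=-1))


# Usage: generate an embedding for one OOV word in a 5-word context window.
model = OOVEmbeddingGenerator(n_chars=128)
chars = torch.randint(1, 128, (1, 9))    # e.g. a 9-character word
context = torch.randn(1, 5, 300)         # pre-trained neighbour embeddings
oov_vector = model(chars, context)       # shape: (1, 300)
```

In the setting described above, such a module would be trained jointly with the downstream model, so the predicted vectors are shaped by the task loss itself rather than by pre-training on an external corpus.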