Embeddings in Natural Language Processing

Eva Maria Vecchi
IMS, University of Stuttgart
DGfS Computational Linguistics Fall School 2019
September 9-13, 2019

Course Abstract

In this course, we will discuss the design and construction of embeddings for Natural Language Processing (NLP) tasks. In particular, we will explore the trend in NLP of capturing and using semantic embeddings to approximate a human's ability to understand words and phrases. The course will emphasize both theory and practice, providing a strong background that motivates the use of various techniques, as well as an apt environment in which to get your hands dirty with real-world implementations.

Theory The course will motivate the use of embeddings. We will provide a background on how meaning can be defined in language use, then explore the mathematical structures needed to model this information within a computationally practical framework. The course will focus on the use, flexibility, and limitations of these structures in NLP tasks, and on how best to ground and assess these models.

Practice The course aims to provide an opportunity to study the state of the art in the field of Computational Semantics, specifically Distributional Semantics. We will discuss and experiment with various types of embeddings, from simple count-based representations to neural network implementations, explore a range of NLP tasks tackled using these methods, and discuss directions that remain unexplored.
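
As a taste of the practical side, below is a minimal sketch of the count-based approach mentioned above: it builds a word-by-context co-occurrence matrix from a toy corpus, reweights it with positive PMI, and compares words by cosine similarity. The corpus, window size, and weighting scheme are illustrative assumptions, not prescriptions from the course materials.

    # Sketch: count-based distributional vectors from a toy corpus.
    # All data and parameter choices below are illustrative assumptions.
    import numpy as np

    corpus = [
        "the cat sat on the mat".split(),
        "the dog sat on the rug".split(),
        "the cat chased the dog".split(),
    ]

    vocab = sorted({w for sent in corpus for w in sent})
    idx = {w: i for i, w in enumerate(vocab)}

    # Word-by-context co-occurrence counts within a symmetric window.
    window = 2  # arbitrary window size for this toy example
    counts = np.zeros((len(vocab), len(vocab)))
    for sent in corpus:
        for i, w in enumerate(sent):
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[idx[w], idx[sent[j]]] += 1

    # Reweight raw counts with positive PMI, a standard refinement
    # of count-based models.
    total = counts.sum()
    rows = counts.sum(axis=1, keepdims=True)
    cols = counts.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(counts * total / (rows * cols))
    ppmi = np.nan_to_num(np.maximum(pmi, 0.0))

    def cosine(u, v):
        # Cosine similarity between two row vectors.
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

    # Words sharing contexts ("cat"/"dog") end up closer than unrelated ones.
    print(cosine(ppmi[idx["cat"]], ppmi[idx["dog"]]))
    print(cosine(ppmi[idx["cat"]], ppmi[idx["on"]]))

Neural embedding methods such as word2vec learn comparable vectors implicitly rather than by explicit counting; the contrast between these two families of models is a recurring theme of the course.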


Course Schedule

Theory

Practical

ECTS Credits

Resources/Handouts