Lexicon, Syntax, Semantics II: Modeling Meaning

Eva Maria Vecchi
Center for Information and Language Processing, LMU
Summer semester 2020
Thursdays 10:00-12:00 (Zoom, for now)

Course Abstract

In this course, we will discuss the goal of modeling natural language meaning computationally. In particular, we will explore the trend in Natural Language Processing (NLP) of capturing and using semantic embeddings to approximate a human's ability to understand words and phrases. The course will emphasize both theory and practice, providing a strong background that motivates the use of various techniques, as well as an apt environment to get your hands dirty with real-world implementations.

Theory The course will provide background on how meaning can be defined in language use, touching on linguistic and cognitive principles. We will review lexical and phrasal semantics and discuss prominent theories of human meaning acquisition. We will then explore the mathematical structures needed to model this information within a computationally practical framework. The course will focus on the use, flexibility, and limitations of these structures in NLP tasks, and on how best to ground and assess these models.

Practice This course also aims to provide an opportunity to study the state of the art in the field of Computational Semantics, specifically Distributional Semantics. We will discuss and experiment with various types of embeddings, from simple count-based representations to neural network implementations, explore a range of NLP tasks tackled with these methods, and discuss directions that remain unexplored.
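To give a flavor of the practical side, here is a minimal sketch of a count-based distributional model, the simplest kind of embedding covered in the course. The toy corpus, the window size of two, and the use of raw co-occurrence counts are illustrative assumptions, not course materials: the sketch builds a co-occurrence vector for each word and compares words by cosine similarity.

```python
# Minimal count-based distributional model (illustrative sketch only).
# The toy corpus and the window size are assumptions for demonstration.
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]
WINDOW = 2  # symmetric context window around each target word

# Count how often each target word co-occurs with each context word.
cooc = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for i, target in enumerate(tokens):
        start, stop = max(0, i - WINDOW), min(len(tokens), i + WINDOW + 1)
        for j in range(start, stop):
            if j != i:
                cooc[target][tokens[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(count * v[word] for word, count in u.items())
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

print(cosine(cooc["cat"], cooc["dog"]))  # high: similar contexts
print(cosine(cooc["cat"], cooc["on"]))   # lower: different contexts
```

On this toy data, "cat" and "dog" come out far more similar than "cat" and "on", which is the distributional hypothesis in miniature: words that occur in similar contexts tend to have similar meanings.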


Course Schedule

Introduction

Semantics in Linguistics

Distributional Semantics

Machine Learning for Meaning Representation

Final Examinations: Oral Presentations