Sentence Comprehension as a Cognitive Process: A Computational Approach

Instructors: Shravan Vasishth and Felix Engelmann

A one-week course to be taught at ESSLLI 2016 in Bolzano, Italy, 15-19 August 2016, Monday-Friday, 11AM-12:30PM.

Motivation: What this course is about

Sentence comprehension, a field of research within psycholinguistics, is concerned with the study of the cognitive processes that unfold when we hear or read a sentence. The focus is on developing theories and models of how sentence comprehension (parsing) works. The last sixty years have seen significant advances in our understanding of sentence comprehension processes. However, the vast majority of this work is experiment-based, with theory development largely limited to paper-and-pencil models. Although we have learnt a great deal from such informal reasoning about cognitive processes, ultimately the only way to test a theory is to implement it as a computational model. This is a standard approach in research on cognition in artificial intelligence, computer science, and mathematical psychology. Indeed, history has shown that the development of different computational cognitive architectures and mathematical models of cognition has had a huge impact on advancing our understanding of cognitive processes. This is because computational and mathematical models force the scientist to make detailed commitments, which can then be tested empirically.

The present course brings together these two cultures: informally developed theories of sentence comprehension, and computational/mathematical models of cognition. We develop a series of accounts of sentence comprehension within a specific cognitive architecture that has been developed for modeling general cognitive processes, the ACT-R architecture (version 6.0). ACT-R is a good choice because it is a mature architecture that has been widely used in artificial intelligence, human-computer interaction, psychology, and psycholinguistics to study cognitive processes.

Some of the issues that have been addressed empirically in sentence comprehension research are: (a) the influence of individual differences in capacity on parsing, (b) the role of parsing strategy, including task-dependent underspecification, (c) the role of probabilistic expectations, (d) the interaction between grammatical knowledge and parsing, (e) the interaction between the eye-movement control system and sentence comprehension, and (f) how impaired processing might arise (e.g., in aphasia). We address all of these topics by presenting computational models of representative phenomena using the ACT-R framework. The relevant source code is already freely available on github:

  1. The ACT-R Parser extended with eye-movement control
  2. Not needed for the course, but provided for your reference: Version 1 of an ACT-R simulation of the retrieval process in R (Felix Engelmann)
  3. Not needed for the course, but provided for your reference: Version 2 of an ACT-R simulation of the retrieval process in R (Bruno Nicenboim)

Tentative outline

  1. Day 1: Review of the state of the art in theories of sentence comprehension. We review the classical theories of sentence comprehension, and then some of the more recent proposals (e.g., good-enough processing, the unrestricted race model). We discuss the empirical coverage of these alternative theories, and their connection with general theories of cognitive processes.
    Slides: click here.
  2. Day 2: Modeling sentence comprehension as a cognitive process: An introduction to the framework. Here we take up an approach that we have explored in our own research: using the ACT-R architecture to model human sentence comprehension processes. To allow students to use the models presented in the rest of the course, this lecture provides a high-level and largely self-contained introduction to building sentence comprehension models in the cognitive architecture ACT-R. First, we will give a quick introduction to ACT-R; then, the revised implementation of the original Lewis and Vasishth (2005) model will be presented in detail. (A minimal sketch of the model's core retrieval equations is given after this outline.)
    ACT-R intro slides (SV): click here.
    Sentence processing model intro slides (FE): click here.
  3. Day 3: Modeling the eye-parser connection. The ACT-R architecture makes it possible to ask questions that until recently could not be addressed computationally. One important example is the eye-parser connection: does the parsing system interact with the eye-movement control system, and if so, what is the nature of this interaction? In this lecture, we briefly introduce the coupling of the eye-movement control system and the parser within ACT-R, and show how this coupling can be parametrically varied to explore different assumptions.
    Slides: click here (SV), click here (FE).
  4. Day 4: The empirical coverage of the framework. Our main focus will be on applying the architecture described above to model a range of experimental results from the literature relating to classical phenomena such as interference, locality/anti-locality effects, and garden-pathing. We also discuss implementations of good-enough processing and the unrestricted race model, models of individual differences in parsing, and models of impaired sentence comprehension in aphasic patients.
    Slides: click here (SV), click here (FE; these are the same slides as for day 3).
  5. Day 5: Empirical coverage continued, and open research questions. We will present the work described in the paper by Engelmann, Jäger, and Vasishth (2016, in revision following review), and discuss the open questions that this work raises.
    Slides: click here (FE).
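
To give a concrete sense of the models discussed on Days 2-4, here is a minimal R sketch of the core ACT-R retrieval equations that the Lewis and Vasishth (2005) model builds on: base-level activation computed from a chunk's usage history, spreading activation from the retrieval cues, and the mapping from total activation to retrieval latency. The parameter values below (decay, total source activation, maximum associative strength, latency factor) are illustrative defaults rather than the settings used in the course models, and activation noise is omitted.

    ## Minimal sketch of the ACT-R retrieval equations underlying the
    ## Lewis & Vasishth (2005) model. Parameter values are illustrative only;
    ## activation noise is omitted.

    ## Base-level activation: B = log(sum_k t_k^(-d)),
    ## where t_k are the times (in seconds) since past uses of the chunk.
    base_level <- function(times_since_use, d = 0.5) {
      log(sum(times_since_use^(-d)))
    }

    ## Spreading activation from the retrieval cues: sum_j W_j * S_j,
    ## with total source activation split evenly over the cues and
    ## associative strength S_j = S_max - log(fan_j) for matching cues.
    spreading <- function(fan, match, W_total = 1, S_max = 1.5) {
      W <- W_total / length(fan)              # source activation per cue
      S <- ifelse(match, S_max - log(fan), 0)
      sum(W * S)
    }

    ## Retrieval latency: T = F * exp(-A), where A is the total activation.
    retrieval_latency <- function(A, F = 0.14) {
      F * exp(-A)
    }

    ## Example: a chunk last used 2 and 10 seconds ago, matching both of two
    ## retrieval cues, each cue associated with two chunks in memory (fan = 2).
    A <- base_level(c(2, 10)) + spreading(fan = c(2, 2), match = c(TRUE, TRUE))
    retrieval_latency(A)  # predicted retrieval time in seconds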

Expected level and prerequisites

We assume basic knowledge of programming constructs and introductory-level knowledge of linguistic theory (primarily syntax).

What you need to do to prepare for the class

  1. We assume that you will bring your own laptop. We have experience with Linux and OS X environments and can help you set up your computer if you run into problems. We have less experience with Windows, but may be able to get you started nevertheless.
  2. Install Clozure Common Lisp.
  3. Please follow this link and install ACT-R 6.0 and the associated model.
  4. Please also install the programming environment R, as this will be needed for postprocessing model output. Please also install the R packages reshape (or, alternatively, tidyr and dplyr) and ggplot2. You can install a package from the R console; for example, to install ggplot2, type the following (a single call that installs all required packages is sketched after this list):
    install.packages("ggplot2")
    
  5. Install the R package em2. To do this, download the tarred, gzipped archive, change to the directory where you downloaded the archive, and then type the following on the command line (on Mac and Linux systems), possibly as superuser (an R-console alternative is sketched after this list):
    > R CMD INSTALL em2_0.9.tar.gz
    
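If you prefer to work entirely from within R, the following sketch installs the required CRAN packages in one call and then installs em2 directly from the downloaded archive, without using the command line. The file name em2_0.9.tar.gz and the working directory are assumptions based on the instructions above; adjust the path to wherever you saved the archive.

    ## Install the required CRAN packages in one call.
    install.packages(c("reshape", "tidyr", "dplyr", "ggplot2"))

    ## Install em2 from the downloaded source archive (no command line needed).
    ## This assumes the archive is in the current working directory;
    ## otherwise use setwd() or give the full path to the file.
    install.packages("em2_0.9.tar.gz", repos = NULL, type = "source")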

Suggested further reading

We will provide slides with exercises. No other material is required for this course beyond the accompanying slides and code. Suggested further reading from our group is listed below.
  1. An activation-based model of sentence processing as skilled memory retrieval. Richard L. Lewis and Shravan Vasishth. Cognitive Science, 29:1-45, May 2005.
  2. Computational principles of working memory in sentence comprehension. Richard L. Lewis, Shravan Vasishth, and Julie Van Dyke. Trends in Cognitive Sciences, 10(10):447-454, 2006.
  3. A framework for modeling the interaction of syntactic processing and eye movement control. Felix Engelmann, Shravan Vasishth, Ralf Engbert, and Reinhold Kliegl. Topics in Cognitive Science, 5(3):452-474, 2013.
  4. The determinants of retrieval interference in dependency resolution: Review and computational modeling. Felix Engelmann, Lena A. Jäger, and Shravan Vasishth. Under revision following review.
  5. A computational evaluation of sentence comprehension deficits in aphasia. Umesh Patil, Sandra Hanne, Frank Burchert, Ria De Bleser, and Shravan Vasishth. Cognitive Science, 40:5–50, 2016.
  6. When high-capacity readers slow down and low-capacity readers speed up: Working memory differences in unbounded dependencies. Bruno Nicenboim, Pavel Logačev, Carolina Gattei, and Shravan Vasishth. Frontiers in Psychology, 7(280), 2016. Special Issue on Encoding and Navigating Linguistic Representations in Memory.
  7. Working memory differences in long distance dependency resolution. Bruno Nicenboim, Shravan Vasishth, Reinhold Kliegl, Carolina Gattei, and Mariano Sigman. Frontiers in Psychology, 2015. Special Issue on Encoding and Navigating Linguistic Representations in Memory.
  8. Understanding underspecification: A comparison of two computational implementations. Pavel Logačev and Shravan Vasishth. Quarterly Journal of Experimental Psychology, 69(5):996-1012, 2016.
  9. What is the scanpath signature of syntactic reanalysis? Titus von der Malsburg and Shravan Vasishth. Journal of Memory and Language, 65:109-127, 2011.
  10. A multiple-channel model of task-dependent ambiguity resolution in sentence comprehension. Pavel Logačev and Shravan Vasishth. Cognitive Science, 2015.
  11. Scanpaths reveal syntactic underspecification and reanalysis strategies. Titus von der Malsburg and Shravan Vasishth. Language and Cognitive Processes, 28(10):1545-1578, 2013.
  12. Parallel processing and sentence comprehension difficulty. Marisa F. Boston, John T. Hale, Shravan Vasishth, and Reinhold Kliegl. Language and Cognitive Processes, 26(3):301-349, 2011.
  13. Processing Polarity: How the ungrammatical intrudes on the grammatical. Shravan Vasishth, Sven Bruessow, Richard L. Lewis, and Heiner Drenhaus. Cognitive Science, 32(4), 2008.
  14. Argument-head distance and processing complexity: Explaining both locality and antilocality effects. Shravan Vasishth and Richard L. Lewis. Language, 82(4):767-794, 2006.
  15. Human language processing: Symbolic models. Shravan Vasishth and Richard L. Lewis. In Keith Brown, editor, Encyclopedia of Language and Linguistics, volume 5, pages 410-419. Elsevier, 2006.