# Introduction to Sentence Comprehension

## To be held during the week of 20-26 August 2017, in Litomyšl, Czech Republic

This course is part of the [Summer School in Linguistics at Litomyšl, Czech Republic](http://www.lingvistika.cz/ssol/2017).

### Overview

I will give three lectures (70 minutes each, including discussion).

+ Instructor: Shravan Vasishth (`vasishth` at uni-potsdam dot de)
+ Dates: 22 and 24 August 2017
+ Location: [Summer School in Linguistics at Litomyšl, Czech Republic](http://www.lingvistika.cz/ssol/2017)
+ Slides: [lectures 1 and 2](http://www.ling.uni-potsdam.de/~vasishth/courses/pdfs/LitomyslVasishth.pdf), [lecture 3](http://www.ling.uni-potsdam.de/~vasishth/courses/pdfs/LitomyslVasishth2.pdf), and [Vasishth and Gelman talk slides (if there is time)](http://www.ling.uni-potsdam.de/~vasishth/courses/pdfs/vasishthgelmantalk.pdf)

### Lecture 1

When we read or hear a sentence, we immediately begin to build syntactic structure incrementally and to interpret what we are hearing. For example, when we read

- The lawyer examined ...

we attempt to predict upcoming material. If the word "by" appears next:

- The lawyer examined by ...

it triggers several interesting processes. First, the beginning of the sentence must now be treated as a reduced relative clause (The lawyer *who was* examined by ...), and the lawyer can no longer be considered the one doing the examining. Contrast the above pair of sentences with

- The evidence examined ...

Here, the appearance of "by" might be expected to be less surprising, because "evidence" may have a lower probability of being something that does the examining. These examples illustrate the kinds of questions that sentence comprehension research is concerned with: What parsing algorithms do humans use? How (if at all) do probabilistic expectations affect parsing difficulty? What (if any) is the role of working memory in parsing? Do we even build structure incrementally?
In this introductory lecture, I will discuss the empirical and theoretical issues that these questions raise. As background reading, see [Pickering, M. J., and Van Gompel, R. P. (2006). Syntactic parsing. Handbook of psycholinguistics, 2, 455-503.](https://pdfs.semanticscholar.org/378e/6fb9b7583a31827a010c7925fadfaea3c358.pdf#page=468)

### Lecture 2

In the second lecture, I will talk about one influential line of research in sentence comprehension: the investigation of interference effects. Consider the sentence:

- The worker was surprised that **the resident** who was living near the dangerous neighbor **was complaining** about the investigation.

Here, the subject of *was complaining* is *the resident*. It is widely believed that connecting these two elements during online parsing involves an associative cue-based retrieval process. One claim is that at the verb *was complaining*, the parser seeks out a noun phrase that is animate and the subject of the local clause. However, there are other animate nouns in the sentence that could mistakenly be accessed instead of the correct target, *the resident*. The confusion that arises in parsing when multiple candidates in memory match the retrieval cues is called similarity-based interference. We will review the literature supporting (and questioning) this idea. As background reading, see [Lena A. Jäger, Felix Engelmann, and Shravan Vasishth. Similarity-based interference in sentence comprehension: Literature review and Bayesian meta-analysis. Journal of Memory and Language, 94:316-339, 2017.](http://www.sciencedirect.com/science/article/pii/S0749596X17300049)

### Lecture 3

In this final lecture, I will discuss the literature on expectation-based sentence comprehension. [Hale 2001](http://ucrel.lancs.ac.uk/acl/N/N01/N01-1021.pdf) introduced the idea of surprisal, which formally expresses the intuition that rare continuations are surprising. For example, in the example from Lecture 1,

- The lawyer examined by ...
the transition to "by" is surprising because the reduced relative clause is the less frequent continuation. [Levy 2008](http://www.sciencedirect.com/science/article/pii/S0010027707001436) provided the first comprehensive evaluation of this idea. The role of expectations in sentence parsing has a long history predating Hale's work, but over the last ten years much effort has been expended on evaluating the predictions of the surprisal account and of a related idea, entropy reduction. I will talk about the empirical evidence for and against expectation-based processing accounts from languages such as English, German, Persian, Hindi, and Chinese. An example from the recent literature is [Lena A. Jäger, Zhong Chen, Qiang Li, Chien-Jer Charles Lin, and Shravan Vasishth. The subject-relative advantage in Chinese: Evidence for expectation-based processing. Journal of Memory and Language, 79-80:97-120, 2015.](http://www.sciencedirect.com/science/article/pii/S0749596X14001272)
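The core of the surprisal idea is simple: the surprisal of a word is the negative log probability of that word given its preceding context, surprisal(w) = -log2 P(w | context). A minimal Python sketch of the calculation for the two examples above; the conditional probabilities here are invented purely for illustration, not estimates from any corpus:

```python
import math

def surprisal(prob):
    """Surprisal in bits: the negative log2 of a word's conditional probability."""
    return -math.log2(prob)

# Hypothetical probabilities (invented for illustration only):
# after "The lawyer examined", a direct object is far more likely than "by";
# after "The evidence examined", "by" is a much more expected continuation.
p_by_given_lawyer = 0.05
p_by_given_evidence = 0.70

print(surprisal(p_by_given_lawyer))    # high surprisal: a rare continuation
print(surprisal(p_by_given_evidence))  # low surprisal: an expected continuation
```

On this toy model, "by" after "The lawyer examined" carries far more surprisal than after "The evidence examined", which is the pattern the expectation-based account predicts.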