As our built environments become increasingly complex, carrying out basic tasks such as identifying points or objects of interest in our surroundings can consume considerable time and cognitive resources. How can a system convert contextual information about a person's physical environment into natural language that helps them identify relevant entities in that environment and accomplish their tasks?
That is the main question my PhD research has addressed. To answer it, I designed, implemented, and evaluated interactive systems that generate effective natural-language discourse in situated contexts. My interests lie especially in situated natural language processing, spoken dialog systems, and multimodal human-computer interaction. My work in these areas explores logic-based methods (in particular, automated planning) as well as empirical ones.
I am also interested in problems in computational semantics, and have previously worked on textual entailment and on mining lexical-semantic knowledge from user-generated content.