Using listener gaze to augment speech generation in a virtual 3D environment

Maria Staudte, Alexander Koller, Konstantina Garoufi, and Matthew Crocker

In Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci), Sapporo, 2012.

Listeners tend to gaze at objects to which they resolve referring expressions. We show that this remains true even when these objects are presented in a virtual 3D environment in which listeners can move freely. We further show that an automated speech generation system that uses eyetracking information to monitor listeners' understanding of referring expressions outperforms comparable systems that do not draw on listener gaze.

Download

BibTeX Entry
@InProceedings{give-et-12,
	author = {Maria Staudte and Alexander Koller and Konstantina
		Garoufi and Matthew Crocker},
	title = {Using listener gaze to augment speech generation in a virtual {3D} environment},
	booktitle = {Proceedings of the 34th Annual Meeting of the
		Cognitive Science Society (CogSci)},
	address = {Sapporo},
	year = 2012
}
