Attention and Engagement Aware Multimodal Dialog Systems
Zhou Yu
Language Technologies Institute, Carnegie Mellon University
Thursday, August 6, 2015
4:00 p.m., ICSI Lecture Hall
Despite their ability to complete certain tasks, dialog systems still suffer from poor adaptation to users' engagement and attention. We observe human behaviors in different conversational settings to understand human communication dynamics and then transfer the knowledge to multimodal dialog system design. To focus solely on maintaining engaging conversations, we design and implement a non-task oriented multimodal dialog system, which serves as a framework for controlled multimodal conversation analysis. We design computational methods to model user engagement and attention in real time by leveraging automatically harvested multimodal human behaviors, such as smiles and speech volume. We aim to design and implement a multimodal dialog system to coordinate with users' engagement and attention on the fly via techniques such as adaptive conversational strategies and incremental speech production.
Bio:
Zhou (Jo) is a fifth-year Ph.D. student in the Language Technologies Institute at Carnegie Mellon University, advised by Prof. Alan W Black and Prof. Alex Rudnicky. Zhou works on multimodal dialog systems, audio-visual processing, and human-robot interaction. She received a B.S. in computer science and a B.A. in English language with a linguistics focus from Zhejiang University in 2011. She has also worked as a research intern at Educational Testing Service under the supervision of Prof. David Suendermann-Oeft, at Microsoft Research under the supervision of Eric Horvitz and Dan Bohus, and at the Institute for Creative Technologies at USC under the supervision of Prof. Louis-Philippe Morency.