
Research Spotlight: Diane Litman, Professor

Professor Diane Litman has been awarded an NSF grant through the Robust Intelligence and Human-Centered Computing programs. The three-year project, "An Affect-Adaptive Spoken Dialogue System that Responds Based on User Model and Multiple Affective States," is a collaboration with co-principal investigator Katherine Forbes-Riley, Research Associate at the Learning Research and Development Center, University of Pittsburgh.

An Affect-Adaptive Spoken Dialogue System that Responds Based on User Model and Multiple Affective States

There has been increasing interest in affective dialogue systems, motivated by the observation that in human-human dialogues, participants detect and respond, at least to some degree, to the emotions, attitudes, and metacognitive states of other participants. The goal of the proposed research is to improve the state of the art in affective spoken dialogue systems along three dimensions, drawing on the results of prior research in the wider spoken dialogue and affective system communities.

First, prior research has shown that not all users interact with a system in the same way; the proposed research hypothesizes that employing different affect adaptations for users with different domain aptitude levels will yield further performance improvement in affective spoken dialogue systems. Second, prior research has shown that users display a range of affective states and attitudes while interacting with a system; the proposed research hypothesizes that adapting to multiple user states will likewise yield further improvement. Third, while prior research has shown preliminary performance gains for affect adaptation in semi-automated dialogue systems, similar gains have not yet been realized in fully automated systems. The proposed research will use state-of-the-art empirical methods to build fully automated affect detectors. It is hypothesized that both fully and semi-automated versions of a dialogue system that either adapts to affect differently depending on user class, or that adapts to multiple user affective states, can improve performance over non-adaptive counterparts, with semi-automation generating the most improvement.
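As an illustrative sketch only (not the project's actual design), the adaptive strategy described above, in which a response is chosen based on both the user's aptitude class and a detected affective state, might be organized as follows. All class names, states, and responses here are hypothetical placeholders:

```python
# Hypothetical sketch of an affect-adaptive response policy.
# User classes, affective states, and adaptations are illustrative only.

# Different adaptations for users with different domain aptitude levels.
POLICY = {
    ("high_aptitude", "uncertain"):  "offer a brief hint",
    ("high_aptitude", "frustrated"): "acknowledge difficulty, restate question",
    ("low_aptitude", "uncertain"):   "give a worked sub-step",
    ("low_aptitude", "frustrated"):  "simplify the question and encourage",
}

def detect_affect(features):
    """Stand-in for a fully automated affect detector.

    A real detector would classify acoustic-prosodic and lexical
    features of the user's utterance; here we just read a
    precomputed label for illustration.
    """
    return features.get("label", "neutral")

def adapt_response(user_class, features):
    """Select an adaptation for this user class and detected state."""
    state = detect_affect(features)
    # Fall back to the non-adaptive behavior for neutral or
    # unrecognized (class, state) combinations.
    return POLICY.get((user_class, state), "continue dialogue normally")
```

A non-adaptive baseline would always return the fallback; the hypothesis is that conditioning the response on both user class and affective state improves dialogue performance.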

The research project will advance the state of the art in both spoken dialogue and computer tutoring technologies, while at the same time demonstrating any differing effects of affect-adaptive systems under ideal versus realistic conditions. More broadly, the research and resulting technology will lead to more natural and effective spoken dialogue-based systems, both for tutoring as well as for more traditional information-seeking domains. In addition, improving the performance of computer tutors will expand their usefulness and thus have substantial benefits for education and society.

The NSF Robust Intelligence (RI) program encompasses all aspects of the computational understanding and modeling of intelligence in complex, realistic contexts. The RI program advances and integrates the research traditions of artificial intelligence, computer vision, human language research, robotics, machine learning, computational neuroscience, cognitive science, and related areas.

The Human-Centered Computing (HCC) program explores creative ideas, novel theories, and innovative technologies that advance our understanding of the complex and increasingly coupled relationships between people and computing. HCC supports social and behavioral scientists as well as computer and information scientists whose research contributes to the design and understanding of novel computing technologies and systems.

Additional information about Dr. Litman's award can be found on the NSF Division of Information and Intelligent Systems website.

Read the University Times article about this award (November 12, 2009).

More information about Dr. Litman's research projects is available on her personal web page.
