People:
Director
- Diane Litman, Professor of Computer Science, Senior Scientist at LRDC, Co-Director of Graduate Program in Intelligent Systems
PhD Students
- Tazin Afrin, Computer Science
- Luca Lugini, Computer Science
- Ahmed Magooda, Computer Science
- Ravneet Singh, Computer Science
- Mingzhi Yu, Computer Science
- Haoran (Colin) Zhang, Computer Science
Undergraduate Students
- Sonia Cromp, Pitt BPhil
Current Faculty Collaborators
- Prof. Richard Correnti, School of Education, LRDC
- Prof. Amanda Godley, School of Education, LRDC
- Prof. Rebecca Hwa, Computer Science, Intelligent Systems
- Prof. Lindsay Clare Matsumura, School of Education, LRDC
- Prof. Muhsin Menekse, Purdue University
- Prof. Tim Nokes-Malach, Psychology, LRDC
- Dr. Susannah Paletz, University of Maryland
- Prof. Erin Walker, Computer Science, LRDC
Alumni: Research Associates / Postdocs
- Dr. Carrie Demmans Epp, LRDC and CIDDE (now at University of Alberta)
- Dr. Kate Forbes-Riley, LRDC
- Dr. Joel Tetreault, LRDC (now at Grammarly)
- Dr. Yao Xiong, LRDC and CIDDE
Alumni: PhD Students
- Hua Ai, Intelligent Systems, PhD 2009: User Simulation for Spoken Dialog System Development (now Manager of Data Science at Delta Airlines)
- Min Chi, Intelligent Systems, PhD 2009: Do Micro-Level Tutorial Decisions Matter: Applying Reinforcement Learning To Induce Pedagogical Tutorial Tactics (now Associate Professor, NC State University)
- Michael Lipschultz, Computer Science, PhD 2015: Adapting the Scheduling of Illustrations and Graphs to Learners in Conceptual Physics Tutoring (now Data Scientist, Naval Air Systems Command)
- Wencan Luo, Computer Science, PhD 2017: Automatic Summarization for Student Reflective Responses (now at Google)
- Huy Nguyen, Computer Science, PhD 2017: Context-aware Argument Mining and Its Application in Education (now at AppZen)
- Zahra Rahimi, Intelligent Systems, PhD 2019: Linguistic Entrainment in Multi-Party Spoken Dialogues (now at Pandora)
- Mihai Rotaru, Computer Science, PhD 2008: Applications of Discourse Structure for Spoken Dialogue Systems (now Head of Research and Development, Textkernel)
- Arthur Ward, Intelligent Systems, PhD 2010: Reflection and Learning Robustness in a Natural Language Conceptual Physics Tutoring System
- Wenting Xiong, Computer Science, PhD 2015: Helpfulness-Guided Review Summarization (now at Facebook)
- Fan Zhang, Computer Science, PhD 2017: Towards Building an Intelligent Revision Assistant for Argumentative Writings (now at Google)
Alumni: Masters Students
- Heather Friedberg, Computer Science (MS Project: Lexical Entrainment and Success in Student Engineering Groups), 2012 (now at Google)
- Beatriz Maeireizo-Tokeshi, Computer Science (MS Project: Applying Co-training for Predicting Student Emotions with Spoken Dialogue Data), 2005
- Amruta Purandare, Intelligent Systems
Alumni: Undergraduate Students (NSF Research Experiences for Undergraduates Program; University of Pittsburgh First Experiences in Research Program)
- Cassandra Boutin (Nonprofit Management, Psychology, Spanish)
- Alexandra Brusilovsky, became graduate student, CMU
- Samantha Corcoran, became graduate student, University of Pittsburgh
- Zane Denmon, First Experiences in Research Program
- Joanna Drummond, PhD 2017, University of Toronto (now at Intel)
- Heather Friedberg, MS 2012, University of Pittsburgh (see above)
- Simran Gidwani, NSF REU
- Paige Haring (Applied Mathematics, Computer Science, Linguistics)
- Anish Kumar, Linguistics & Spanish (now at Amazon)
- Mackenzie Marcinko, Pitt First Experiences in Research Program
- Gregory Nicholas, became graduate student, Brown University
- Nathan Ong, now graduate student, University of Pittsburgh
- Rehana Saifee, Pitt First Experiences in Research Program, NSF REU
- Sarah Serfilippi, Pitt First Experiences in Research Program
- Chris Thomas, now graduate student, University of Pittsburgh
- Jesse Thomason, now Postdoctoral Researcher, University of Washington
- Zhengming Wang, Pitt First Experiences in Research Program
- Robert Wei
- Shujun Yang, First Experiences in Research Program
- Yu Yang, Pitt First Experiences in Research Program
- Zinan Zhuang (Computer Science)
Other Alumni
- Stefani Allegretti, research staff, LRDC
- Alison Huettner, consultant
- Caitlin Rice, graduate student researcher, LRDC and Psychology
- Scott Silliman, programmer, LRDC
Past Visitors
- Matthew Frampton, PhD Student, University of Edinburgh
- Reva Freedman, Professor, Northern Illinois University
- Ryoko Tokuhisa, Toyota Central R&D Labs (also PhD Student at Nara Institute of Science and Technology)
News:
- March 2020: Provost's Doctoral Mentoring Award Recipient
- December 2019: Featured in What will the 2020s Bring for AI?
- November 2019: Update to Teams Corpus (multi-party audio now with dialogue transcripts and self-report performance measures) is available for scientific purposes (download)
- October 2019: Diane Litman as Panelist, AI in Education, German Mittelstand East Coast Industry Forum
- October 2019: New NSF grant awarded (on collaborative argumentation)
- July 2019: LRDC internal grant awarded (on collaborative dialogue with a teachable robot)
- May 2019: Zahra Rahimi defends her dissertation! (the defense)
- October 2018: 39 women doing amazing research in computational social science, Pittwire
- July 2018: New IES and new NSF grants awarded (on summarization and collaborative argumentation, respectively)
- May 2018: Diane Litman was invited to the AI for Good Global Summit organized by the United Nations' ITU
- February 2018: Diane Litman as Panelist, Next Big Steps in AI for Education, 8th Symposium on Educational Advances in Artificial Intelligence
- December 2017: Diane Litman was selected as Fellow, Association for Computational Linguistics
- October 2017: Diane Litman was elected to the Executive Committee of the International Artificial Intelligence in Education Society
- October 2017: Zahra Rahimi defends her PhD proposal! (Entrainment Measures for Multi-Party Spoken Dialogues)
- Summer 2017: Our forthcoming paper was shortlisted for an ISCA Best Student Paper Award (Zahra Rahimi, Anish Kumar, Diane Litman, Susannah Paletz and Mingzhi Yu, Entrainment in Multi-Party Spoken Dialogues at Multiple Linguistic Levels, Proceedings of Interspeech, Stockholm, Sweden, August 2017)
- April 2017: Wencan Luo, Huy Nguyen, and Fan Zhang all defend their dissertations!
- March 2017: Congratulations to Zahra Rahimi for being awarded a Mellon Fellowship for the 2017-2018 academic year. Andrew Mellon Predoctoral Fellowships are awarded to students of exceptional ability and promise who are enrolled or wish to enroll at the University of Pittsburgh in programs leading to the Ph.D. in various fields of the humanities, the natural sciences, and the social sciences.
- Archived News
Data/Code:
- ArgRewrite: annotated revisions for studying argumentative writing (download)
- CourseMirror: summaries of student reflections (download)
- Revision Quality: annotated revisions for studying sentence-level revision improvement in argumentative writing (download)
- Teams: multi-party and multi-modal dialogue entrainment (download)
- Uncertainty: annotated student turns in tutorial dialogue (download)
- CO-ATTN: code for "Co-Attention Based Neural Network for Source-Dependent Essay Scoring" (GitHub)
Currently Funded Projects:
Response-to-Text Tasks to Assess Students' Use of Evidence and Organization in Writing: Using Natural Language Processing for Scoring Writing and Providing Feedback At-Scale
(September 2016 - August 2019)
Researchers for this project will develop and validate an automated assessment of students' analytic writing skills in response to reading text. During prior work the researchers studied an assessment of students' analytic writing to understand progress toward outcomes in the English Language Arts Common Core State Standards, and to understand effective writing instruction by teachers. The researchers focused on response-to-text assessment because: it is an essential skill for secondary and post-secondary success; current assessments typically examine writing outside of responding to text; and increased attention on analytic writing in schools will result in improved interventions. Recent advances in artificial intelligence offer a potential way forward through automated essay scoring of students' analytic writing at scale, along with feedback to improve both student writing and teachers' instruction. This IES grant is in collaboration with Rip Correnti and Lindsay Clare Matsumura.
Development of Human Language Technologies to Improve Disciplinary Writing and Learning through Self-Regulated Revising
(September 2017 - July 2020)
Writing and revising are essential parts of learning, yet many college
students graduate without
demonstrating improvement or mastery of academic writing. This project
explores the feasibility
of improving students' academic writing through a revision environment
that integrates natural
language processing methods, best practices in data visualization and
user interfaces, and
current pedagogical theories. The environment will support and
encourage students to develop
self-regulation skills that are necessary for writing and revising,
including goal-setting, selection
of writing strategies, and self-monitoring of progress. As a learning
technology, the environment
can be applied on a large scale, thereby improving the writing of
diverse student populations,
including English learners.
Three stages of investigation are planned. First, to analyze data on
students' revision behaviors, a series of experiments is conducted to
study interactions between students and variations of the revision
writing environment. Second, the
collected data forms the gold
standard for developing an end-to-end system that automatically
extracts revisions between
student drafts and identifies the goal for each revision. Multiple
extraction algorithms are
considered, including phrasal alignment based on semantic similarity
metrics and deep learning
approaches. To identify the goal of a revision, a supervised
classifier is trained from the gold
standard. A diverse set of features and the representations of the
identified goals (e.g.,
granularity, scope) are explored. In addition to the
"extract-then-classify" pipeline, an alternative
joint sequence labeling model is also developed. The labeling of
sequences is used to
recognize revision goals and the sequences are mutated to generate
possible corrections of
sentence alignments for revision extraction. The writing environment
is iteratively refined,
augmenting the interface prototyping through frequent user
studies. Third, a complete
end-to-end system that integrates the most successful component models
is deployed in
college-level writing classes. Student progress is tracked across
multiple assignments.
This NSF grant is in collaboration with
Amanda Godley and
Rebecca Hwa.
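The "extract-then-classify" pipeline described above can be sketched in a few lines. The sketch below aligns sentences across two drafts with a similarity metric and labels each as kept, modified, added, or deleted; Jaccard token overlap is a deliberately simple stand-in for the richer semantic similarity metrics and deep learning approaches the project actually explores, and the example drafts are invented.

```python
# Toy sketch of the "extract" step of an extract-then-classify revision
# pipeline: align sentences across two drafts, then label each pair.
# Jaccard token overlap is an illustrative stand-in for real semantic
# similarity metrics.

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two sentences."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def extract_revisions(draft1, draft2, threshold=0.5):
    """Greedy one-to-one alignment; unmatched sentences become ADD/DELETE."""
    revisions, used = [], set()
    for s1 in draft1:
        # best not-yet-aligned counterpart in the new draft
        best = max((j for j in range(len(draft2)) if j not in used),
                   key=lambda j: jaccard(s1, draft2[j]), default=None)
        if best is not None and jaccard(s1, draft2[best]) >= threshold:
            used.add(best)
            label = "KEEP" if s1 == draft2[best] else "MODIFY"
            revisions.append((label, s1, draft2[best]))
        else:
            revisions.append(("DELETE", s1, None))
    for j, s2 in enumerate(draft2):
        if j not in used:
            revisions.append(("ADD", None, s2))
    return revisions

d1 = ["The author argues well .", "This claim lacks evidence ."]
d2 = ["The author argues well .",
      "This claim lacks supporting evidence from the text .",
      "Overall the essay is convincing ."]
for label, old, new in extract_revisions(d1, d2):
    print(label, "|", old, "->", new)
```

A real system would then run a supervised classifier over each MODIFY/ADD/DELETE pair to identify the revision's goal (e.g., adding evidence vs. fixing fluency), which is the "classify" half of the pipeline.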
Enhancing Undergraduate STEM Education by Integrating Mobile Learning Technologies with Natural Language Processing
(August 2018 - August 2022)
In this project, the researchers will refine an existing mobile
application, CourseMIRROR, for use in postsecondary STEM lecture
courses. This application aims to improve deep learning by encouraging
students to reflect on course content and receive immediate feedback
on their reflections. Often, in large lecture courses, students'
ability to reflect on course content and get feedback on these
reflections is limited by class size and instructor availability. At
the same time, instructors often don't have access to students'
reflections, so they cannot correct misunderstandings or build on
class knowledge. By leveraging natural language processing and mobile
learning technologies, CourseMIRROR aims to overcome these barriers
and help students and instructors gain insights into what was or was
not learned.
This IES grant is in collaboration with
Muhsin Menekse and Ala Samarapungavan (Purdue).
Discussion Tracker: Development of Human Language Technologies to Improve the Teaching of Collaborative Argumentation in High School English Classrooms
(September 2018 - September 2022)
Collaborative argumentation, or the building of evidence-based,
reasoned knowledge and solutions through dialogue, is essential to
individual learning as well as group
problem-solving. Student-centered discussions and elaborated student
talk during collaborative argumentation are indicators of robust
learning opportunities in STEM and other disciplines. Furthermore,
the ability to engage in collaborative problem-solving is a
foundational skill in STEM fields and a defining characteristic of
21st century workplaces, especially those in the technology and
engineering fields, and a skill that employers report that few
recent hires possess. However, teaching collaborative argumentation
is an advanced skill that many high school teachers struggle to
develop. We aim to develop an innovative technology called
Discussion Tracker, a web-based system that leverages recent
advances in human language technologies (HLT) to provide teachers
with automatically generated data about the quality of students'
collaborative argumentation in their classrooms and to support
teachers' learning about collaborative argumentation. This NSF
grant is in collaboration with Amanda Godley.
Studying Collaborative Dialogue with a Teachable Robot in a Mathematics Domain
(July 2019 - June 2020)
In this project, we are interested in positioning the
robot as a tutee that one or more human learners teach about a
subject domain using spoken dialogue. This project proposes to
build on existing learning sciences work within the exciting
research spaces of teachable agents, human-robot interaction,
spoken dialogue interaction, and collaborative learning, by
examining a more complex scenario: multiple students
collaborate with a robot through spoken interaction.
This is an internal LRDC grant, and is in collaboration with Erin Walker and Tim Nokes-Malach.
Completed Projects:
Adding Spoken Language to a Text-Based Dialogue Tutor
(Nov. 2003 - Sep. 2006)
The goal of this research is to generate an
empirically-based understanding of the ramifications of adding spoken
language capabilities to text-based dialogue tutors, and to understand
how these implications might differ in human-human and human-computer
spoken interactions. This research will explore the relative effectiveness
of speech versus text-based tutoring in the context of ITSPOKE, a
speech-based dialogue system that uses a text-based system for
tutoring conceptual physics (VanLehn et al., 2002) as its
"back-end." The results of this work will demonstrate whether
spoken dialogues yield increased performance compared to text with
respect to a variety of evaluation measures, whether the same or
different student and tutor behaviors correlate with learning gains in
speech and text, and how such findings generalize both across and
within human and computer tutoring conditions. These results will
impact the development of future dialogue tutoring systems
incorporating speech, by highlighting the performance gains that can
be expected, and the requirements for achieving such gains.
TuTalk: Infrastructure for authoring and experimenting with natural language dialogue in tutoring systems and learning research (project homepage)
(Dec. 2004 - Nov. 2006)
The focus of our proposed work
is to provide an infrastructure that will allow learning researchers
to study dialogue in new ways and for educational technology
researchers to quickly build dialogue based help systems for their
tutoring systems. Most tutorial dialogue systems that have undergone
successful evaluations to date (CIRCSIM, AutoTutor, WHY-Atlas, the
Geometry Explanation Tutor) represent development efforts of many
man-years. These systems were instrumental in pushing the technology
forward and in proving that tutorial dialogue systems are feasible and
useful in realistic educational contexts, although not always provably
better on a pedagogical level than the more challenging alternatives
to which they have been compared. We are now entering a new phase in
which we as a research community must not only continue to improve the
effectiveness of basic tutorial dialogue technology but also provide
tools that support investigating the effective use of dialogue as a
learning intervention as well as application of tutorial dialogue
systems by those who are not dialogue system researchers. We propose
to develop a community resource to address all three of these problems
on a grand scale, building upon our prior work developing both basic
dialogue technology and tools for rapid development of running
dialogue systems. This grant is led by
Pamela Jordan at the
University of Pittsburgh
and Carolyn Rose
at Carnegie Mellon University.
Does Treating Student Uncertainty as a Learning Impasse Improve Learning in Spoken Dialogue Tutoring?
(Oct. 2006 - May 2007)
Most existing tutoring systems respond based only on the correctness
of student answers. Although the tutoring community has shown that
incorrectness and uncertainty both represent learning impasses (and
thus opportunities to learn), and has also shown correlations between
uncertainty and learning, to date there have been very few controlled
experiments investigating whether system responses to student
uncertainty improve learning. We thus propose a small controlled study
to test whether this hypothesis holds true, under "ideal" system
conditions. The study uses a Wizard of Oz (WOZ) version of a
qualitative physics spoken dialogue tutoring system, where the human
Wizard performs speech recognition, natural language understanding,
and recognition of uncertainty, for each student answer. In the
experimental condition, the Wizard then tells the system that correct
but uncertain answers are incorrect, causing the system to respond to
both uncertain and incorrect student answers in the same way, namely
with further dialogue, thereby reinforcing the student's
understanding of the principle(s) under discussion. In the first
control condition, the system responds only to incorrect student
answers in this way. In the second control condition, the
system responds to a percentage of correct answers in this way, to
control for the additional tutoring in the experimental
condition.
Monitoring Student State in Tutorial Spoken Dialogue (Sep. 2003
- Aug. 2007)
This research investigates the feasibility and utility of monitoring
student emotions in spoken dialogue tutorial systems. While human
tutors respond to both the content of student utterances and
underlying perceived emotions, most tutorial dialogue systems cannot
detect student emotions, and furthermore are text-based, which may
limit their success at emotion prediction. While there has been
increasing interest in identifying problematic emotions
(e.g. frustration, anger) in spoken dialogue applications such as call
centers, little work has addressed the tutorial domain. The PIs are
investigating the use of lexical, syntactic, dialogue, prosodic and
acoustic cues to enable a computer tutor to automatically predict and
respond to student emotions. The research is being performed in the
context of ITSPOKE, a speech-based tutoring dialogue system for
conceptual physics. The PIs are recording students interacting with
ITSPOKE, manually annotating student emotions in these as well as in
human-human dialogues, identifying linguistic and paralinguistic cues
to the annotations, and using machine learning to predict emotions
from potential cues. The PIs are then deriving strategies for adapting
the system's tutoring based upon emotion identification.
The major scientific contribution will be an understanding of whether
cues available to spoken dialogue systems can be used to predict
emotion, and ultimately to improve tutoring performance. The results
will be of value to other applications that can benefit from
monitoring emotional speech. Progress towards closing the performance
gap between human tutors and current machine tutors will also expand
the usefulness of current computer tutors. This grant is in
collaboration with Julia
Hirschberg and her group at Columbia University.
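The cue-based prediction idea above can be illustrated with a toy feature extractor for a student turn. The cue inventories (hedge words, fillers) and the one-rule "classifier" are hypothetical stand-ins for the lexical, syntactic, prosodic, and acoustic features and the trained machine-learning models the project actually investigates.

```python
# Illustrative feature extraction for predicting student affect from a
# dialogue turn. HEDGES/FILLERS and the decision rule are invented
# examples, not the project's actual feature set or model.

HEDGES = {"maybe", "guess", "think", "possibly", "probably"}
FILLERS = {"um", "uh", "er", "hmm"}

def turn_features(text: str) -> dict:
    """Compute a few simple lexical cues from one student turn."""
    tokens = text.lower().replace("?", " ?").split()
    return {
        "n_tokens": len(tokens),
        "n_hedges": sum(t in HEDGES for t in tokens),
        "n_fillers": sum(t in FILLERS for t in tokens),
        "is_question": "?" in tokens,
    }

def looks_uncertain(feats: dict) -> bool:
    # Toy decision rule; a real system trains a classifier on
    # manually annotated student turns instead.
    return feats["n_hedges"] + feats["n_fillers"] > 0 or feats["is_question"]

print(turn_features("um I guess the force is maybe constant ?"))
```

In the actual work, features like these (plus pitch, energy, and timing from the speech signal) feed a learned model whose predictions the tutor can then adapt to.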
Tutoring Scientific Explanations via Natural
Language Dialogue (project homepage)
(Jan. 2004 - Dec. 2007)
It is widely acknowledged, both in academic studies and the
marketplace, that the most effective form of education is the
professional human tutor. A major difference between human tutors and
computer tutors is that only human tutors understand unconstrained
natural language input. Recently, a few tutoring systems have been
developed that carry on a natural language (NL) dialogue with
students. Our research problem is to find ways to make NL-based
tutoring systems more effective. Our basic approach is to derive new
dialogue strategies from studies of human tutorial dialogues,
incorporate them in an NL-based tutoring system, and determine if they
make the tutoring system more effective. For instance, some studies
are determining if learning increases when human tutors are
constrained to follow certain strategies. In order to incorporate the
new dialogue strategies into our existing text and spoken NL-based
tutoring systems, two completely new modules are being developed. One
new module will interpret student utterances using a large directed
graph of propositions called an explanation network, which is halfway
between the shallow and deep representations of knowledge that are
currently used. The second new module uses machine learning to improve
the selection of dialogue management strategies. The research is thus
a multidisciplinary effort whose intellectual merit lies in new
results in the cognitive psychology of human tutoring, in the
technology of NL processing, and in the design of effective tutoring
systems. Improved NL-based tutoring systems could have a broad impact
on education and society. This grant is in
collaboration with Kurt VanLehn,
Micheline Chi, and
Pamela Jordan at the
Learning Research and Development Center, University of Pittsburgh,
and with
Carolyn Rose
(now at CMU).
Cohesion in Tutorial Dialogue and its Impact on
Learning (Oct. 2006 - June 2009)
Research on the factors that make one-on-one tutoring a very effective
mode of instruction has converged on an important finding: that the
critical term in "tutorial interaction" is "interaction." That is,
what the tutor says or does during tutoring, and what the student says
or does are less important than the dynamic, coordinated interplay
between their dialogue turns. It is now important to identify the
discourse mechanisms that drive highly interactive human tutoring, so
that these mechanisms can be simulated by natural-language dialogue
engines in intelligent tutoring systems (ITSs). In the first stage of
this project, we will analyze a corpus of naturalistic tutorial
dialogues to accomplish this goal. Specifically, we will identify the
mechanisms that achieve cohesion in tutorial dialogues, since highly
interactive tutorial dialogue is intrinsically highly cohesive. In
the second stage, we will run a series of controlled studies to test
the hypothesis that more highly cohesive tutorial dialogue is more
effective for promoting learning than less cohesive dialogue, and to
assess the effectiveness of a few selected mechanisms of cohesion.
Finally, in the third stage of the project, we will explore the extent
to which database tools developed by the computational linguistics
community (e.g., WordNet and FrameNet) can automatically tag cohesion
in tutorial dialogue. We will also extend these tools and develop
algorithms that will allow them to be used to automatically generate
cohesive tutor turns for a small sample of student turns, as a first
step towards developing a natural-language dialogue engine that can
use these tools to generate highly cohesive tutorial dialogue.
This grant is in collaboration with
Sandra Katz.
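The third stage's idea of automatically tagging cohesion can be sketched as follows: record a cohesive tie between adjacent turns whenever a content word repeats or two words are synonyms. The tiny SYNONYMS table is a hypothetical stand-in for a WordNet lookup, and the example turns are invented.

```python
# Minimal sketch of tagging lexical cohesion between adjacent dialogue
# turns. SYNONYMS is an invented stand-in for a WordNet-style lookup.

SYNONYMS = {("velocity", "speed"), ("force", "push")}
STOPWORDS = {"the", "a", "is", "of", "so", "it", "and", "to"}

def cohesive_ties(turn1: str, turn2: str):
    """Return (tie_type, word1, word2) tuples linking the two turns."""
    w1 = [w for w in turn1.lower().split() if w not in STOPWORDS]
    w2 = [w for w in turn2.lower().split() if w not in STOPWORDS]
    ties = []
    for a in w1:
        for b in w2:
            if a == b:
                ties.append(("repetition", a, b))
            elif (a, b) in SYNONYMS or (b, a) in SYNONYMS:
                ties.append(("synonymy", a, b))
    return ties

tutor = "so the velocity increases"
student = "the speed increases because of gravity"
print(cohesive_ties(tutor, student))
```

A dialogue engine built on this kind of tagging could then prefer candidate tutor turns that maximize ties with the student's previous turn, which is the generation direction the project proposes.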
Improving Learning from Peer Review with NLP and ITS Techniques
(July 2009 - June 2011)
SWoRD is a web-based system to support peer reviewing in a wide variety of
disciplinary classroom settings. One result of prior research with SWoRD
is an enormous database of written materials that are ripe for analysis
and exploitation in support of research on natural language processing
(NLP), intelligent tutoring systems (ITS), cognitive science, educational
data mining, and improving learning from peer review. In this project we
will both analyze existing SWoRD-generated data, and develop an improved
version of SWoRD for use in further experimentation. In particular, we
will explore using SWoRD to teach substantive skills in domains involving
ill-defined problems, and will explore techniques for automatically
identifying key concepts and flagging issue understanding. Second, given a
SWoRD toolkit of what can be accomplished robustly with peer interactions,
we will explore the use of natural language processing to automatically
support and improve those interactions. Finally, we will develop a new
version of the SWoRD program that incorporates improved features and
control facilities, and that incorporates Artificial Intelligence
techniques to improve learning in a variety of ways. This is an internal LRDC grant, and is in collaboration with
Christian Schunn and Kevin Ashley.
Adapting to Student Uncertainty over and above Correctness in a Spoken Tutoring Dialogue System
(Sep. 2006 - Aug. 2011)
This research investigates whether responding to student uncertainty
over and above correctness improves learning during computer
tutoring. The investigation is performed in the context of a spoken
dialogue tutoring system, where student speech provides many
linguistic cues (e.g. intonation, pausing, word usage) that
computational linguistics research suggests can be used to detect
uncertainty. Intelligent tutoring systems research suggests that
uncertainty is part of the learning process, and has hypothesized that
to increase system effectiveness, it is critical to respond to more
than correctness. However, most existing tutoring systems respond only
to student correctness, and few controlled experiments have yet
investigated whether also responding to uncertainty can improve
learning.
This research designs and implements two different enhancements to the
spoken dialogue tutoring system, to test two hypotheses in the
tutoring literature concerning how tutors can effectively respond to
uncertainty over and above correctness. The first hypothesis is that
student uncertainty and incorrectness both represent learning
impasses, i.e., opportunities to improve understanding. This
hypothesis is addressed with an enhanced system version that treats
uncertainty in the same way that incorrectness is currently treated
(i.e., with additional subdialogue to increase understanding). The
second hypothesis is that more optimal responses can be developed by
modeling how human tutor responses to correctness change when the
student is uncertain. This hypothesis is addressed by analyzing human
tutor dialogue act responses (i.e. content and presentation) to
student uncertainty over and above correctness in an existing tutoring
corpus, then implementing these responses in a second enhanced system
version. Two controlled experiments are then performed. The first
tests the relative impact of the two adaptations on learning using a
Wizard of Oz version of the system, with a human (Wizard) detecting
uncertainty and performing speech recognition and language
understanding. The second experiment tests the impact of the
best-performing adaptation from the first experiment in the context of
the real system, with the system processing the speech and language
and detecting uncertainty in a fully automated manner.
The major intellectual contribution of the research is to demonstrate
whether significant improvements in learning are achieved by adapting
to student uncertainty over and above correctness during tutoring, to
advance the state of the art by fully automating and evaluating user
uncertainty detection and adaptation in a working spoken dialogue
system, and to investigate any different effects of this adaptation
under ideal versus actual system conditions.
This NSF grant is in collaboration with Kate Forbes-Riley.
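The first hypothesis above (uncertainty and incorrectness both signal a learning impasse) amounts to a small change in the tutor's response policy, sketched below. The function name and response strings are illustrative, not taken from the actual system.

```python
# Sketch of the "uncertainty as impasse" adaptation: the experimental
# policy remediates answers that are incorrect OR correct-but-uncertain,
# while the control policy responds to incorrectness only.

def respond(correct: bool, uncertain: bool, adapt_to_uncertainty: bool) -> str:
    """Pick the tutor's next move for one student answer."""
    impasse = (not correct) or (adapt_to_uncertainty and uncertain)
    if impasse:
        return "remediation subdialogue"   # reinforce the principle
    return "brief confirmation"            # move on to the next question

# Control system ignores uncertainty over and above correctness:
print(respond(correct=True, uncertain=True, adapt_to_uncertainty=False))
# Experimental system treats correct-but-uncertain as an impasse:
print(respond(correct=True, uncertain=True, adapt_to_uncertainty=True))
```

The second enhanced system version replaces this uniform remediation with response choices modeled on how human tutors vary content and presentation when a correct answer sounds uncertain.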
Improving a Natural-language Tutoring System
That Engages Students in Deep Reasoning Dialogues About Physics
(project homepage) (June 2010-May 2013)
Recent studies show that U.S. students lag behind students in other developed countries in math and science. Because one-on-one tutoring has been shown to be a highly effective form of instruction, many educators and education policy makers have looked to intelligent tutoring systems (ITSs) as a means of providing cost-effective, individualized instruction to students that can improve their conceptual understanding of, and problem-solving skills in, math and science. However, even though many ITSs have been shown to be effective, they are still not as effective as human tutors.
The goal of this Cognition and Student Learning development project is to take a step towards meeting President Obama's challenge to produce "learning software as effective as a personal tutor." We will do this by building an enhanced version of a natural-language dialogue system that engages students in deep-reasoning, reflective dialogues after they solve quantitative problems in Andes, an intelligent tutoring system for physics. Improvements to this system will focus on addressing a key limitation of natural-language (NL) tutoring systems: although these systems are "interactive" in the sense that they try to elicit explanations from students instead of lecturing to them, automated tutors do not align their dialogue turns with those of the student to the same degree, and in the same ways, that human tutors do. In particular, automated tutors often fail to reuse parts of the student's dialogue turns in their own turns, to adjust the level of abstraction that the student is working from when the student is over-generalizing or missing important distinctions between concepts, and to abstract or specialize correct student input when doing so might enhance the student's understanding. Empirical research shows that these forms of lexical and semantic alignment in human tutoring predict learning. The main outcome of this development effort will be a fully working, prototype reflective dialogue version of Andes that can carry out these functions and serve as a research platform for a future study that compares the effectiveness of the enhanced NL tutoring system with the current system, which lacks these alignment capabilities--thereby allowing us to test the hypothesis that it is not interaction per se that explains the effectiveness of human tutoring, but how it is carried out.
The enhanced version of this reflective dialogue system will be developed through an iterative process of preparing a prototype for experienced physics teachers and students to try out using the "Wizard of Oz" paradigm, identifying cases in which the system does not work as intended (e.g., the tutor prompts the student to generalize or make distinctions when this is not warranted by the discourse context), refining the software to correct these problems, and testing the revised software in a subsequent field trial. The subject pool for these trials will be students enrolled in a first-year physics course at the University of Pittsburgh and high school students taking physics in Pittsburgh urban and suburban schools. During the third (final) year of the project, we will collect pilot data that addresses the feasibility of implementing the system in authentic high school physics classes, and the promise of the system to increase students' conceptual understanding of physics and ability to solve physics problems. The latter will be determined by comparing students' pre- and post-test performance on measures of conceptual understanding and problem-solving ability in physics, and by comparing the performance of students who use the current and enhanced version of the system on these measures.
This IES grant is in collaboration with
Sandra Katz,
Pamela Jordan, and
Michael Ford.
Keeping Instructors Well-Informed in Computer-Supported Peer Review
(project homepage)
(June 2011-June 2013)
From the instructor's viewpoint, a class writing assignment is a black box. Until instructors actually read the first or final drafts, they do not have much information about how well the assignment has succeeded as a pedagogical activity, and even then, it is hard to get a complete picture. Computer-supported peer review systems such as SWoRD, a scaffolded system that helps students write higher-quality compositions in classroom assignments, can help in this regard. The goal of this project is to develop and evaluate methods to provide instructors with a comprehensive overview of the progress of a class writing assignment in terms of how well students understand the issues based on structured reviewing rubrics, the feedback students provide and receive in the peer review process, and machine learning and computational linguistics analysis of the resulting texts. The SWoRD-based peer-review system will present the instructor's overview via a kind of "Teacher-side Dashboard" that will summarize salient information for the class as a whole, cluster students based on common features of their texts, and enable instructors to delve into particular students' writings more effectively in a guided manner.
This is an internal LRDC grant, and is in collaboration with
Kevin Ashley, Christian Schunn and Jingtao Wang.
RI: Small: An Affect-Adaptive Spoken Dialogue System that Responds Based on User Model and Multiple Affective States
(September 2009 - August 2013)
There has been increasing interest in affective dialogue systems, motivated by the belief that in human-human dialogues, participants seem to be (at least to some degree) detecting and responding to the emotions, attitudes and metacognitive states of other participants. The goal of the proposed research is to improve the state of the art in affective spoken dialogue systems along three dimensions, by drawing on the results of prior research in the wider spoken dialogue and affective system communities. First, prior research has
shown that not all users interact with a system in the same way; the proposed research hypothesizes that employing different affect adaptations for users with different domain aptitude levels will yield further performance improvement in affective spoken dialogue systems. Second, prior research has shown that users display a range of affective states and attitudes while interacting with a system; the proposed research hypothesizes that adapting to multiple user states will yield further performance improvement in affective spoken dialogue systems. Third, while prior research has shown preliminary performance gains for affect adaptation in semi-automated dialogue systems, similar gains have not yet been realized in fully automated systems. The proposed research will use state-of-the-art empirical methods to build fully automated affect detectors. It is hypothesized that both fully and semi-automated versions of a dialogue system that either adapts to affect differently depending on user class, or that adapts to multiple user affective states, can improve performance compared to non-adaptive counterparts, with semi-automation generating the most improvement. The three hypotheses will be investigated in the context of an existing spoken dialogue tutoring system that adapts to the user state of uncertainty. The task domain is conceptual physics typically covered in a first-year physics course (e.g., Newton's Laws, gravity, etc.). To investigate the first hypothesis, a first enhanced system version will be developed; it will use the existing uncertainty adaptation for lower aptitude users with respect to domain knowledge, and a new uncertainty adaptation will be developed and implemented to be employed for higher aptitude users.
To investigate the second hypothesis, a second enhanced system version will be developed; it will use the existing uncertainty adaptation for all turns displaying uncertainty, and a new disengagement adaptation will be developed and implemented to be employed for all student turns displaying a second state, disengagement. A controlled experiment with the two enhanced systems will then be conducted in a Wizard-of-Oz (WOZ) setup, with a human Wizard detecting affect and performing speech recognition and language understanding. To investigate the third hypothesis, a second controlled experiment will be conducted, which replaces the WOZ system versions with fully-automated systems.
The major intellectual contribution of this research will be to demonstrate whether significant performance gains can be achieved in both partially and fully-automated affective spoken dialogue tutoring systems: 1) by adapting to user uncertainty based on user aptitude levels, and 2) by adapting to multiple user states hypothesized to be of primary importance within the tutoring domain, namely uncertainty and disengagement. The research project will thus advance the state of the art in both spoken dialogue and computer tutoring technologies, while at the same time demonstrating any differing effects of affect-adaptive systems under ideal versus realistic conditions. More broadly, the research and resulting technology will lead to more natural and effective spoken dialogue-based systems, both for tutoring as well as for more traditional information-seeking domains. In addition, improving the performance of computer tutors will expand their usefulness and thus have substantial benefits for education and society.
This NSF grant is in collaboration with Kate Forbes-Riley.
Peer Review Search & Analytics in MOOCs via Natural Language Processing
(February 2014)
Peer assessments provided by students are widely used in
massive open online courses (MOOCs) due to the difficulty of
fully-automating assessment for many types of assignments. However,
the
use of student assessment provides an overwhelming amount of textual
information for instructors to process. The proposed research will
develop Natural Language Processing methods to support search and
large-scale analytics of student assessment comments in MOOCs.
My liaison for this Google Faculty Research Award is
Daniel Russell.
Response-to-Text Prompts to Assess Students' Writing Ability: Using Natural Language Processing for Scoring Writing at Scale
(July 2013 - June 2015)
Assessing analytic writing in response to text (RTA) is a means for
understanding students’ analytic writing ability and for measuring
effective teaching.
Current writing assessments typically examine “content-free” writing (i.e.,
writing in response to open-ended prompts divorced from text), although
prior work demonstrates that it is also possible to administer and rate student
writing in response-to-text.
However, there is a significant barrier to scoring writing at scale,
as scoring is labor intensive and requires extensive
training and expertise on the part of raters to obtain reliable
scores.
Recent advances in artificial intelligence
offer a promising way forward for scoring students’ analytic writing
at scale. Natural language processing (NLP) experts have been working
for decades on producing ways to reliably score student writing
holistically.
The state of the art in
automated essay scoring (AES) indicates that AES systems can produce
scores as reliable as human ratings, in the sense that they can be
trained to score similarly to humans on holistic measures of writing,
especially for short, timed student responses.
To move the field forward, however, two needs remain. First, there is
a need for writing assessments that are aligned with authentic
writing tasks.
Second, there is a need to explore whether AES algorithms can
reliably score across multiple dimensions of student
writing. Our assessment includes five dimensions (analysis,
evidence, organization, style/vocabulary and
mechanics/usage/grammar/syntax) and it will be important to see if AES
designs can rate substantive dimensions such as analysis and evidence
as well as they can rate more surface and structural dimensions of
writing.
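As a deliberately simplified illustration of scoring across multiple rubric dimensions, the sketch below extracts a few surface features from an essay and applies a separate linear model per dimension. The feature set, marker words, and weights are hypothetical stand-ins for illustration only, not the project's actual models.

```python
# Hypothetical sketch: per-dimension essay scoring from surface features.
# Feature names, marker words, and weights are illustrative, not the
# project's actual trained models.
import re

EVIDENCE_MARKERS = {"because", "example", "shows", "states", "according"}

def extract_features(essay):
    """Compute simple surface features often used as AES baselines."""
    words = re.findall(r"[a-zA-Z']+", essay.lower())
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "n_words": len(words),
        "n_sentences": len(sentences),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "n_evidence_markers": sum(w in EVIDENCE_MARKERS for w in words),
    }

# One linear model per rubric dimension (weights are made up).
DIMENSION_WEIGHTS = {
    "evidence":     {"n_evidence_markers": 1.0, "n_words": 0.01},
    "organization": {"n_sentences": 0.3, "avg_sentence_len": 0.05},
}

def score(essay):
    """Return one score per rubric dimension."""
    feats = extract_features(essay)
    return {
        dim: sum(w * feats[f] for f, w in weights.items())
        for dim, weights in DIMENSION_WEIGHTS.items()
    }
```

In a real system each dimension's weights would be learned from human-rated essays; the open question noted above is whether such models can rate substantive dimensions (analysis, evidence) as reliably as surface ones.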
This LRDC internal grant is in collaboration with
Rip Correnti and
Lindsay Clare Matsumura.
Intelligent Scaffolding for Peer Reviews of Writing
(project homepage)
(Sep. 2012 - Aug. 2015)
This study will adapt and apply existing Artificial Intelligence
techniques from Natural Language Processing and Machine Learning to
automatically scaffold the peer reviewing and
revising-from-peer-review process. Following an iterative development
plan, increasingly complex and refined versions of the system will be
used, with heavy testing in October and March each year. Researchers will
undertake three different but partially integrated interventions:
automatic detection of effective review comment features, automatic
detection of thesis statements and related comments, and facilitating
author revision by organizing review comments and author response
planning. The pilot experiment will take place in a high school
setting during the last six months of the grant.
Iterative development will be conducted in four classroom environments:
high school science, high school English/social studies, a
university physics lab, and a university psychology class. The
comparison group will include students using the same web-based
peer review system as the treatment group, but with all the
intelligent scaffolding interventions disabled.
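To make the first intervention, detection of effective review comment features, concrete, the sketch below flags two properties commonly associated with helpful peer feedback: whether the comment localizes the problem and whether it suggests a solution. The keyword patterns are invented for illustration; the project applies machine learning rather than hand-written rules.

```python
# Hypothetical rule-based sketch of detecting two review-comment
# features: problem localization and solution suggestion.
# The pattern lists are illustrative, not the project's trained models.
import re

LOCALIZATION_PATTERNS = [
    r"\bpage \d+\b", r"\bparagraph\b", r"\bsentence\b",
    r"\bsection\b", r"\bthesis\b", r"\bconclusion\b",
]
SOLUTION_PATTERNS = [
    r"\byou (could|should|might)\b", r"\btry\b", r"\bconsider\b",
    r"\binstead\b", r"\badd(ing)?\b",
]

def comment_features(comment):
    """Flag whether a review comment is localized and offers a solution."""
    text = comment.lower()
    return {
        "localized": any(re.search(p, text) for p in LOCALIZATION_PATTERNS),
        "has_solution": any(re.search(p, text) for p in SOLUTION_PATTERNS),
    }
```

A scaffolding system could use such flags to prompt reviewers, e.g., asking a student whose comment lacks both features to point to the problem passage and propose a fix.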
This IES grant is in collaboration with
Kevin Ashley,
Amanda Godley and
Christian Schunn.
Improving Undergraduate STEM Education by Integrating Natural Language Processing with Mobile Technologies
(project homepage)
(July 2014 - June 2016)
The degree and quality of interaction between students and instructors
are critical factors for students' engagement, retention, and learning
outcomes across domains. This is especially true for the introductory
STEM courses at the undergraduate level since these courses are
generally taught in lecture halls due to the large number of students
enrolled. Recent developments in educational technology such as MOOCs
and financial troubles in universities make it safe to predict that
the class size problem will only get worse both in traditional
face-to-face and online classes. So, how can we modify the passive
nature of lectures and increase the interaction while actively
involving both students and instructors in the learning process in
these circumstances?
In order to address this problem, we propose integrating Natural
Language Processing (NLP) with a mobile application that prompts
students to reflect and provides immediate and continuous
feedback to instructors about the difficulties that their students
encounter. By enhancing the student reflection and instructor
feedback cycle with technological tools, this project will
incorporate three lines of research: 1) the role of students' reflection
and instructors' feedback on students' retention and learning
outcomes, 2) effectiveness and reliability of NLP to summarize written
responses in a meaningful way, and 3) value and design of mobile
technologies to improve retention and learning in STEM domains. This
LRDC internal grant is in collaboration with Muhsin Menekse and Jingtao Wang.
Computational Models of Essay Rewritings
(Revision Writing Assistant Demo)
(September 2015 - August 2017)
This project evaluates the viability of revision as a pedagogical
technique by determining whether student interactions with an NLP-based
revision assistant enable them to learn to write better -- that is, whether certain forms of the
feedback (in terms of the perceived
purposes and scopes of changes)
encourage students to learn to make more effective revisions. More
specifically, the project works toward three objectives:
(1) Define a schema for characterizing the types of changes that occur
at different levels of the rewriting. For example, the writer might
add one or more sentences to provide evidence to support a thesis; or
the writer might add just one or two words to make a phrase more
precise.
(2) Based on the schema, design a computational model for recognizing
the purpose and scope of each change within a revision. One
application of such a model is a revision assistant that serves as a
sounding board for students as they experiment with different revision
alternatives.
(3) Conduct experiments to study the interactions between students and
the revision writing environment in which variations of idealized
computational models are simulated. The findings of the experiments
pave the way for developing better technologies to support student
learning. This NSF grant is in collaboration with
Rebecca Hwa.
Teaching Writing and Argumentation with AI-Supported Diagramming and Peer Review
(project homepage)
(Sep. 2011 - Aug. 2017)
The PIs are investigating the design of intelligent tutoring systems (ITSs) that are aimed at learning in unstructured domains. Such systems are not able to do as much automatically as ITSs working in traditionally narrow and well-structured domains, but rather they need to share responsibilities for scaffolding learning with a teacher and/or peers. In the work proposed, the three PIs, who share expertise in automated natural language understanding, intelligent tutoring systems, machine learning, argumentation (especially in law), complex problem solving, and engineering education, are integrating intelligent tutoring, data mining, machine learning, and language processing to design a socio-technical system (people and machines working together) that helps undergraduates and law students write better argumentative essays. The work of helping learners derive an argument is shared by the computer and peers, as is the work of helping peer reviewers review the writing of others and the work of learners to turn their argument diagrams into well-written documents. Research questions address the roles computers might take on in promoting writing and the technology that enables that, how to distribute scaffolding between an intelligent machine and human agents, how to promote better writing (especially the relationship between diagramming and writing), and how to promote learning through peer review of the writing of others.
This project is bringing together outstanding researchers from a variety of different disciplines -- artificial intelligence, law education, engineering and science education, and cognitive psychology -- to address an education issue of national concern -- writing, especially writing that makes and substantiates a point -- and to explore ways of extending intelligent tutoring systems beyond fact-based domains. It fulfills all aims of the Cyberlearning program -- to imagine, design, and learn how to best design and use the next generation of learning technologies, to address learning issues of national importance, and to contribute to understanding of how people learn.
This NSF grant is in collaboration with
Kevin Ashley and Christian Schunn.
Using Natural Language Processing to Study the Role of Specificity and
Evidence Type in Text Based Classroom Discussions
(July 2016 - June 2018)
How do students learn through text-based discussions in
English Language Arts (ELA) classrooms? This study seeks to
examine the content of student talk during ELA discussions in
order to better understand how students develop their
understanding of texts and reasoning skills through discussion.
Our proposed study uses Natural Language Processing (NLP) to
analyze two important features of students’ discussions about
texts: specificity and type of evidence.
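As a rough illustration of what surface cues for specificity might look like, the sketch below checks a student turn for numbers, direct quotation, and capitalized mid-sentence words (a crude proper-noun proxy). The cue list is hypothetical; the study's NLP analysis is not limited to such hand-picked rules.

```python
# Hypothetical sketch of surface cues for the specificity of a student
# turn in a text-based discussion. Cues are illustrative only.
import re

def specificity_cues(turn):
    """Count simple surface signals of a specific (vs. generic) turn."""
    tokens = turn.split()
    return {
        "n_numbers": len(re.findall(r"\b\d+\b", turn)),
        "quotes_text": '"' in turn,               # quoting the text directly
        "n_capitalized_mid": sum(                 # rough proper-noun proxy
            1 for t in tokens[1:] if t[:1].isupper()
        ),
    }

def is_specific(turn):
    cues = specificity_cues(turn)
    return (cues["n_numbers"] > 0 or cues["quotes_text"]
            or cues["n_capitalized_mid"] > 0)
```

For example, a turn that cites a page number or quotes the text would be flagged as specific, while "I think it was good." would not.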
This LRDC internal grant is in collaboration with
Amanda Godley.
Entrainment and Task Success in Team Conversations
(project homepage)
(August 2014 - July 2019)
Teams, rather than individuals, are now the usual generators of scientific knowledge. How to optimize team interactions is a passionately pursued topic across several disciplines. This research hypothesizes that linguistic entrainment, or the convergence of linguistic properties of spoken conversation, may serve as a valid and relatively easy-to-collect measure that is predictive of team success. From the perspective of developing interventions for team innovation, organizations could unobtrusively measure team effectiveness using entrainment, and intervene with training to aid teams with low entrainment. Similar interventions would be useful for conversational agents that monitor and facilitate group interactions. The work could also support the development of browsers or data mining applications for corpora such as team meetings or classroom discussions.
To date, most studies of entrainment have focused on conversational dyads rather than the multi-party conversations typical of teams. The technical objective of this research is to develop, validate and evaluate new measures of linguistic entrainment tailored to multi-party conversations. In particular, the first research goal is to develop multi-party entrainment measures that are computable using language technologies, and that are both motivated and validated by the literature on teams. The second goal is to demonstrate the utility of these measures in being associated with team processes and predicting team success. The supporting activities include 1) collection of an experimentally obtained corpus in which teams converse while collaborating on a task, and in which a team process intervention manipulates likely entrainment, 2) development of a set of entrainment measures for multi-party dialogue, 3) use of standard psychological teamwork measures for convergent validity and random conversations for divergent validity, 4) exploration of how the team factors of gender composition and participation equality impact group entrainment, and 5) evaluation of the utility of measuring entrainment for predicting team and dialogue success.
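One simple way to make a multi-party lexical entrainment measure concrete is to average pairwise similarity of speakers' word distributions, as in the hypothetical sketch below; the project's actual measures are more sophisticated and are validated against the teams literature.

```python
# Hypothetical sketch of a multi-party lexical entrainment measure:
# mean pairwise cosine similarity of speakers' word distributions.
from collections import Counter
from itertools import combinations
from math import sqrt

def word_dist(turns):
    """Bag-of-words counts over all of one speaker's turns."""
    counts = Counter()
    for turn in turns:
        counts.update(turn.lower().split())
    return counts

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def team_entrainment(speakers):
    """Average lexical similarity over all speaker pairs in a team."""
    dists = {s: word_dist(turns) for s, turns in speakers.items()}
    pairs = list(combinations(dists, 2))
    return sum(cosine(dists[a], dists[b]) for a, b in pairs) / len(pairs)
```

A team whose members converge on shared vocabulary scores near 1.0, while speakers with disjoint vocabularies score near 0.0; the research question is whether such scores track team processes and success.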
This NSF grant is in collaboration with
Susannah Paletz.
Current and Past Sponsors:
- The National Science Foundation through a grant to
the Center for Interdisciplinary Research on
Constructive Learning Environments (CIRCLE) at the
University of Pittsburgh and Carnegie Mellon
University. (NSF Award Abstract #9720359)
- The National Science Foundation through a grant to Kurt A. VanLehn, Carolyn P. Rose, Diane J. Litman, Michelene Chi, and Pamela W. Jordan
(NSF Award Abstract #0325054) at the University of Pittsburgh
- The Office of Naval Research through a grant to Diane J. Litman
(Adding Spoken Language to a Text-Based Dialogue Tutor)
- The National Science Foundation through a grant to
Dan Roth, Diane Litman, James Pellegrino, Sandra
Rodriguez-Zas, and ChengXiang Zhai
(NSF
Award Abstract #0428472)
at the University of Illinois at Urbana-Champaign
(subcontract to the University of Pittsburgh)
- The Office of Naval Research through a grant to Sandra Katz and Diane Litman
(Cohesion in Tutorial Dialogue and its Impact on Learning)
- The Learning Research and Development Center, University of Pittsburgh, through a grant to Diane Litman, Christian Schunn, and Kevin Ashley (Improving Learning from Peer Review with NLP and ITS Techniques)
- United States Department of Education Institute of Education
Sciences through a grant to S. Katz (PI), P. Jordan (Co-PI),
D. Litman (Co-PI) and M. Ford (Co-PI)
(Cognition
and Student Learning Award)
- The Learning Research and Development Center, University of Pittsburgh, through a grant to Kevin Ashley, Diane Litman, Chris Schunn, and Jingtao Wang (Keeping Instructors Well-Informed in Computer-Supported Peer Review)
- United States Department of Education Institute of Education
Sciences through a grant to Diane Litman (PI), Kevin Ashley, Amanda
Godley and Christian Schunn (Co-PIs) (Educational Technology Award)
- The Learning Research and Development Center, University of Pittsburgh, through a grant to Richard Correnti, Diane Litman, and Lindsay Clare Matsumura (Response-to-Text Prompts to Assess Students' Writing Ability: Using Natural Language Processing for Scoring Writing at Scale)
- The Learning Research and Development Center, University of
Pittsburgh, through a grant to Muhsin Menekse, Diane Litman, and Jingtao Wang
(Improving Undergraduate STEM Education by Integrating Natural Language Processing with Mobile Technologies)
- The Learning Research and Development Center, University of
Pittsburgh, through a grant to Amanda Godley and Diane Litman
(Using natural language processing to study the role of specificity
and evidence type in text based classroom discussions)
- United States Department of Education Institute of Education
Sciences through a grant to Diane Litman (PI), Richard Correnti, and Lindsay Clare Matsumura (Co-PIs) (Educational Technology Award)
- The Learning Research and Development Center, University of
Pittsburgh, through a grant to Erin Walker, Diane Litman, and Tim Nokes-Malach
(Studying Collaborative Dialogue with a Teachable Robot in a Mathematics Domain)
|