Aligning Tutor Discourse Supporting Rigorous Thinking with Tutee Content Mastery for Predicting Math Achievement
This work investigates how tutoring discourse interacts with students'
proximal knowledge to explain and predict students' learning outcomes. Our work
is conducted in the context of high-dosage human tutoring where 9th-grade
students (N = 1080) attended small-group tutorials and individually practiced
problems on an Intelligent Tutoring System (ITS). We analyzed whether tutors'
talk moves and students' performance on the ITS predicted scores on math
learning assessments. We trained Random Forest Classifiers (RFCs) to
distinguish high and low assessment scores based on tutor talk moves, students'
ITS performance metrics, and their combination. A decision tree was extracted
from each RFC to yield an interpretable model. We found AUCs of 0.63 for talk
moves, 0.66 for ITS, and 0.77 for their combination, suggesting interactivity
between the two feature sources. Specifically, the best decision tree emerged
from combining the tutor talk moves that encouraged rigorous thinking and
students' ITS mastery. In essence, tutor talk that encouraged mathematical
reasoning predicted achievement for students who demonstrated high mastery on
the ITS, whereas tutors' revoicing of students' mathematical ideas and
contributions was predictive for students with low ITS mastery. Implications
for practice are discussed.
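To make the modeling pipeline concrete, here is a minimal sketch (not the authors' code) of training a Random Forest Classifier on combined talk-move and ITS features, scoring it by AUC, and fitting a shallow decision tree as an interpretable surrogate; the file name and feature names are hypothetical placeholders.

```python
# Minimal sketch of the RFC + decision-tree approach described above.
# Data file and feature names are assumed, not from the paper.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("tutoring_features.csv")            # hypothetical file
X = df[["rigor_talk_rate", "revoicing_rate",         # tutor talk-move features (assumed names)
        "its_mastery", "its_problems_solved"]]       # ITS performance features (assumed names)
y = df["high_assessment_score"]                      # binary outcome: high vs. low score

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Train the forest and evaluate discrimination with AUC.
rfc = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, rfc.predict_proba(X_te)[:, 1]))

# Interpretable surrogate: a shallow tree fit to the forest's predictions.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, rfc.predict(X_tr))
print(export_text(tree, feature_names=list(X.columns)))
```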
Automated Gaze-Based Mind Wandering Detection during Computerized Learning in Classrooms
We investigate the use of commercial off-the-shelf (COTS) eye trackers to automatically detect mind wandering, a phenomenon involving a shift in attention from task-related to task-unrelated thoughts, during computerized learning. Study 1 (N = 135 high-school students) tested the feasibility of COTS eye tracking while students learned biology with an intelligent tutoring system called GuruTutor in their classroom. Gaze was successfully recorded in 85% of sessions; within those sessions, both eyes were tracked in 75% of cases and at least one eye in 95%. In Study 2, we used these data to build automated, student-independent detectors of mind wandering, obtaining accuracy (mind wandering F1 = 0.59) substantially better than chance (F1 = 0.24). Study 3 investigated the context-generalizability of mind wandering detectors, finding that models trained on data collected in a controlled laboratory generalized to the classroom more successfully than the reverse. Study 4 compared gaze-based and video-based mind wandering detection, finding that gaze-based detection was superior and that multimodal detection yielded an improvement only in limited circumstances. In Study 5, we tested live mind wandering detection on a new sample of 39 students and found that detection accuracy (mind wandering F1 = 0.40) was considerably above chance (F1 = 0.24), albeit lower than the offline detection accuracy from Study 2 (F1 = 0.59), a difference attributable to the handling of missing data. We discuss our next steps toward developing gaze-based, attention-aware learning technologies to increase engagement and learning by combating mind wandering in classroom contexts.
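As a rough illustration of the student-independent detection setup, the sketch below (an assumption-laden stand-in, not the authors' implementation) aggregates hypothetical gaze features per window, uses grouped cross-validation so no student appears in both training and test folds, and reports F1 on the mind-wandering class.

```python
# Sketch of student-independent mind wandering detection from gaze features.
# File and feature names are assumed for illustration.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_predict
from sklearn.metrics import f1_score

df = pd.read_csv("gaze_windows.csv")                 # hypothetical: one row per gaze window
X = df[["fixation_duration_mean", "fixation_count",
        "saccade_amplitude_mean", "pupil_diameter_mean"]]  # assumed gaze features
y = df["mind_wandering"]                             # self-report label for the window (0/1)
groups = df["student_id"]                            # hold out whole students per fold

clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
# GroupKFold keeps each student's windows entirely in train or test.
pred = cross_val_predict(clf, X, y, groups=groups, cv=GroupKFold(n_splits=5))
print("MW F1:", f1_score(y, pred, pos_label=1))
```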
Can Computers Outperform Humans in Detecting User Zone-Outs? Implications for Intelligent Interfaces
The ability to identify whether a user is “zoning out” (mind wandering) from video has many HCI applications (e.g., distance learning, high-stakes vigilance tasks). However, it remains unknown how well humans can perform this task, how they compare to automatic computerized approaches, and how a fusion of the two might improve accuracy. We analyzed videos of users’ faces and upper bodies recorded 10 s prior to self-reported mind wandering (i.e., the ground truth) while they engaged in a computerized reading task. We found that a state-of-the-art machine learning model had accuracy comparable to the aggregated judgments of nine untrained human observers (area under the receiver operating characteristic curve [AUC] = .598 versus .589). A fusion of the two (AUC = .644) outperformed each alone, presumably because each focused on complementary cues. Furthermore, adding more humans beyond 3–4 observers yielded diminishing returns. We discuss the implications of human–computer fusion as a means to improve accuracy on complex tasks.
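The decision-level fusion reported above can be illustrated with a small sketch: average the human observers' ratings, average that with the model's probability, and compare AUCs. The arrays below are toy stand-ins, not data from the paper.

```python
# Toy sketch of human-computer fusion for mind wandering detection.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([1, 0, 1, 0, 1, 0, 0, 1])              # self-reported labels (toy data)
human = np.array([[.7, .2, .6, .4, .8, .3, .5, .6],      # one row per untrained observer
                  [.6, .3, .5, .3, .7, .4, .4, .7],
                  [.8, .1, .7, .5, .6, .2, .6, .5]]).mean(axis=0)
model = np.array([.65, .25, .55, .45, .70, .35, .40, .60])  # classifier probabilities

# Compare humans alone, the model alone, and a simple average of the two.
for name, score in [("humans", human), ("model", model),
                    ("fusion", (human + model) / 2)]:
    print(name, roc_auc_score(y_true, score))
```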
Affect Detection From Wearables in the “Real” Wild: Fact, Fantasy, or Somewhere In Between?
Predicting Individual Action Switching in Passively Experienced and Continuous Interactive Tasks Using the Fluid Events Model
The Fluid Events Model aims to predict changes in the actions people take on a moment-by-moment basis. In contrast with other research on action selection, this work does not investigate why a particular course of action was selected, but rather the likelihood of discontinuing the current course of action and selecting another in the near future. This is done using both task-based and experience-based factors. Prior work evaluated the model in the context of trial-by-trial, independent, interactive events, such as choosing how to copy a figure of a line drawing. In this paper, we extend the model to more covert event experiences, such as reading narratives, as well as to continuous interactive events, such as playing a video game. To this end, the model was applied to existing data sets of reading times and event segmentation for written and picture stories. It was also applied to existing data sets of performance in a strategy board game, an aerial combat game, and a first-person shooter game in which a participant’s current state depended on prior events. The results revealed that the model predicted behavior changes well, taking into account both the theoretically defined structure of the described events and a person’s prior experience. Thus, theories of event cognition can benefit from efforts that take into account not only how events in the world are structured, but also how people experience those events.
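As a loose illustration of moment-by-moment switch prediction, the sketch below recasts the idea as a logistic regression over hypothetical task-based and experience-based predictors; this is not the Fluid Events Model's published parameterization, and all variable names are assumptions.

```python
# Sketch: predict the probability of switching actions at each time step.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("fluid_events.csv")       # hypothetical: one row per time step
X = df[["event_boundary",                  # task-based: does the event structure shift here?
        "time_on_current_action",          # experience-based: duration of current course of action
        "recent_switch_rate"]]             # experience-based: person's recent switching history
y = df["switched_action"]                  # 1 if the person changed actions at the next step

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("switch-prediction AUC:", roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]))
```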
- …
