

PhD Seminar - Audrey Reinert

Event Date: April 1, 2019
Hosted By: Dr. Steve Landry
Time: 1:00 - 2:00 PM
Location: GRIS 302
Contact Name: Cheryl Barnhart
Contact Phone: 4-5434
Contact Email: cbarnhar@purdue.edu
Open To: all
Priority: No
School or Program: Industrial Engineering
College Calendar: Show
“Detecting Human Machine Interaction Fingerprints in Continuous Event Data”

ABSTRACT

The human factors community does not have an established method for converting the indirect measures of human performance collected from live systems into direct measures of human performance. Further, the community does not know how continuous system data can be used "adaptively" to predict a user's delay in responding to a system event. These gaps will be addressed using the following approach.

A web application was constructed in which the workload required to accomplish a task could be manipulated. Participants performed the task under one of three workload conditions while continuous state data about the task was recorded. The resulting data was used to train a classifier that attempted to determine whether the user was experiencing the low, medium, or high difficulty condition. The collected data was also used to train an additional model that predicted a participant's delay in responding to an on-screen malfunction. Finally, features of the data were examined to determine whether changes to these features affected the accuracy of the models.
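
The abstract does not name a specific modeling pipeline; a minimal sketch of the difficulty-classification step, assuming windowed system-state features and a scikit-learn random forest (the feature set, window counts, and model choice are illustrative assumptions, not details from the study), might look like:

    # Illustrative sketch only: classify task difficulty (low/medium/high)
    # from features derived from continuous system-state logs.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Stand-in for the recorded state data: one row per time window,
    # with derived features such as event rate or inter-event interval.
    X = rng.normal(size=(300, 4))        # 300 windows x 4 derived features
    y = rng.integers(0, 3, size=300)     # 0 = low, 1 = medium, 2 = high

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print("difficulty accuracy:", accuracy_score(y_test, clf.predict(X_test)))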

The results of this research project indicate that task difficulty, and by proxy subjective workload, can be determined using recorded and derived measures in system state data. The evidence further suggests that an individual's delay in responding to an on-screen event cannot be consistently predicted using a regression model. There is, however, evidence that the delay cluster to which a participant will belong can be predicted at a rate above chance.
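
A sketch of that two-stage delay analysis, assuming delays are first grouped with k-means and cluster membership is then predicted from state features and compared against a majority-class chance baseline (the cluster count, features, and classifier are assumptions):

    # Illustrative sketch only: cluster response delays, then predict
    # cluster membership and compare against a chance baseline.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    delays = rng.exponential(scale=2.0, size=(300, 1))  # seconds to respond
    X = rng.normal(size=(300, 4))                       # state-derived features

    # Stage 1: group delays into clusters (e.g., fast / medium / slow).
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(delays)

    # Stage 2: predict the delay cluster; "chance" is the majority-class rate.
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5)
    chance = np.bincount(labels).max() / len(labels)
    print(f"mean CV accuracy {scores.mean():.2f} vs chance {chance:.2f}")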

There is weak evidence to suggest that including measures of experience alters the confidence interval of a prediction or improves the accuracy of the predictions. Models built using only experience as an input perform no better than a model guessing at the correct delay group or difficulty level. This result was not unexpected, as there is no reason why experience would be predictive of difficulty. However, including experience in a model can make a prediction statistically significant even while reducing its accuracy.
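
That comparison can be made concrete with a guessing baseline; in the sketch below, a classifier fed only a hypothetical experience measure is scored against a majority-class dummy model (the experience measure and labels are assumptions for illustration):

    # Illustrative sketch only: an "experience-only" model versus guessing.
    import numpy as np
    from sklearn.dummy import DummyClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    experience = rng.uniform(0, 100, size=(300, 1))  # e.g., hours of prior use
    difficulty = rng.integers(0, 3, size=300)        # condition labels

    models = [("experience only", LogisticRegression(max_iter=1000)),
              ("guessing baseline", DummyClassifier(strategy="most_frequent"))]
    for name, model in models:
        acc = cross_val_score(model, experience, difficulty, cv=5).mean()
        print(f"{name}: mean CV accuracy {acc:.2f}")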

Evidence suggests that altering the rate at which the data is sampled does impact a model's predictive accuracy. Using data collected at a slower sampling rate results in more accurate predictions of difficulty and delay when using neural networks. Multinomial and random forest models gain or lose predictive accuracy depending on how the model is tuned. There were too few statistically significant models predicting a participant's delay cluster for a clear trend to be derived.
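
One way to run such a sampling-rate comparison is to resample the same continuous log at several rates and retrain the same model on each, as in this sketch (the resampling rule, feature, and network configuration are assumptions, not the study's settings):

    # Illustrative sketch only: retrain one model on the same log
    # resampled at progressively slower rates.
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(3)
    t = pd.date_range("2019-04-01 13:00", periods=3000, freq="100ms")
    log = pd.DataFrame({"metric": rng.normal(size=3000),
                        "difficulty": rng.integers(0, 3, size=3000)}, index=t)

    for rate in ["100ms", "500ms", "1s"]:  # fast -> slow sampling
        resampled = log.resample(rate).agg({"metric": "mean",
                                            "difficulty": "last"}).dropna()
        X = resampled[["metric"]].to_numpy()
        y = resampled["difficulty"].astype(int).to_numpy()
        acc = cross_val_score(MLPClassifier(max_iter=500, random_state=0),
                              X, y, cv=3).mean()
        print(f"sampled every {rate}: mean CV accuracy {acc:.2f}")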