AI for Assessment
Event Date: January 29, 2018
Author: Dr. Kerrie Douglas
Used as an assessment tool, artificial intelligence (AI) can learn how an instructor grades a problem. The instructor grades a sample set of students' responses, and the system creates a computer model incorporating rules it has inferred from the instructor's grading decisions.
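As a minimal sketch of this idea (not a description of any actual grading system; the sample responses and scores below are invented), one can store instructor-graded examples and score a new response like its most similar graded example, using only the Python standard library:

```python
from collections import Counter
import math

def features(text):
    """Bag-of-words counts for a free-text response."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train(graded_samples):
    """'Learn' the instructor's grading by storing graded examples."""
    return [(features(text), score) for text, score in graded_samples]

def predict(model, text):
    """Score a new response like its nearest graded example."""
    f = features(text)
    return max(model, key=lambda ex: cosine(ex[0], f))[1]

# Hypothetical instructor-graded sample responses (scores out of 10).
graded = [
    ("the derivative measures instantaneous rate of change", 10),
    ("derivative is the slope of the tangent line", 9),
    ("it is just a number you compute", 3),
]
model = train(graded)
print(predict(model, "slope of the tangent line at a point"))  # → 9
```

A production system would use far richer features and a trained statistical model, but the shape is the same: inferred rules stand in for the instructor on unseen work.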
The model is then used to grade other students' work. Increasingly, AI is being used to assess students' performance in areas ranging from mathematical problem-solving to programming in computer science and even essays. "Ethical considerations are, however, essential before widespread adoption of the recent advances in AI assessment," says Kerrie Anna Douglas, assistant professor of engineering education at Purdue University. She explains, "Essentially, AI covers a broad collection of techniques devoted to making machines capable of making decisions in novel situations. The recent advances in AI assessment revolve around very specific tasks; however, the consequences of those advances for individual learners are unknown."
Commenting on how such systems work, Douglas says, "AI systems are trained on data that can encode societal biases or be overly influenced by dominant groups. As a result, the trained system would then replicate those biases when it makes decisions. It is one thing if an AI-based game does not quite assess every learner's skills accurately, and it is something altogether different if a person's scores in a college class are in part based on computational models that could in some way be biased. In order for AI assessment scores to be ethically used to make decisions that are of personal consequence, the ethics guiding traditional modes of educational assessment need to be updated for AI environments."
Speaking about the use of AI across learner evaluation, ranging from assessment embedded in the learning environment all the way to college admission testing with automated scoring, Douglas adds, "Robots designed to aid in teaching specific skills for classroom use, such as Ozobot (ozobot.com), and even fun games that have an educational component, such as Minecraft (where the player can dig and create 3D blocks within a world of varying terrains and habitats to explore), all, at a fundamental level, take information from learners, make an assessment, and then respond. Sketchtivity (sketchtivity.com) teaches learners how to draw through assessment of, and feedback on, drawings made with a stylus (a pointed tool for writing, drawing, or engraving). In large online classes, AI-based models are working towards identifying learners who might be struggling with particular content or are at risk of underperforming."
Douglas notes that research is currently being conducted on AI technologies that take everything we know about a learner in a large online learning environment, such as the questions posed in group discussions, performance on homework, and time spent on content, and respond instructionally. "The AI-based solution would determine holistically what a learner needs and provide that content in the moment of need," she says.
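One way to picture that kind of holistic, responsive decision (purely illustrative; the signal names, thresholds, and responses below are invented, and a real system would learn them from data rather than hard-code rules) is a function that combines several learner signals into one "moment of need" action:

```python
from dataclasses import dataclass

@dataclass
class LearnerState:
    """Illustrative signals a large online course might log."""
    homework_avg: float      # fraction of homework points earned (0-1)
    forum_questions: int     # questions posed in group discussions
    minutes_on_content: int  # time spent on the current unit

def recommend(state):
    """Hypothetical responsive action: route the learner to
    remediation, engagement prompts, or enrichment content."""
    struggling = state.homework_avg < 0.6
    disengaged = state.minutes_on_content < 30 and state.forum_questions == 0
    if struggling and disengaged:
        return "flag for instructor outreach"
    if struggling:
        return "serve remedial material on current unit"
    if disengaged:
        return "prompt with a discussion question"
    return "offer enrichment content"

print(recommend(LearnerState(0.45, 0, 20)))  # struggling and disengaged
print(recommend(LearnerState(0.85, 2, 90)))  # on track
```

The ethical questions Douglas raises apply directly here: each threshold is a judgment about learners, and a learned version of this logic could inherit biases from the data it was trained on.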
But the possibilities for educational innovation based on AI assessment have only just begun, observes Douglas, adding, "Imagine an AI solution that could assess a learner's current content knowledge, skills, and career interests, combine that with forecasts of labour-market needs, and then provide a customized learning plan leading to the opportunity to efficiently acquire marketable employment skills. AI solutions could transform higher education as we know it."
However, Douglas raises concern when she says, “The question remains, apart from the statistical models of validation in AI, how do we know how to responsibly use those findings? What evidence is there that learners in a minority are being assessed fairly? If a learner’s response is highly creative, will the machine actually recognize it as such or score it poorly because it didn’t look like the usual ‘good’ work? Statistics can give you a result, but they cannot tell you how to make meaning of it. While the field of educational assessment has for years been debating and framing the ethical use of assessments, these frameworks and heuristics (methods) have not routinely been applied to AI.”
Douglas therefore calls for articulating such frameworks so that when a machine makes an assessment of what a learner knows and can do, the result is ethical decisions with justifiable consequences.