
Can imperfect AI improve student learning?

The narrative about Artificial Intelligence (AI) is often that it will improve society by helping to remove human error from complex systems beyond our ken. There is no doubt that in some cases this is true: self-driving cars promise to reduce traffic accidents; Google Translate has made it surprisingly easy to read texts in other languages; and AI-driven image recognition can already rival, and sometimes surpass, human specialists in diagnosing certain cancers. All of these technologies rest on the assumption that increasing the accuracy of the underlying algorithms will produce better outcomes, a largely uncontroversial assumption in the sciences that draw on machine learning techniques.

However, in the educational domain this largely unchallenged assumption may not be so straightforward, for a number of reasons. Firstly, the agency that we are trying to foster in our students is unlikely to be helped by computational systems that classify their behaviour perfectly. Consider how much we have all come to rely upon spell checkers, often to the detriment of our innate spelling ability. Do we want students learning to rely upon university computational systems that make their lives inherently easier? What will happen to them when they hit the ‘real world’ and lose access to these assistants? Secondly, universities increasingly seek to develop students who can solve problems by thinking outside the box, critically analysing situations and providing new ways of thinking about established practices: scenarios where competing stakeholders with wildly varying interests must be satisfied, for example, or where an employee is expected to call appropriate attention to company practices that lead to ongoing inefficiencies or poor outcomes. These types of problems tend not to have single ‘correct’ solutions, and often need to be navigated in a highly creative and flexible manner. Finally, we live in a world where personal data is increasingly commodified to construct profiles of people that can be used to sell them products, or even to manipulate their attitudes and preferences towards decisions that they might not otherwise have made (as appears to have been done by Cambridge Analytica using data harvested from over 50 million Facebook users without their consent). Universities can play a profound role in helping our students learn to challenge computational tools that classify their behaviour, but this will only occur if we teach them that the algorithms behind these tools can sometimes be wrong.

A paper recently presented at LAK 2018, written by three CIC researchers, explores these issues and calls for the field to embrace computational imperfection for the unique teachable moments it provides. It argues that computational tools which make mistakes can be more effective at teaching students to critically examine algorithmic output than tools that are always right. Rather than misguiding students, the cognitive dissonance these mistakes provoke can help students learn how to challenge the often black-box nature of machine learning.
