
Meaningful, impactful dialogue about the ethics of AI in university

Is there a practical way to engage the diverse voices of a university community in a meaningful, impactful dialogue about the ethical use of AI in teaching and learning? Yes, there is, and here’s how we did it…

From September to December 2021, under pandemic lockdown, we ran five carefully facilitated online workshops with a diverse group of students and staff. Their mission:

  • to learn about the emerging world of educational technologies powered by data, analytics, and now AI
  • to engage in critical thinking and respectful dialogue about the implications, with each other and the ‘expert witnesses’ briefing them
  • to draft a set of principles that should govern the use of such technologies at UTS

How we did this using the principles and methods of Deliberative Democracy, and what we learnt from the participants about their experience of it, is documented in a new paper from CIC’s Simon Buckingham Shum, who led the project. It is co-authored with our University of Sydney colleagues Teresa Swist and Kal Gulson, who observed the whole process and conducted the participant interviews. Dive into the project website to see the principles the “deliberative mini-public” arrived at, and watch them present these to UTS leaders.

As showcased by the Human Technology Institute’s Lighthouse Case Study #3, the success of this consultative project then helped catalyse the design of the UTS AI Operations Policy, governing the ethical deployment of AI technology in university business processes.

Swist, T., Buckingham Shum, S., & Gulson, K. N. (2024). Co-producing AIED Ethics Under Lockdown: An Empirical Study of Deliberative Democracy in Action. International Journal of Artificial Intelligence in Education. Published online 27 February 2024. https://doi.org/10.1007/s40593-023-00380-z

Abstract: It is widely documented that higher education institutional responses to the COVID-19 pandemic accelerated not only the adoption of educational technologies, but also associated socio-technical controversies. Critically, while these cloud-based platforms are capturing huge datasets, and generating new kinds of learning analytics, there are few strongly theorised, empirically validated processes for institutions to consult their communities about the ethics of this data-intensive, increasingly algorithmically-powered infrastructure. Conceptual and empirical contributions to this challenge are made in this paper, as we focus on the under-theorised and under-investigated phase required for ethics implementation, namely, joint agreement on ethical principles. We foreground the potential of ethical co-production through Deliberative Democracy (DD), which emerged in response to the crisis in confidence in how typical democratic systems engage citizens in decision making. This is tested empirically in the context of a university-wide DD consultation, conducted under pandemic lockdown conditions, co-producing a set of ethical principles to govern Analytics/AI-enabled Educational Technology (AAI-EdTech). Evaluation of this process takes the form of interviews conducted with students, educators, and leaders. Findings highlight that this methodology facilitated a unique and structured co-production process, enabling a range of higher education stakeholders to integrate their situated knowledge through dialogue. The DD process and product cultivated commitment and trust among the participants, informing a new university AI governance policy. The concluding discussion reflects on DD as an exemplar of ethical co-production, identifying new research avenues to advance this work. To our knowledge, this is the first application of DD for AI ethics, as is its use as an organisational sensemaking process in education.
