
Ensuring automated feedback is pedagogically sound (DAFFI 2020)

We all know that feedback is critical for learning — and we all appreciate how demanding it can be for educators to provide it to many students in a consistently motivating and detailed way. This challenge is studied by educational researchers working on feedback design and feedback literacy.

Designing Automated Feedback for Impact brought some of the leading researchers in this field into a 2-day dialogue with researchers developing automated-feedback tools using Learning Analytics/AI. The original concept was to bring the editors and authors of The Impact of Feedback in Higher Education: Improving Assessment Outcomes for Learners (Eds. Henderson, Ajjawi, Boud & Molloy) to the UTS Connected Intelligence Centre for a 2-day workshop. The pandemic shifted this online, but the goals remained the same, and moving online made it easier to bring in additional participants.

“This book asks how we might conceptualise, design for and evaluate the impact of feedback in higher education. Ultimately, the purpose of feedback is to improve what students can do: therefore, effective feedback must have impact. Students need to be actively engaged in seeking, sense-making and acting upon any information provided to them in order to develop and improve. Feedback can thus be understood as not just the giving of information, but as a complex process integral to teaching and learning in which both teachers and students have an important role to play. The editors challenge us to ask two fundamental questions: when does feedback make a difference, and how can we recognise that impact?”

We call for a deeper dialogue between researchers in the design of assessment and feedback in higher education, and researchers developing automated-feedback tools using Learning Analytics/AI. 

Perhaps we can learn from each other:

  • designers of automated feedback (for students or educators) are challenged on how they could build more robustly on principles of good feedback design;
  • researchers and educators working on feedback design are challenged as to whether automated feedback opens up new possibilities not taken into account in prior research;
  • new concepts may emerge that provide important language to clarify a changing design space for “feedback-rich environments” (as the book terms them);
  • new opportunities may emerge for learning analytics to tackle obstacles to the uptake of better feedback design practices;
  • we may identify topics for future events, and potential next steps.

Over two days, the book’s authors shared examples of this reconceptualisation of designing feedback for impact, and learning analytics researchers showed what is now possible with automated feedback. The extended dialogue was very rich, and we look forward to sharing its fruits as we reflect on how to take this forward.

Event details available here

Program and resources below…

Tues 8 Sept 

Replay whole playlist

10.00 Coffee and croissants (BYOC!) 

10.15 Welcome and opening thoughts [slides]

Simon Buckingham Shum (UTS)

10.30 Identifying the Impact of Feedback Over Time and at Scale: Opportunities for Learning Analytics [slides]

Dragan Gašević (Monash)

This talk explores how learning analytics can help educators design impactful feedback processes and support learners in identifying the impact of feedback information, both across time and at scale. In doing so, it offers current examples of how learning analytics could guide policy and educational designs and be usefully employed to support learners in directing their own learning and study habits. The talk also highlights how learning analytics can help individuals understand and optimise learning, and the environments in which that learning occurs.

  • 30mins: Progress and challenges [Key ref: Book Chapter 12]
  • 30mins: Questions and commentary / General discussion

11.40 Break

11.55 Automated Feedback on Collocated Teamwork & Classroom Proxemics [slides]

Roberto Martinez-Maldonado, Gloria Fernandez Nieto, Jurgen Schulte, Simon Buckingham Shum (UTS)

Our work with colleagues in Health focuses on how sensors and multimodal analytics enable automated feedback to nursing teams on embodied, collocated activity. Work with Science has used movement tracking to prototype automated feedback to educators on their use of teaching spaces.

12.55 Lunch 

2.00 The role of automated feedback in generating feedback-rich environments [slides]

Michael Henderson (Monash) and Rola Ajjawi (Deakin)

The challenges and opportunities of identifying, influencing and assessing feedback impact. 

  • 30mins: Overview – Feedback Research & Practice Challenges
    [Key ref: Book Chapters 2, 14 and 15] and Rola Ajjawi & David Boud (2018) Examining the nature and effects of feedback dialogue, Assessment & Evaluation in Higher Education, 43:7, 1106-1119, DOI: 10.1080/02602938.2018.1434128
  • 30mins: Questions and commentary / General discussion

3.00 Break

3.15 Redesigning feedback involves addressing the feedback literacy of students and staff [slides]

David Boud (Deakin) 

The challenge of building feedback literacy in students and staff

  • 30mins: Overview
    Key refs Book Chapter 4 and:

Carless, D. and Boud, D. (2018). The development of student feedback literacy: enabling uptake of feedback, Assessment and Evaluation in Higher Education, 43, 8, 1315-1325. DOI: 10.1080/02602938.2018.1463354

Molloy, E., Boud, D. and Henderson, M. (2020) Developing a learner-centred framework for feedback literacy, Assessment and Evaluation in Higher Education, 45, 4, 527-540. DOI: 10.1080/02602938.2019.1667955

  • 30mins: Questions and commentary / General discussion

4.15 Reflections on Day 1

4.30 Close

Wed 9 Sept (all times AEST)

10.15 Fresh Croissants & Reflections for those who want to join early 

10.30 Assessment and feedback design at scale [replay][slides]

Jaclyn Broadbent (Deakin)

This is a practice-based discussion of feedback design at scale, in a context involving 1,500 students. The discussion touches on improving students’ understanding of standards, scaffolded assessment, and high-quality audio feedback with feedforward aspects. It will also cover the use of a tool known as Intelligent Agents, which sends automated feedback to students based on their digital activity as a way for staff and students to connect.

  • 30mins: Questions and commentary / General discussion

11.30 Break

11.45 Examining impact and sense-making of personalised feedback messages using OnTask [replay][slides]

Lisa Lim & Abelardo Pardo (UniSA)

An OLT consortium has designed and is now piloting a platform called OnTask, which enables an educator to design personalised feedback messages for hundreds of students at a time, based on their digital activity. Evidence is now emerging regarding the student and educator experience of such tools, and how their effectiveness can be judged.

  • 30mins: Examining impact and sense-making of personalised feedback messages using OnTask 

Lim, L.-A., Gentili, S., Pardo, A., Kovanović, V., Whitelock-Wainwright, A., Gašević, D., & Dawson, S. (2019). What changes, and for whom? A study of the impact of learning analytics-based process feedback in a large course. Learning and Instruction. doi:10.1016/j.learninstruc.2019.04.003 

Lim, L.-A., Dawson, S., Gašević, D., Joksimović, S., Pardo, A., Fudge, A., & Gentili, S. (2020). Students’ perceptions of, and emotional responses to, personalised LA-based feedback: An exploratory study of four courses. Assessment & Evaluation in Higher Education. doi:10.1080/02602938.2020.1782831 

  • 30mins: Questions and commentary / General discussion

12.45 Lunch

2.00 Where have we got to?

Emerging themes, overlapping interests, next steps…

4.00 Close 


Further reading…

Feedback design

Core source: The Impact of Feedback in Higher Education: Improving Assessment Outcomes for Learners 

Automated feedback 

Automated feedback on writing 

Significant work has focused on how we co-design automated feedback on writing with educators and students, leading to the refinement and release of AcaWriter, an open-source web tool. AcaWriter uses natural language processing to identify ‘rhetorical moves’ that are hallmarks of different genres of academic writing, in order to generate formative feedback.

Orientation for staff and students: https://uts.edu.au/acawriter 

Knight, S., Shibani, A., Abel, S., Gibson, A., Ryan, P., Sutton, N., Wight, R., Lucas, C., Sándor, Á., Kitto, K., Liu, M., Mogarkar, R. & Buckingham Shum, S. (2020). AcaWriter: A learning analytics tool for formative feedback on academic writing. Journal of Writing Research, 12, (1), 141-186. (Published online 12 April 2020). DOI: https://doi.org/10.17239/jowr-2020.12.01.06 

Antonette Shibani, Simon Knight and Simon Buckingham Shum (2020). Educator perspectives on learning analytics in classroom practice. The Internet and Higher Education, Volume 46. Available online 20 February 2020. https://doi.org/10.1016/j.iheduc.2020.100730 

Automated feedback on online engagement (any platform)

We co-designed and are now piloting a platform called OnTask, which enables an educator to design personalised feedback messages or portals for hundreds of students at a time. Other institutions are embedding this or similar platforms (such as ECoach and SRES), and evidence is now emerging regarding the student and educator experience of such tools, and how their effectiveness can be judged.

Introductions to OnTask and EClass: see these workshop videos 

Lisa-Angelique Lim, Shane Dawson, Dragan Gašević, Srecko Joksimović, Abelardo Pardo, Anthea Fudge & Sheridan Gentili (2020) Students’ perceptions of, and emotional responses to, personalised learning analytics-based feedback: an exploratory study of four courses, Assessment & Evaluation in Higher Education, DOI: 10.1080/02602938.2020.1782831

Hamideh Iraj, Anthea Fudge, Margaret Faulkner, Abelardo Pardo, and Vitomir Kovanović. 2020. Understanding students’ engagement with personalised feedback messages. In Proceedings of the Tenth International Conference on Learning Analytics & Knowledge (LAK ’20). Association for Computing Machinery, New York, NY, USA, 438–447. DOI: https://doi.org/10.1145/3375462.3375527 

Pardo, A., Bartimote, K., Buckingham Shum, S., Dawson, S., Gao, J., Gašević, D., Leichtweis, S., Liu, D., Martínez-Maldonado, R., Mirriahi, N., Moskal, A. C. M., Schulte, J., Siemens, G. and Vigentini, L. (2018). OnTask: Delivering Data-Informed, Personalized Learning Support Actions. Journal of Learning Analytics, 5(3), 235-249. doi:https://doi.org/10.18608/jla.2018.53.15 

Automated feedback on collocated activity 

We are working with colleagues in Health on how sensors and multimodal analytics enable automated feedback on embodied, collocated activity. Work with Science has used movement tracking to prototype automated feedback to educators on Classroom Proxemics — their use of teaching spaces.

Roberto Martinez-Maldonado, Vanessa Echeverria, Gloria Fernandez Nieto, and Simon Buckingham Shum. 2020. From Data to Insights: A Layered Storytelling Approach for Multimodal Learning Analytics. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–15. DOI: https://doi.org/10.1145/3313831.3376148 

Martinez-Maldonado, R., Mangaroska, K., Schulte, J., Elliott, D., Axisa, C. and Buckingham Shum, S. (2020). Teacher Tracking with Integrity: What Indoor Positioning Can Tell About Instructional Proxemics. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (UBICOMP): to appear (Accepted Jan. 2020).

Critical, human-centred, design of learning analytics

Broader perspectives that could help illuminate how we design Analytics/AI-augmented “feedback-rich environments”.

Buckingham Shum, S.J. and Luckin, R. (2019), Learning analytics and AI: Politics, pedagogy and practices. British Journal of Educational Technology, 50, (6), pp.2785-2793. http://dx.doi.org/10.1111/bjet.12880

Buckingham Shum, S., Ferguson, R., & Martinez-Maldonado, R. (2019). Human-Centred Learning Analytics. Journal of Learning Analytics, 6(2), 1–9. https://doi.org/10.18608/jla.2019.62.1 

Kitto, K., Buckingham Shum, S., & Gibson, A. (2018). Embracing imperfection in learning analytics. Proceedings of the 8th International Conference on Learning Analytics and Knowledge. Association for Computing Machinery, New York, NY, USA, pp.451–460. DOI: https://doi.org/10.1145/3170358.3170413 
