Could we develop trustworthy “Belonging Analytics” from AI analysis of students’ reflective writing?
There’s growing interest in the use of large language models (LLMs) like GPT, Gemini and Claude to assist with qualitative data analysis. In this work, we develop and evaluate a methodology to prompt GPT to perform theory-based deductive coding of student reflective writing.
CIC PhD candidate Sriram Ramanathan is focused on what learning analytics can tell us about university students’ sense of belonging, part of a broader program on “Belonging Analytics”. Supervised by Lisa Lim and Simon Buckingham Shum, he teamed up with Nazanin Rezazadeh Mottaghi (UniSA) to write this full paper, presented at LAK25. Watch Ram’s presentation, and check out the paper for details!
Ramanathan, S., Lim, L.-A., Mottaghi, N.R. and Buckingham Shum, S. When the Prompt Becomes the Codebook: Grounded Prompt Engineering (GROPROE) and its Application to Belonging Analytics. In Proceedings of the 15th Int. Conf. Learning Analytics & Knowledge (Dublin, 2025). ACM. https://doi.org/10.1145/3706468.3706564
Abstract: With the emergence of generative AI, the field of Learning Analytics (LA) has increasingly embraced the use of Large Language Models (LLMs) to automate qualitative analysis. Deductive analysis requires theoretical bases to inform coding. However, few studies detail the process of translating the literature into a codebook and then into an effective LLM prompt. In this paper, we introduce Grounded Prompt Engineering (GROPROE) as a systematic process to develop a theory-grounded prompt for deductive analysis. We demonstrate our GROPROE process on a dataset of 860 students’ written reflections, coding for affective engagement and sense of belonging. To evaluate the quality of the coding, we demonstrate substantial human/LLM Inter-Annotator Reliability. A subset of the data was input 60 times to measure the consistency with which each code was applied, using the LLM Quotient. We discuss the dynamics of human-AI interaction when following GROPROE, foregrounding how the prompt took over as the iteratively revised codebook, and how the LLM provoked codebook revision. The contributions to the LA field are threefold: (i) GROPROE as a systematic prompt-design process for deductive coding, (ii) a detailed worked example showing its application to Belonging Analytics, and (iii) implications for human-AI interaction in automated deductive analysis.
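The paper evaluates coding quality via human/LLM inter-annotator reliability. As a rough illustration of what such an agreement check involves, here is a minimal Cohen's kappa sketch in Python. The code labels and data below are entirely hypothetical, and the paper's exact reliability metric and codebook may differ; see the paper itself for the method actually used.

```python
from collections import Counter

def cohen_kappa(human: list[str], llm: list[str]) -> float:
    """Cohen's kappa: chance-corrected agreement between two annotators
    who labelled the same items."""
    assert len(human) == len(llm) and human
    n = len(human)
    # Observed agreement: fraction of items where the two labels match
    observed = sum(h == l for h, l in zip(human, llm)) / n
    # Expected agreement by chance, from each annotator's label frequencies
    h_counts, l_counts = Counter(human), Counter(llm)
    expected = sum(h_counts[c] * l_counts.get(c, 0) for c in h_counts) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical human vs LLM codes for ten student reflections
human = ["belonging", "affect", "belonging", "none", "affect",
         "belonging", "none", "affect", "belonging", "none"]
llm   = ["belonging", "affect", "belonging", "affect", "affect",
         "belonging", "none", "affect", "none", "none"]

print(round(cohen_kappa(human, llm), 2))  # → 0.7
```

On this toy data the raw agreement is 0.8, but kappa discounts the agreement expected by chance (0.33 here), giving roughly 0.70 — "substantial" on the commonly used Landis–Koch scale.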