
CIC@LAK25!

Team CIC bringing it to Dublin… Ram Ramanathan, Ben Hicks & Yuveena Gopalan (Doctoral Researchers), Lisa Lim, Simon Buckingham Shum, Gloria Fernandez-Nieto & Antonette Shibani (PhD Alumni!), and Kirsty Kitto

The International Conference on Learning Analytics & Knowledge (LAK) is one of the leading forums to explore the entanglement of people and technology, including a growing stream of work in recent years on generative AI in education. Papers are peer-reviewed double-blind, and the open-access proceedings are archived in the ACM Digital Library. We will be chairing workshops on emerging challenges and presenting papers and interactive posters. Connect with us in Dublin and read our work linked below…

Workshops

Workshop: What are the Grand Challenges of Learning Analytics?

In line with the conference theme, this workshop will “expand the horizons” of Learning Analytics (LA) by bringing together researchers and practitioners from a wide variety of backgrounds to create a community-accepted list of grand challenges. It will work towards finding common elements in various existing research programs and mapping out the new research avenues the community deems most interesting. This will help the LA community to point to well-established “blue skies” challenges requiring more work when applying for funding and large grants. It will also support more junior researchers in seeing the bigger picture when plotting out their research trajectory.

Organizers: Kirsty Kitto, University of Technology Sydney, Oleksandra Poquet, Technical University of Munich, Catherine Manly, Fairleigh Dickinson University, Rebecca Ferguson, The Open University, UK

Writing analytics in the age of large language models: Shaping new possibilities for assessing and promoting writing

Generative Artificial Intelligence applications powered by large language models (LLMs) have significantly influenced education and, in particular, reimagined writing technologies. While LLMs offer huge potential to provide automated writing support to learners, it is also important to identify challenges they bring to learning, assessment, and critical interaction with AI. This workshop aims to shape possibilities for writing analytics to promote and assess learning-to-write and writing-to-learn that are appropriate for the generative AI era. In this seventh workshop of the Writing Analytics series, we propose a symposium-style format to identify how the field can unfold in the age of LLMs. In particular, we focus on (case) studies within two topics: (1) using writing analytics to design and evaluate interactive writing support systems and (2) using writing analytics to evaluate human-AI interactions and provide timely insights for students/educators. In addition, this workshop will serve as a community-building event to invigorate the SoLAR writing analytics community.

Organizers: Rianne Conijn, Eindhoven University of Technology, the Netherlands, Antonette Shibani, University of Technology Sydney, Australia, Laura Allen, University of Minnesota, USA, Simon Buckingham Shum, University of Technology Sydney, Australia, Cerstin Mahlow, ZHAW School of Applied Linguistics, Switzerland

New Horizons in Human-Centered Learning Analytics and AI in Education

This workshop will explore new horizons in Human-Centered Learning Analytics and Artificial Intelligence (AI) in education, focusing on research, design, and development practices that enhance educational systems. By aligning closely with pedagogical intentions, preferences, needs, and values, these systems aim to amplify and augment the abilities of all educational stakeholders. By examining alternative frameworks and addressing the broader implications of technology for humanity, this workshop aims to foster responsible, inclusive, value-sensitive, and sustainable data-powered solutions. This way, we strive for enhanced educational experiences while respecting the agency and well-being of educators and learners, as well as our social bonds and the environment.

Organizers: Riordan Alfredo, Monash University, Australia, Simon Buckingham Shum, University of Technology Sydney, Australia, Mutlu Cukurova, University College London, UK, Patricia Santos, Universitat Pompeu Fabra, Barcelona, Spain, Paraskevi Topali, Radboud University, Nijmegen, Netherlands, Olga Viberg, KTH Royal Institute of Technology, Sweden, Yannis Dimitriadis, Universidad de Valladolid, Spain, Charles Lang, Columbia University, USA, Roberto Martinez-Maldonado, Monash University, Australia

From Data to Discovery: LLMs for Qualitative Analysis in Education

This interactive workshop brings together researchers who have explored the use of large language models (LLMs) for the processing and analysis of qualitative data, in both the learning analytics community and other communities related to educational technologies. The workshop will feature presentations of research examples and demonstrations of these applications, providing insights into the methodologies and tools that have proven effective for automating qualitative analysis across research contexts. Additionally, the session will address challenges associated with the application of LLMs in education, such as data privacy, ethical considerations, and ways to build community and shared resources. Attendees will share their experiences and contribute to a collective understanding of best practices in the use of AI for qualitative research. Participants will engage in discussions and hands-on activities to understand the capabilities and limitations of LLMs in handling qualitative data. One output of the workshop will be a plan for developing a systematic review of progress in using LLMs for qualitative data analysis.

Organizers: Amanda Barany, University of Pennsylvania, Ryan S. Baker, University of Pennsylvania, Andrew Katz, Virginia Tech Engineering Education, Jionghao Lin, Carnegie Mellon University & Monash University

Where we will present two papers…

When the Prompt becomes the Codebook: Grounded Prompt Engineering (GROPROE) and its application to Belonging Analytics

(details under Papers below)

From Transcripts to Themes: A Trustworthy Workflow for Qualitative Analysis Using Large Language Models

Aneesha Bakharia, Antonette Shibani, Lisa-Angelique Lim, Trish McCluskey and Simon Buckingham Shum

We present a novel workflow that leverages Large Language Models (LLMs) to advance qualitative analysis within Learning Analytics. Existing approaches fall short in providing theme labels, hierarchical categorization, and supporting evidence, leaving a gap in effective sensemaking of learner-generated data. Our approach uses LLMs for inductive analysis of open text, enabling the extraction and description of themes with supporting quotes and hierarchical categories. This trustworthy workflow allows for researcher review and input at every stage, ensuring traceability and verification, key requirements for qualitative analysis. Applied to a focus group dataset on student perspectives on generative AI in higher education, our method demonstrates that, compared with traditional topic modeling algorithms, LLMs can effectively extract quotes and provide labeled, interpretable themes. Our proposed workflow provides comprehensive insights into learner behaviors and experiences, and offers educators an additional lens for understanding and categorizing student-generated data according to deeper learning constructs, facilitating richer and more actionable insights for Learning Analytics.
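For readers curious what such a workflow can look like in code, here is a minimal sketch, not the authors' implementation: it assumes a generic `llm_complete(prompt)` helper standing in for any chat-completion API, and illustrates the three moves the abstract describes, namely inductive theme extraction grounded in verbatim quotes, a traceability check a researcher can act on, and hierarchical grouping.

```python
import json

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion API call."""
    raise NotImplementedError("Wire this up to an LLM provider.")

def extract_themes(transcript: str) -> list[dict]:
    """Inductive pass: themes described and grounded in verbatim quotes."""
    prompt = (
        "Identify the main themes in this focus group transcript. "
        'Return JSON: [{"theme": str, "description": str, '
        '"quotes": [verbatim supporting quotes]}].\n\n' + transcript
    )
    return json.loads(llm_complete(prompt))

def verify_quotes(themes: list[dict], transcript: str) -> list[dict]:
    """Traceability check: keep only quotes copied verbatim from the
    source, and flag unsupported themes for researcher review."""
    for theme in themes:
        theme["quotes"] = [q for q in theme["quotes"] if q in transcript]
        theme["needs_review"] = not theme["quotes"]
    return themes

def group_into_hierarchy(themes: list[dict]) -> dict:
    """Second pass: nest reviewed themes under higher-level categories."""
    prompt = (
        "Group these themes into higher-level categories. "
        'Return JSON: {"category name": [theme names]}.\n\n'
        + json.dumps([t["theme"] for t in themes])
    )
    return json.loads(llm_complete(prompt))
```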

Papers

What’s the Value of a Doctoral Consortium? Analysing a Decade of LAK DCs as a Community of Practice

Rebecca Ferguson, Yuveena Gopalan and Simon Buckingham Shum

Since 2013, the Learning Analytics and Knowledge (LAK) conference has included a Doctoral Consortium (DC). We frame the DC as a structured entry into the LAK community of practice (CoP). CoPs generate five types of value for their members: immediate, potential, applied, realised and reframing. This study used a survey of the 92 DC students from the first decade, supplemented with scientometric analysis of LAK publications, to address the questions: ‘What value do students gain from attending the LAK doctoral consortium?’ and ‘Do students gain the same value from face-to-face and virtual doctoral consortia?’ Reflexive thematic analysis of responses showed that students gained a wide range of immediate and potential value from the DC, which in many cases also prompted changes in practice, performance improvement or redefinition of success. However, the value reported by respondents who had attended virtually was more limited. This paper’s contributions are (i) the first systematic documentation of student perceptions of LAK DCs, (ii) identification of ways in which doctoral consortia can be developed in the future, and (iii) specific attention to how virtual DCs can offer greater value for both participants and the host community of practice.

Game Theoretic Models of Intangible Learning Data

Ben Hicks and Kirsty Kitto

Learning Analytics is full of situations where features essential to understanding the learning process cannot be measured. The cognitive processes of students, their decisions to cooperate or cheat on an assessment, and their interactions with class environments can all be critical contextual features of an educational system that are impossible to measure. This leaves an empty space where essential data is missing from our analysis. This paper proposes the use of Game Theoretic models as a way to explore that empty space, and potentially even to generate synthetic data for our models. Cooperating or free-riding on the provisioning of feedback in a class activity is used as a case study. We show how our initially simple model can gradually be built up to help understand potential educator responses as new situations arise, using the emergence of GenAI in the classroom as a case in point.
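To make the case study concrete, feedback provisioning can be framed as a linear public goods game: each student pays a private cost to write feedback, while the benefit is pooled across the class. The sketch below (payoff values invented for illustration, not taken from the paper) exhibits the dilemma: free-riding is always individually rational, yet everyone contributing beats no one contributing.

```python
def payoff(contributes: bool, n_contributors: int, n: int,
           b: float = 2.0, c: float = 1.0) -> float:
    """Linear public goods payoff: everyone shares the pooled benefit of
    b per contribution; contributors also pay a private cost c."""
    return b * n_contributors / n - (c if contributes else 0.0)

n = 20  # class size (illustrative)

# Free-riding dominates: whatever k peers already contribute, joining
# them changes your payoff by b/n - c, which is negative when b < c * n.
assert all(payoff(True, k + 1, n) < payoff(False, k, n) for k in range(n))

# ...and yet it is a social dilemma: universal contribution beats
# universal free-riding.
assert payoff(True, n, n) > payoff(False, 0, n)
```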


When the Prompt becomes the Codebook: Grounded Prompt Engineering (GROPROE) and its application to Belonging Analytics

Sriram Ramanathan, Lisa Lim, Nazanin Rezazadeh Mottaghi & Simon Buckingham Shum

With the emergence of generative AI, the field of Learning Analytics (LA) has increasingly embraced the use of Large Language Models (LLMs) to automate qualitative analysis. Deductive analysis requires theoretical bases to inform coding, yet few studies detail the process of translating the literature into a codebook and then into an effective LLM prompt. In this paper, we introduce Grounded Prompt Engineering (GROPROE) as a systematic process to develop a theory-grounded prompt for deductive analysis. We demonstrate our GROPROE process on a dataset of 860 students’ written reflections, coding for affective engagement and sense of belonging. To evaluate the quality of the coding, we demonstrate substantial human/LLM Inter-Annotator Reliability. A subset of the data was submitted 60 times to measure, using the LLM Quotient, the consistency with which each code was applied. We discuss the dynamics of human-AI interaction when following GROPROE, foregrounding how the prompt took over as the iteratively revised codebook, and how the LLM provoked codebook revision. The contributions to the LA field are threefold: (i) GROPROE as a systematic prompt-design process for deductive coding, (ii) a detailed worked example showing its application to Belonging Analytics, and (iii) implications for human-AI interaction in automated deductive analysis.
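The LLM Quotient itself is not defined in the abstract, so as a hedged illustration of the general idea of repeating runs over the same excerpt to quantify coding consistency, here is a minimal majority-agreement sketch with invented data:

```python
from collections import Counter

def run_consistency(codes_across_runs: list[str]) -> float:
    """Share of independent LLM runs that agree with the modal code
    assigned to a single excerpt."""
    modal_count = Counter(codes_across_runs).most_common(1)[0][1]
    return modal_count / len(codes_across_runs)

# 60 hypothetical runs coding the same student reflection
runs = ["sense_of_belonging"] * 55 + ["affective_engagement"] * 5
print(f"{run_consistency(runs):.2f}")  # 0.92: high but imperfect consistency
```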

Posters

The papers detailing these posters are published in the LAK25 Companion Proceedings.

Degrees of Belonging: Gaining insights into university students’ belonging through theory-informed learning analytics

Lisa-Angelique Lim & Simon Buckingham Shum

This poster reports ongoing work to leverage learning analytics for enhancing students’ sense of belonging in higher education. Despite the importance of belonging for student engagement, there is a significant research gap in how to monitor and support students’ belonging throughout their degree programs. Using an innovative, theory-informed learning analytics approach, we conduct a study to gather and analyze both quantitative and qualitative data on students’ belonging at scale via the SenseMaker® tool. This platform allows respondents to share narratives (referred to as ‘stories’) and then code these using signifiers grounded in theories of belonging. Currently conducted at an institution in [country blinded for review], we invited in-degree students across the university to share their stories of belonging (or alienation). The poster presents preliminary findings from the collected data and discusses possible interpretations and future directions, contributing to the emerging subfield of Belonging Analytics.

Towards Analytics for Self-regulated Human-AI Collaboration in Writing

Antonette Shibani, Vishal Raj and Simon Buckingham Shum

Learning analytics offers significant potential to examine learner process data in the age of generative AI. This study examines collaboration dynamics in human-AI co-writing systems using keystroke metrics and clustering. As a first step towards uncovering the nuanced dynamics of human-AI interaction, our findings show three distinct types of behaviour when writers co-write with suggestions from a large language model: Balanced collaborators, AI-reliant writers, and Independent creators. We posit that such analytics on writing behaviours can serve as proxies for giving learners feedback on their use of, and collaboration with, AI in writing, supporting self-regulation.
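A minimal sketch of the analysis pattern the poster describes, clustering per-writer keystroke features into three groups; the feature names and data here are invented for illustration rather than being the poster's actual metrics:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-writer features: mean inter-keystroke pause (s),
# share of final text originating from accepted LLM suggestions, and
# mean typing-burst length (chars). 60 simulated writers.
rng = np.random.default_rng(0)
features = rng.random((60, 3)) * np.array([2.0, 1.0, 40.0])

X = StandardScaler().fit_transform(features)  # comparable feature scales
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Three clusters, which the poster interprets as Balanced collaborators,
# AI-reliant writers, and Independent creators.
print(np.bincount(labels))
```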

Doctoral Consortium: Participatory Causal Modelling of Learning Systems

Ben Hicks

Learning Analytics aims to improve the learning process. This necessitates a causal interpretation of observational data. One way to model causal structure is by using causal Directed Acyclic Graphs. The visual formalism of the model requires little technical knowledge to engage with, providing an opportunity for non-technical experts to remain engaged deep into the crafting of critical statistical assumptions about the learning system, including the importance of latent variables. My research will apply these models to several potential cases, including equitable learning outcomes and student support. The models will be co-constructed between stakeholders with a wide range of expertise, using the visual formalism to help foster a shared understanding. Key decisions I am considering concern the evaluation of how this process influences participants’ thinking about the system as well as how best to engage with the range of stakeholders required to facilitate the modelling.
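To make the formalism concrete, here is a toy causal DAG for a student-support scenario, with illustrative variables chosen for this sketch rather than taken from the thesis; note how a latent variable can sit in the graph even though it is never measured:

```python
import networkx as nx

# Illustrative variables: a latent sense of belonging drives engagement
# and outcomes but is never directly measured.
dag = nx.DiGraph([
    ("prior_attainment", "engagement"),
    ("belonging_latent", "engagement"),   # latent variable, made explicit
    ("belonging_latent", "outcome"),
    ("engagement", "support_uptake"),     # support responds to engagement
    ("support_uptake", "outcome"),
    ("engagement", "outcome"),
])

assert nx.is_directed_acyclic_graph(dag)  # the "acyclic" in DAG

# Stakeholders debate arrows, not equations: each edge is a causal claim
# the group can contest or endorse.
print(sorted(dag.edges))
```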

The Doctoral Consortium is a full-day event for a small group of students selected to present their work for feedback.
