
First International Symposium on Educating for Collective Intelligence

Date: Friday, 6th December 2024
Time: 07:00 AM
Location: Online

Collective Intelligence

Intersecting, urgent challenges are precipitating large-scale crises: ecological, democratic, military, health and educational, to name just a few. This complexity is overwhelming our sensemaking capacity, provoking deep reflection across government, business, civic society and the academy. Fields as diverse and intersecting as organisation science, cognitive science, computer science and neuroscience are converging on the importance of Collective Intelligence (CI), at scales ranging from small teams, to companies, to global networks (Malone & Bernstein, 2015).

In the editorial to the inaugural issue of the Collective Intelligence journal, Flack et al. (2022) introduce CI as follows:

“We can find collective intelligence in any system in which entities collectively, but not necessarily cooperatively, act in ways that seem intelligent. Often—but not always—the group’s intelligence is greater than the intelligence of individual entities in the collective.” They further suggest that we have recently witnessed “two epic collective intelligence failures: the responses to COVID and climate change”. 

Hybrid Collective Intelligence

In our digitally enabled, connected world, it is natural that CI constitutes more than the collective ability of human minds. Computational platforms make new forms of discourse and coordination possible (De Liddo et al., 2012; Iandoli et al., 2016; Suran et al., 2020; van Gelder et al., 2020; Gupta et al., 2023). Artificial Intelligence (AI) adds machine actors to the network, with human-agent teaming research clarifying the conditions under which professionals come to trust AI agents as members of the team (O’Neill et al., 2022; Seeber et al., 2020). The explosive arrival of large language models combined with conversational user interfaces is the most recent technical advance, opening new possibilities for human-computer creativity and sensemaking (Rick et al., 2023). Such developments in our sociotechnical knowledge infrastructure lead Gupta et al. (2023) to ask:

“How do we know that such a sociotechnical system as a whole, consisting of a complex web of hundreds of human–machine interactions, is exhibiting CI?” and argue for sociotechnical architectures capable of sustaining “collective memory, attention, and reasoning”.

The challenge for Higher Education

It seems uncontroversial to argue that citizens should be equipped for this new world, but to date, it is unclear what it means to educate for CI (Hogan et al., 2023). Schuler (2014) argues for an explicitly moral dimension to CI, and proposes a set of capacities underpinning “civic intelligence”. These strands of work are concerned not only with educating people about CI, but also with cultivating their ability to engage in CI practices and with increasing CI capacity.

We are therefore convening this symposium to develop the conversation, to forge foundational concepts, and to build the network needed to advance this agenda across diverse boundaries. We invite submissions addressing questions such as the following, grouped under four (non-exhaustive), deeply interwoven perspectives:

Theoretical

  • How does educating for CI fit into other emerging conceptions of the future of education?
  • How should we conceive collective learning in relation to CI?
  • Are there key dimensions, and tradeoffs, when educating for CI?
  • How does the way we conceptualise technologies, intelligence and collectivity align with how we conceptualise CI?

Pedagogical

  • What knowledge, skills and dispositions do graduates need for CI?
  • What does a developmental trajectory in CI learning look like?
  • What pedagogies cultivate CI?
  • How do we assess CI?
  • How can CI counterbalance the individual focus of education, particularly assessment?
  • How can conversational AI support the development of critical thinking and collaboration skills in CI?
  • In what ways can conversational AI act as a facilitator in educational settings to enhance CI and participatory learning?

Technological

  • What is the design space of educational technologies for CI? (and how does this differ from CI technology for professionals?)
  • What is the design rationale for a given platform to support educating for CI?
  • How are such platforms used by students?
  • What roles can AI play in enhancing CI education?

Ethical

  • What are the ethical implications of using AI-enabled CI, and how do students learn ethical practices?

Program

Welcome and Overview

Simon Buckingham Shum (University of Technology Sydney)

Simon Buckingham Shum is Professor of Learning Informatics at the University of Technology Sydney, which he joined in 2014 as inaugural Director of the Connected Intelligence Centre. CIC is a transdisciplinary innovation centre inventing, piloting, evaluating and scaling data-driven personalised feedback to students, using human-centred design principles. Prior to this he was a founding member of the UK Open University’s Knowledge Media Institute (1995-2014). Simon’s career-long fascination with software’s ability to make thinking visible has seen him active academically in fields including Hypertext, Design Rationale, Open Scholarly Publishing, Computational Argumentation, Computer-Supported Cooperative Work, Educational Technology and Learning Analytics/AIED. Simon’s background in Psychology, Ergonomics and Human-Computer Interaction draws his attention to the myriad human factors that determine the effective adoption of new tools for thought, and the kinds of futures they might create at scale.

Keynote: Education for Collective Intelligence

Mike Hogan (University of Galway), Rupert Wegerif & Imogen Casebourne (University of Cambridge)

Hogan, M. J., Barton, A., Twiner, A., James, C., Ahmed, F., Casebourne, I., Steed, I., Hamilton, P., Shi, S., Zhao, Y., Harney, O. M., & Wegerif, R. (2023). Education for Collective Intelligence. Irish Educational Studies, 1-30. 

Collective Intelligence (CI) is important for groups that seek to address shared problems. CI in human groups can be mediated by educational technologies. The current paper presents a framework to support design thinking in relation to CI educational technologies. Our framework is grounded in an organismic-contextualist developmental perspective that orients enquiry to the design of increasingly complex and integrated CI systems that support coordinated group problem solving behaviour. We focus on pedagogies and infrastructure and we argue that project-based learning provides a sound basis for CI education, allowing for different forms of CI behaviour to be integrated, including swarm behaviour, stigmergy, and collaborative behaviour. We highlight CI technologies already being used in educational environments while also pointing to opportunities and needs for further creative designs to support the development of CI capabilities across the lifespan. We argue that CI education grounded in dialogue and the application of CI methods across a range of project-based learning challenges can provide a common bridge for diverse transitions into public and private sector jobs and a shared learning experience that supports cooperative public-private partnerships, which can further reinforce advanced human capabilities in system design.

Michael Hogan is a Senior Lecturer in Psychology at the National University of Ireland, Galway. He has worked on seven EU projects that have applied collective intelligence methods to address a range of societal challenges, including marine ecosystem sustainability, open data and government transparency, malaria control and elimination, personalised nutrition service design, advancing literacy skills in children, improving public services with multidimensional statistical data, and healthy food ecosystem design. Michael was the first University of Cambridge DEFI STRIx Researcher in 2023, where he worked to further his research on collective intelligence and focused in particular on the challenge of Education for Collective Intelligence. Over the past decade, Michael’s research has increasingly focused on basic and applied collective intelligence research and the creation of a new approach to systems science education, building on the work of John Warfield, past president of the International Society for the Systems Sciences. Michael continues to provide collective intelligence facilitation support to groups working to address complex issues across a variety of organizational and societal contexts. In Galway, he is currently leader of the Collective Intelligence Network Support Unit (CINSU) and a contributor to the Health and Well-being priority theme at the Whitaker Institute for Innovation and Societal Change.

Rupert Wegerif is a professor in the Faculty of Education at the University of Cambridge and academic director of the Digital Education Futures Initiative at Hughes Hall, Cambridge. He is the author of several influential books and articles in the area of educational theory, educational psychology and education with technology. His most recent book, written with Louis Major, ‘The Theory of Educational Technology: A Dialogic Foundation for Design’ (Routledge, 2024), focuses on the theory and practice of education with technology in the digital age, especially technology-supported education for dialogue. His forthcoming book ‘Rethinking Educational Theory: Education as Expanding Dialogue’ (Elgar, 2025) outlines a new approach to education for the AI-enhanced Internet Age: education as expanding dialogic space.

Imogen Casebourne is the research lead at the Innovation Lab at DEFI (Digital Education Futures Initiative). She has a DPhil in Education (Oxon) and an MSc in Artificial Intelligence, for which she developed an AI program that wrote short stories (Sharples & Pérez y Pérez, 2023, pp. 8-10). She recently co-edited a book for Springer on AI and Education and co-authored a paper on AI and Collective Intelligence (forthcoming). She is currently a co-convener of the BERA special interest group dedicated to AI and human intelligence.

The Personhood Conferral Problem: AI Risk in the Domain of Collective Intelligence

Zak Stein (Consilience Project / Civilization Research Institute)

This position paper outlines: 1) a philosophical argument about the problem of conferring social statuses to Artificial Intelligences (“the personhood conferral problem”) and 2) the risks to human psychology, culture, and collective intelligence that follow from mishandling the personhood conferral problem in the design and application of AI systems. The conclusion is that steps must be taken to protect the emerging personhood and communicative capacities of younger generations of human beings, in order to enable their participation in the collective intelligence processes requisite for navigating what is fast becoming a perilous future. This will require clarifying, instituting, and adhering to strict design protocols, as well as age limits and other basic regulations on certain classes of technology.

Zachary Stein was trained at the interface of philosophy, psychology, and education, and now works in fields related to the mitigation of global catastrophic risk. A widely sought-after and award-winning speaker, Zak is a leading authority on the future of education and contemporary issues in human development. Dr. Stein is the author of several books and many peer-reviewed papers, which, along with his speaking invitations, can be found on his website.

Civic Intelligence is the Collective Intelligence We Need. But how do we get there?

Douglas Schuler (The Public Sphere Project and The Evergreen State College)

This paper is intended to help further the discussion of how to think about and rethink the concept of collective intelligence, how it can be used to support education for positive social change, and how we help organize ourselves and others in support of this work. In particular, it focuses on civic intelligence, a specialization of collective intelligence that has many important implications for this work. Ideally, it would play some role in advancing the critical needs and midcourse corrections that are the aims of this symposium. The paper presents several provocative assertions in relation to educating for collective intelligence, specifically as it potentially addresses significant real-world problems. Each shows a critical facet of the educating for collective intelligence endeavor. Some of them may seem too speculative or philosophical (or at least non-scientific), but conversations around these issues are critical and ongoing constituents of this endeavor, assuming that the goals of the symposium are serious (which I assume they are), and they help further the workshop’s goal of forging foundational concepts.

Doug Schuler has focused on issues related to society and technology for nearly 40 years. He is a faculty emeritus at The Evergreen State College, a non-traditional liberal arts college, and is the president of the Public Sphere Project, an educational non-profit corporation. In addition to teaching software development, Doug conducted the Civic Intelligence Research and Action Laboratory (CIRAL), where student teams developed their own research and action projects. He has written many articles and books, including Participatory Design, New Community Networks, and Liberating Voices. He is a former chair of Computer Professionals for Social Responsibility and of SIGCAS, the ACM Special Interest Group on Computers and Society. Doug also co-founded the Seattle Community Network, a free, public-access computer network, over 30 years ago. Doug has master’s degrees in computer science and software engineering. Although retired, Doug is still writing articles (most recently Tools of Our Tools? Exploring the Cybercene Conjecture, which looks at computing’s effects on the earth) and developing software for the use of patterns and pattern languages to support collaborative work on wicked problems.

From Classroom to Global Stage: Harnessing Deliberation on Wicked Problems in Education

Aldo de Moor (CommunitySense)

This paper explores the potential of harnessing the “global classroom” of students worldwide to address wicked problems through deliberation at scale. It proposes leveraging the common ground of deliberation between education and professional collaboration to unlock students’ potential for collective action and societal impact. The paper highlights the need for carefully designed socio-technical infrastructures that balance powerful deliberation technologies with educational requirements and real-world contexts. To guide the design of such infrastructures, an existing evaluation framework for online deliberation tools is repurposed, adapting it to analyze collective intelligence infrastructures in educational-collective action settings. The framework’s four layers – usability, discussion quality, debate quality, and societal context – provide a comprehensive lens for identifying socio-technical opportunities and gaps. A hypothetical case study, “Regreening Your City,” illustrates how the framework could be applied to design global classroom initiatives addressing climate action. By systematically evaluating and designing collective intelligence infrastructures, the paper argues that we can better harness the potential of (AI-driven) online deliberation tools to augment human capabilities in tackling wicked problems. This approach aims to nurture complex collaborative problem-solving skills while contributing to real-world solutions, bridging individual learning with collective impact.

Aldo de Moor is the owner of CommunitySense, a research consultancy company founded in 2007. He earned his PhD in Information Management from Tilburg University, Netherlands, in 1999. From 1999 to 2004, Dr. de Moor served as an assistant professor at the Department of Information Systems and Management at Tilburg University. He then worked as a senior researcher at the Semantics Technology and Applications Research Laboratory (STARLab) of the Vrije Universiteit Brussel, Belgium, from 2005 to 2006. Aldo’s work bridges research and practice in Community Informatics, focusing on participatory collaboration ecosystems mapping and sensemaking, collaborative communities, and socio-technical systems design. His further research interests include communicative workflow modeling, argumentation support technologies, and pattern languages. Through CommunitySense, he translates state-of-the-art insights into working communities for clients, aiming to advance the rapidly evolving field of Community Informatics. 

The collective ochlotecture of large language models

Jon Dron (Athabasca University)

“Collective intelligence” (CI) is used across many disciplines to describe diverse phenomena. In my own work I have, for example, used the term to denote emergent stigmergic, flocking, and similar behaviours that allow crowds to be treated as entities in their own right; the ways in which group processes, structures, human propensities, and individual actions can lead to more or less successful achievement of intentional learning goals; and how we become parts of one another’s cognition through the technologies in which we participate and that participate in us. These examples only touch on CI’s broad usage to denote everything from individual brain organization, to the organization of groups and networks, to 4E cognition, to the concept of a global brain. Such varied uses describe different kinds of cognition, and so call for different kinds of responses from our educational systems. This paper introduces the term “ochlotecture”, from the Classical Greek ὄχλος (ochlos), meaning “multitude”, and τέκτων (tektōn), meaning “builder”, to describe the structures and processes that connect groupings of people. I use this concept to describe the ochlotecture of various kinds of CI, before discussing the distinctive ochlotecture of generative AIs and their potential impact on human learning.

Jon Dron is a Professor and Associate Dean, Learning & Assessment in the Faculty of Science and Technology at Athabasca University, and a member of the Technology Enhanced Knowledge Research Institute. Jon has received both national and local awards for his teaching, is author of various award-winning research papers and is a regular keynote speaker at international conferences in fields as diverse as education, learning technologies, information science and programming. Jon has a first degree in philosophy, a masters degree in information systems, a post-graduate certificate in higher education and a PhD in learning technologies. Apart from his work in education, he has had careers in technology management, programming, and marketing, as well as over ten years as a professional singer. He is the author of Teaching Crowds: Learning and Social Media (2014, with Terry Anderson), and Control & Constraint in E-Learning: Choosing When to Choose (2007). His latest book, published in 2023, is How Education Works: Teaching, Technology, & Technique. He lives in beautiful Vancouver where, when he is not spending time with his wife, children and grandchildren, he sails, cycles, writes, sings, and plays many musical instruments, mostly quite badly. 

Collective intelligence on the job: Insights from the workplace learning literature

Margaret Bearman (Deakin University, AUS)

A workplace can be understood as a site of focussed collective intelligence. After all, in most workplaces, collaborative practices are necessary to ‘get the job’ done, with a collective understanding as to what the ‘job’ is. While there is little research into educating for collective intelligence, there is a longstanding and extensive literature on workplace learning. Lave and Wenger’s (1991) often misunderstood ‘communities of practice’ is probably the best-known theoretical framing, but there are many other significant frames, including ‘relational interdependence’ (Billett, 2006), ‘being stirred in to practices’ (Kemmis et al., 2017), and even the more psychologically oriented ‘teamwork’ literature (Salas et al., 2024). Insights from the workplace learning literature suggest conceptualisations, challenges and contradictions when considering how to educate for collective intelligence.

Billett, S. (2006). Relational Interdependence Between Social and Individual Agency in Work and Working Life. Mind, Culture, and Activity, 13(1), 53–69. https://doi.org/10.1207/s15327884mca1301_5

Kemmis, S., Edwards-Groves, C., Lloyd, A., Grootenboer, P., Hardy, I. & Wilkinson, J. (2017). Learning as being ‘stirred in’ to practices. In P. Grootenboer, C. Edwards-Groves & S. Choy (Eds.), Practice Theory Perspectives on Pedagogy and Education: Praxis, diversity and contestation. Singapore: Springer.

Lave, J. & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge University Press.

Salas, E., Linhardt, R., & Fernández Castillo, G. (2024). The Science (and Practice) of Teamwork: A Commentary on Forty Years of Progress… Small Group Research, 10464964241274119.

Margaret Bearman is a Research Professor within the Centre for Research in Assessment and Digital Learning (CRADLE), Deakin University. She is known for her conceptual and empirical studies of higher and health professional education. Current programs of research include learning to work in a world with artificial intelligence (AI) and feedback cultures in clinical education.

Discursive principles for human-technology collectives

Tim Fawns (Monash University)

To be human is to think collectively with other humans, agents, technologies, objects, and environments. Understanding AI technologies as things we think with, rather than things that think for us, or that dehumanise us, or that supercharge us, involves crossing three main conceptual divides: between individual and collective understandings of thinking and learning; between humans and technology; and between AI and other technologies. I propose that crossing these divides involves a discursive shift away from framing humans and technologies as separate entities, toward a holistic framing of collective, dynamic, entangled social, material, and digital activity. To support this shift, I offer a set of principles of collective thinking and learning, informed by literature from distributed cognition and sociomaterialism: distribution; complementarity; co-constitutive multiplicity; integration; and attuned diversity. I then discuss some implications and questions for education. I argue that viewing all technology as integral to collective thinking and learning helps us to be more precise in articulating the goals, values, risks and possibilities of integrating technology into education.

Tim Fawns is Associate Professor (Education Focused) at the Monash Education Academy, Monash University, Australia. Tim’s research interests are at the intersection of digital, professional (particularly medical and healthcare) and higher education, with a focus on relations between technology and education. Tim’s research covers a broad range of practices (including curriculum design, assessment, teaching practice, evaluation and more), emphasising complexity within online, blended and hybrid education. He has recently contributed to TEQSA’s Assessment reform for the age of artificial intelligence guidance document and played a leading role in a range of sector-wide events to help institutions respond to the opportunities and challenges of artificial intelligence. 

Educating for Collective Intelligence: An SCI Perspective

Mark Klein, Edward Seabright, Jose Segovia-Martin, Emile Servan-Schreiber, Abdoul Kafid Toko 

(School for Collective Intelligence (SCI), University Mohammed VI Polytechnic, Morocco)

This paper used a CI process to collect the thoughts of several members of the UM6P School for Collective Intelligence (SCI) on the challenge of educating for collective intelligence (CI). We discuss what makes teaching CI important, what makes it unique, as well as how to teach CI well. CI is distinguished by being a young and extremely multi-disciplinary field, which suggests we should follow the lead of similar disciplines, such as complex systems science, in educating people about it. This has implications, we believe, both for the content of what we teach (e.g. the importance of starting with a core set of shared concepts and vocabulary) and for how we teach it (e.g. using CI techniques to teach CI, using a case study approach, learning by applying CI principles to real-world problems, and so on).

Mark Klein — I am a Principal Research Scientist at the Massachusetts Institute of Technology. My research mission is to develop technology that helps large numbers of people work together more effectively to solve difficult real-world challenges. It seems that many of our most critical collective decisions have results (e.g. in terms of climate, economic prosperity, and social stability) that none of us individually want, suggesting that our current collective decision-making processes are deeply flawed. I’d like to contribute to fixing that problem. My approach is inherently multidisciplinary, drawing from artificial intelligence, collective intelligence, data science, operations research, complex systems science, economics, management science, and human-computer interaction, amongst other fields. My background includes a PhD in Artificial Intelligence from the University of Illinois, as well as research and teaching positions at Hitachi, Boeing, Pennsylvania State University, the University of Zurich, the Nagoya Institute of Technology, the National Institute of Advanced Industrial Science and Technology (AIST) in Tokyo, Japan, and the School of Collective Intelligence at UM6P in Morocco.

Chairs

Register

Free, open webinar!

Online Symposium: Dec 5th, 12-3pm PST = 8-11pm GMT = Dec 6th, 7-10am AEDT

References

De Liddo, A., Sándor, Á., & Buckingham Shum, S. (2012). Contested Collective Intelligence: Rationale, Technologies, and a Human-Machine Annotation Study. Computer Supported Cooperative Work, 21(4-5), 417-448. https://doi.org/10.1007/s10606-011-9155-x

Flack, J., Ipeirotis, P., Malone, T. W., Mulgan, G., & Page, S. E. (2022). Editorial to the Inaugural Issue of Collective Intelligence. Collective Intelligence, 1(1), 1-3. https://doi.org/10.1177/26339137221114179 

Gupta, P., Nguyen, T. N., Gonzalez, C., & Woolley, A. W. (2023). Fostering Collective Intelligence in Human–AI Collaboration: Laying the Groundwork for COHUMAIN. Topics in Cognitive Science, (Online: 29 June 2023), 1-28. https://doi.org/10.1111/tops.12679

Hogan, M. J., Barton, A., Twiner, A., James, C., Ahmed, F., Casebourne, I., Steed, I., Hamilton, P., Shi, S., Zhao, Y., Harney, O. M., & Wegerif, R. (2023). Education for collective intelligence. Irish Educational Studies,  (Online: 5 Sept. 2023), 1-30. https://doi.org/10.1080/03323315.2023.2250309 

Iandoli, L., Quinto, I., De Liddo, A., & Buckingham Shum, S. (2016). On online collaboration and construction of shared knowledge: Assessing mediation capability in computer supported argument visualization tools. Journal of the Association for Information Science and Technology, 67(5), 1052-1067. https://doi.org/10.1002/asi.23481

Klein, M. (2012). Enabling Large-Scale Deliberation Using Attention-Mediation Metrics. Computer-Supported Cooperative Work, 21(4–5), 449–473. https://doi.org/10.1007/s10606-012-9156-4

Malone, T. W., & Bernstein, M. S. (2015). Handbook of Collective Intelligence. The MIT Press. https://cci.mit.edu/cichapterlinks/

O’Neill, T., McNeese, N., Barron, A., & Schelble, B. (2022). Human–Autonomy Teaming: A Review and Analysis of the Empirical Literature. Human Factors, 64(5), 904-938. https://doi.org/10.1177/0018720820960865 

Rick, S. R., Giacomelli, G., Wen, H., Laubacher, R. J., Taubenslag, N., Heyman, J. L., Knicker, M. S., Jeddi, Y., Maier, H., Dwyer, S., Ragupathy, P., & Malone, T. W. (2023). Supermind Ideator: Exploring generative AI to support creative problem-solving. https://doi.org/10.48550/arXiv.2311.01937

Schuler, D. (2014). Pieces of Civic Intelligence: Towards a Capacities Framework. E-Learning and Digital Media, 11(5), 518-529. https://doi.org/10.2304/elea.2014.11.5.518

Seeber, I., Bittner, E., Briggs, R. O., de Vreede, T., de Vreede, G.-J., Elkins, A., Maier, R., Merz, A. B., Oeste-Reiß, S., Randrup, N., Schwabe, G., & Söllner, M. (2020). Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57(2), 103174. https://doi.org/10.1016/j.im.2019.103174

Suran, S., Pattanaik, V., & Draheim, D. (2020). Frameworks for Collective Intelligence: A Systematic Literature Review. ACM Computing Surveys, 53(1), Article 14. https://doi.org/10.1145/3368986

van Gelder, T., Kruger, A., Thomman, S., De Rozario, R., Silver, E., Saletta, M., Barnett, A., Sinnott, R. O., Jayaputera, G. T., & Burgman, M. (2020). Improving Analytic Reasoning via Crowdsourcing and Structured Analytic Techniques. Journal of Cognitive Engineering and Decision Making, 14(3), 195-217. https://doi.org/10.1177/1555343420926287
