AI could help students meet 'major challenges facing humanity' if we reframe education, say researchers
The paper cautions that AI could be a 'cognitive poison', limiting the progress of students, if the education system does not adapt.
Education must be reframed to integrate artificial intelligence in a way that can meet the "major challenges facing humanity", a conceptual research paper argues.
The University of Cambridge paper suggests that AI could be used to help students learn and work collaboratively while drawing on different sources of knowledge.
It urges educators and policymakers to consider a move to "dialogic" learning, in which teachers and students talk more, explore problems together, and test ideas from different angles.
The paper's authors argue that "in order to integrate AI into education in a way that can meet the major challenges facing humanity, ranging from ecological crisis to the future of democratic societies, we must reframe education".
Outlining how this might work in practice, the paper reimagines a basic science lesson about gravity.
In a conventional lesson, students might be taught key principles, laws and formulae relating to gravity, which they are expected to memorise and reproduce later.
In the dialogic version, they begin with a question: "Why do objects fall to the ground?"
The paper imagines students discussing this in groups, then running their ideas past an AI chatbot that takes on the guise of different thinkers such as Aristotle, Isaac Newton, and Albert Einstein.
Approaches like this, the authors suggest, would have the advantage of placing students inside scholarly conversations relevant to the national curriculum, and of helping them to grasp key concepts by discussing and reasoning their way through them.
Co-author Rupert Wegerif, a professor of education at the University of Cambridge, said: "Every so often, a technology comes along that forces a rethink of how we teach.
"It happened with the internet, with blackboards, even with the development of writing. Now it's happening with AI.
"If ChatGPT can pass the exams we use to assess students, then at the very least we ought to be thinking deeply about what we are preparing them for.
"One thing we should consider is education as a more conversational, collaborative activity, an approach first advocated by Socrates, but also highly relevant to a digitally connected world with planet-sized problems," he added.
However, the paper also warns that if the education system does not adapt, AI risks becoming a "cognitive poison" that limits students' progress.
"If educational systems remain bound to the traditional print-based assumptions and assessment methods, GenAI (generative AI) is likely to appear as a cognitive poison," the paper argues.
"For example, students who feel under pressure to produce essays demonstrating their personal capacity to produce a critical synthesis of large amounts of knowledge may naturally rely heavily on GenAI, because it can do this task better than they can. In the process, they may diminish their personal creative and critical engagement and sense of agency."
Professor Wegerif said: "The way we teach and learn needs to change. AI can be part of the remedy, but only with approaches to learning and assessment that reward collaborative inquiry and collective reasoning.
"There is no point just teaching students to regurgitate knowledge. AI can already do that better than we can."
Co-author Imogen Casebourne, a researcher at Hughes Hall in Cambridge, said generative AI has arrived at a "time when there are many other pressures on educational systems".
"The question is whether it is adopted in ways that enable students to develop skills, such as dialogue and critical thinking, or ways that undermine this," she said.
The paper is published in the