
HE wrapped for 2025

From AI to play, academics from around the world choose the higher education stories that stood out for them in 2025.

The creative scar left by GenAI

The most thought-provoking story for me in 2025 is the evidence of a creative scar left by generative AI, revealed in our study published this year. We found that while ChatGPT boosts short-term creative performance, this enhancement is a fragile illusion. Once AI support is removed, individual creativity drops, yet a striking homogeneity in thought and expression persists for months. It acts like a cognitive imprint – a scar that remains even after the tool is gone.

This finding sharply reframes my view of technology in education. It’s no longer just about using AI, but about preserving authentic learning and independent thinking. If students rely on AI to generate ideas, are they truly developing their own creative and critical capacities, or are they unwittingly adopting a standardised algorithmic mode of thought? For higher education, this calls for a pedagogical pivot: we must design learning that teaches students to think with, against, and beyond AI, not just through it. The goal is to foster creativity that endures even when the tool is switched off, because that is where true education lies.

– Guiquan Li, Associate Professor, Department of Managerial and Social Psychology, Peking University, China

Preparing students for jobs that don’t exist yet

This provocative conversation between edtech advocate Sinead Bovell and AI economist Professor Ajay Agrawal cuts straight to the heart of key issues we need to seriously consider and address. These include how generative AI is already reshaping the labour market and how we can expect our jobs to evolve. It also explores the impact of AI on new college graduates and how education institutions must adapt to prepare people for jobs that don’t look anything like the ones we know today.

The wide-ranging discussion questions knowledge and judgement, the formulation of the knowledge economy, the relationship between skills and jobs, and how to prepare for new workflows collaborating with other intelligences.

– Ari Seligmann, Associate Dean, Education, Faculty of Art, Design and Architecture, Monash University, Australia

Carbon-based intelligence versus silicon-based intelligence

At the recent AI for Education conference in November 2025, Mairéad Pratschke, in her keynote, highlighted the importance of context in both teaching and learning and in the development of AI. This resonated with me as an engineering physics educator. A key factor distinguishing students who have attained mastery from those who have not is the ability to apply procedural knowledge to the correct context. While identifying the importance of context seems to narrow the gap between the two intelligences, we recognise that many aspects of context in the continuum of the physical and social world cannot be easily or accurately encoded for AI training. I added this to my collection of examples distinguishing “carbon-based intelligence” from “silicon-based intelligence.”

In his closing keynote at the same conference, David Hung spoke of explicit, implicit, and embodied (tacit) knowledge – the latter two being difficult or impossible to encode in AI. For us humans, the importance of solid foundational knowledge cannot be overstated, even before we discuss higher-order thinking. As we navigate this new world, we will inevitably have to think more deeply about thinking and learn more about learning – and our students are having to do the same.

– Shen Yong Ho, Executive Director, Institute for Pedagogical Innovation, Research and Excellence, Nanyang Technological University, Singapore

UK HE policy: the writing is on the wall

Three things stand out to me as massive HE stories this year.

One is at the University of Dundee, where the Scottish government has (more or less) made more than £60 million available to the university following its slide into financial peril. The debate around this, the stories that have come out around management and governance, the interplay both with ongoing legislation and with the upcoming elections, and the wider implications elsewhere in the UK have all been fascinating, even if it is also an incredibly sad turn of events for all the staff caught up in it.

The other two stories are both about UK policy as it pertains to HE. May’s immigration white paper was pretty seismic, and while the levy and the graduate route get all the sector attention, I feel the longer-term implications for staff and students fall under the radar (see my Wonkhe article about the repeated broken promises to international students). Equally important, though much more overlooked, was the move of English skills and apprenticeship policy from the Department for Education to the Department for Work and Pensions. It’s a slow-burner, but I think the long-term consequences for how the government thinks about education, skills, young people, and employers could end up being rather significant.

– Michael Salmon, News Editor, Wonkhe, UK

The return of traditional exams: are we trading one problem for another?

The decision by New Zealand universities to stop using AI-detection software is, in my view, a necessary and positive step. These AI detectors are unreliable, opaque, and risk unfairly penalising students, causing significant anxiety. In countries such as New Zealand and Australia, where public anxiety about AI remains high, moving away from technological policing reflects a more ethical and evidence-informed approach to academic integrity. However, what I am increasingly observing is a counter-reaction: a return to invigilated, closed-book, on-site, high-stakes examinations.

While understandable as a risk-management response, this shift raises an important question: are we genuinely improving learning or simply reverting to familiar control mechanisms? Such assessment formats often privilege memorisation and speed over deeper understanding, application, and critical thinking. They also risk making learning once again exam-driven at a time when higher education should be rethinking what knowledge, practices, and capabilities matter in an AI-rich world.

If abandoning AI detectors primarily leads to more high-stakes testing, we may be addressing one concern while recreating another. The real challenge is not choosing between AI detection and traditional exams, but designing assessments that recognise AI’s presence while still requiring intellectual agency from students. This calls for confidence in curriculum design, assessment literacy, and a willingness to move beyond both surveillance technologies and assessment practices that predate the challenges we now face.

– Danping Wang, Associate Professor, Asian Studies, University of Auckland, New Zealand

More play in learning and teaching

Play is integral to childhood. We strengthen our cognitive, physical, social, and emotional muscles as we play. Yet play somehow seems to diminish once we enter school. This is ironic, as the workplace is arguably more similar to kindergarten, where play abounds and playing nicely with others is crucial, than to the classroom, where the sage-on-the-stage model still dominates and most work is done individually and silently.

My colleagues and I have been exploring how play can be integrated into higher education. After all, research shows that play can foster relational safety, create a conducive classroom environment, lower barriers to learning, and enhance students’ positive affect and motivation. We want to encourage more playful, exploratory attitudes, with room for some risk-taking and plenty of sociality that allows natural curiosity and leads to deeper, more enduring learning.

– Ong EeCheng, Associate Professor, Economics, National University of Singapore, Singapore

Stop guessing and start acting

My standout HE story of 2025 is HEPI’s Student Generative AI Survey 2025. It’s one of the first UK studies that cuts through the hype and gives clean data on how students are actually using generative AI.

The scale of change is stark. Use of generative AI for assessments jumped from 53% to 88% in a single year. Overall use of any AI tool is now at 92%. Students mainly use generative AI to explain concepts, summarise readings, and generate research ideas. These are ordinary learning practices, yet only 29% of students feel their institution encourages AI use. Given that 67% of students think that using AI is essential in today’s society, this mismatch tells its own story.

The report also surfaces a growing equity gap. Use is higher among male students, STEM and health students, and those from more advantaged backgrounds. That matters because the students who are least confident are also the most anxious about being accused of misconduct. Policies look clear on paper, but many students still experience mixed messages in practice.

What makes this report stand out is its grounding. It gives the sector real baselines at a moment when debate is loud but evidence is thin. It shows where confidence is rising, where trust is fragile, and where institutions need to stop guessing and start acting.

– Professor Sam Illingworth, Department of Learning and Teaching Enhancement, Edinburgh Napier University, UK

Main image: Kelly Sikkema on Unsplash

This post is opinion-based and does not reflect the views of the London School of Economics and Political Science or any of its constituent departments and divisions.

The post HE wrapped for 2025 first appeared on LSE Higher Education.
Continue reading at LSE Higher Education Blog
blogs.lse.ac.uk/highereducation
