AI and the Future of HE – 17th March 2025

Hi

Hope your weekends were good, one and all.  It’s nice, quiet, and chilled up here for a change as we watch the weather clear up and we start to move into the business end of the year.

Anyway, a few notes from the world of AI for your Mondays:

UK Government’s AI Gambit: When Politicians Ask ChatGPT for Policy Advice! | AI Governance 2025 🇬🇧

In January, the UK government announced that AI will be “mainlined into the veins” of the nation.  Fast forward a month or two and the government is making bold moves in AI adoption, with PM Keir Starmer betting £45 billion on AI-driven efficiency gains and a mandate that no human should perform tasks AI can do “better, quicker and to the same high standard”. But in a timely twist, Technology Secretary Peter Kyle has been caught consulting ChatGPT for actual policy advice! Thanks to an unprecedented Freedom of Information request, we now know Kyle asked the AI why UK businesses have been slow to adopt AI – and even which podcasts he should appear on to discuss tech policy.

This revelation raises profound questions about AI’s role in democracy.  When officials ask ChatGPT about AI regulation, they’re effectively consulting tools built by the very companies they’re meant to regulate – a serious conflict of interest.  As one commentator aptly noted: “If our public officials rely on chatbots for ideas about managing digital tech, they’re deferring to tech companies to set the agenda”. The precedent is clear: government AI consultations are now subject to public scrutiny, but the deeper question remains – should these systems, trained on internet data, influence public policy more than actual human constituents? 🤔💭

Digital Twin Revolution: AI Agents Replicating Human Behaviour at Scale! | Research Trends 2025  🧪

Imagine having real-time insights into how millions will vote, what products they’ll buy, or how they’ll react to policy changes – all without conducting a single human interview!  This fascinating frontier in AI development involves creating synthetic research populations: AI agents conditioned on human demographic data drawn from census information, credit card behaviour, social media, and more. Stanford researchers demonstrated this potential in their groundbreaking study “Generative Agent Simulations of 1,000 People,” where these digital twins replicated survey responses with a remarkable 85% accuracy relative to humans retaking their own surveys two weeks later! Instead of waiting months for traditional survey results with limited signal, these systems promise continuous feedback streams at a fraction of the cost and time. 🤯
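Worth unpacking what that 85% actually means: the figure is normalised – the agent’s raw match rate against a person’s answers is measured relative to how consistently that person reproduces their *own* answers on a retest two weeks later. A minimal sketch of that idea (all the numbers below are hypothetical, just to show the arithmetic):

```python
# Toy sketch of the normalised-accuracy idea behind the "85%" figure:
# the agent's raw match rate against a person's survey answers is
# divided by how consistently that person reproduces their OWN answers
# two weeks later. All numbers are hypothetical.

def normalised_accuracy(agent_matches, human_retest_matches, n_items):
    """Agent accuracy expressed relative to the human's own consistency."""
    agent_acc = agent_matches / n_items          # agent vs. human, wave 1
    retest_acc = human_retest_matches / n_items  # human wave 2 vs. wave 1
    return agent_acc / retest_acc

# Example: the agent matches 68/100 of a person's answers, but the person
# only matches 80/100 of their own earlier answers on the retest.
print(round(normalised_accuracy(68, 80, 100), 2))  # 0.85
```

The design point is that humans are noisy instruments too – so judging an agent against a single snapshot of someone’s answers would unfairly penalise it for inconsistency the person themselves would show.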

The philosophical implications are mind-bending! When AI knows us as well as (or better than) we know ourselves, what does this say about human predictability and free will? Scepticism remains strong, with polling expert Nate Silver calling this “the single worst use case for AI” he’s ever heard. But as these digital simulations converge on ground-truth predictions, are we witnessing the future of research or teaching algorithms to fabricate opinions? 💭✨

Mind the Gap: When edTech Companies and Educators Speak Different AI Languages! | Education Research 2025 🔍

In a groundbreaking study from Cornell University, researchers have uncovered a critical disconnect in how AI is being implemented in education (Spoiler: the tech providers and educators are focusing on different things)!  In a series of interviews, three categories of potential harms were identified: technical harms (bias, privacy violations, and hallucinations), human–AI harms (academic integrity issues), and broader impact harms (inhibiting learning, decreasing autonomy, etc.).  Turns out the tech people are focused more on technical issues while educators are more concerned with broader impacts – and while educators feel equipped to address technical harms through their teaching practice, they say they need support with those broader impacts.

Real talk: this research perfectly complements the “AI Unfiltered” initiative we are kicking off at our RMIT Community of Practice – seeking to create spaces for exactly these kinds of nuanced conversations about AI.  As pro as I/we can sound around these tools and this ever-changing environment, it’s essential to take a level look at the uncertainty – and the outright existential challenges they bring. By bringing together diverse perspectives and embracing intellectual humility, we can foster the critical engagement needed to navigate AI’s uncertain implications in HE.  Rather than forcing optimism or pessimism, AI Unfiltered encourages evidence-based exploration that acknowledges both the transformative potential and legitimate concerns about these technologies.  💡✨

Beijing’s Education Revolution: Mandatory AI Curriculum Launches Citywide! | Global Education Trends 2025 🎓

China’s absolute AI tear has moved from the labs into the classroom!  Beijing has announced that all primary and secondary schools will implement mandatory AI education beginning autumn 2025 (original Chinese source here and English overview here from the excellent Fengchun Miao (UNESCO)).  Released earlier this month, the “Work Plan to Promote AI Education in Primary and Secondary Schools (2025-2027)” will impact millions of students across Beijing.  Every student will receive at least 8 hours of AI education a year, with the choice of approach (standalone courses or integrated content) left to individual schools.  What makes this initiative particularly interesting is its strong alignment with UNESCO’s AI competency framework – potentially setting a global precedent for how AI literacy can be systematically embedded in national curricula. 👏

I used to live in Beijing back in the day. It’s an amazing place and I can tell you they never do things by halves – including here. The city government’s plan includes developing a government-validated AI Education App Store and launching an ambitious teacher training programme. While many countries are still debating whether AI belongs in classrooms, Beijing is moving decisively to prepare its next generation for an AI-powered future. As these students graduate into the workforce over the coming decade, will this create a significant competitive advantage? Looks like the global race for AI literacy has officially entered the classroom, and Beijing has taken an early lead. 🌏💡

Beyond Interfaces: How AI Protocols Are Democratising Creative Software | Tech Innovation 2025 ✨

Love 3D, hate Blender?  Struggling with After Effects?  Surrounded by Unity manuals but not getting anywhere?  Claude’s Model Context Protocol might be just the thing for you. Essentially, it’s a way to use natural language to interact with complex software – effectively bypassing tricky interfaces and skipping much of the learning curve. While it was released a while back, there is a growing number of integrations available – sorry, no AE or Unity yet (got carried away with my examples there), but Blender MCP is here and it’s amazing!
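Under the hood, MCP is a JSON-RPC-style protocol: your natural-language request gets translated by the model into a structured tool call that an MCP server (wrapping Blender, say) actually executes. A rough sketch of what one request might look like on the wire – the tool name and arguments here are purely illustrative, not the real Blender MCP schema:

```python
import json

# Hypothetical MCP-style tool call: the assistant turns "add a cube at
# the origin" into a structured JSON-RPC request for the server to run.
# Tool name and arguments are illustrative, not an actual MCP server's API.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_object",
        "arguments": {"type": "cube", "location": [0, 0, 0]},
    },
}
print(json.dumps(request, indent=2))
```

That separation is the clever bit: the model never touches Blender’s interface at all – it just emits structured requests, and the server does the fiddly work.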

Or how about this beautiful mashup/workflow between Claude + Magnific + Runway. Yes, I know Veo 2 is better out of the box, but that restyled video could genuinely be sitting somewhere gorgeous in the Southern Alps near Wanaka, New Zealand.


The AI revolution isn’t coming; it’s already here. The path forward isn’t about choosing between resistance and wholesale adoption, but crafting thoughtful integration that enhances human potential. The most successful institutions will move beyond reactive policies, asking not “how do we prevent AI misuse?” but “how might AI transform what’s possible?” Those who experiment, learn, and adapt now will thrive in our AI-augmented future. The most exciting chapters of this story remain unwritten – how will you help write them?
