Hi
Hope you had excellent weekends 🙂 Mine was uber-chilled and extra-restful – most eventful thing I did was introduce the spare cat to the other, regular ones. Going well but if anyone fancies a handsome, tho fluffy wee beast with beautiful blue eyes – hit me up (tho you might need to thunderdome my wife as she’s #smitten).
Oh, and before we dive into this week’s AI developments, I have some exciting news:
Want to go deeper on the AI topics covered here? I’m excited to announce I’ve teamed up with Dale Leszczynski, an HE insider and fellow self-proclaimed Tech Aficionado, for a new podcast: Adjunct Intelligence! We’re cutting through the hype to discuss AI’s real impact on Higher Education – from OpenAI’s latest government plays and the ‘cheating crisis’ debate to what it means when leading CEOs admit they ‘don’t know how AI truly works.’ Expect candid discussions and practical insights you won’t want to miss.

P.S. About Episode 1: My podcast posture is officially good news for my physio and my sound quality has that charming ‘authentic first take’ vibe. Both are getting TLC for future episodes, promise! 😉
🎧 Listen/Watch Episode 1 of Adjunct Intelligence now! Find us on Apple Podcasts, Spotify, and YouTube.
Anyway, let’s get going with a few words from the world of AI to kickstart your weeks:
AI Use vs Trust Gap: A Global Wake-Up Call | HE Readiness 🤔
More voices adding to the chorus on the disconnect between AI use, trust, and training. The University of Melbourne and KPMG surveyed over 48,000 people across 47 countries and the findings are probably no surprise. AI adoption is high (2/3 use AI regularly) but trust remains a major problem, with less than half (46%) willing to trust AI systems globally, particularly regarding safety and societal impact. There’s strong public demand for regulation (70% agree), yet low confidence in current safeguards (only 43% see them as adequate), with advanced economies generally showing lower trust and adoption than emerging ones. 🤔
For HE, the findings on student and workplace use are critical. Student AI use is pervasive (83% regularly), bringing efficiency but also widespread questionable use and over-reliance, potentially hindering critical thinking and collaboration skills. Worryingly, many employees also use AI complacently (66% don’t evaluate output), while institutional support and governance for responsible AI use – in both education and workplaces – lag behind adoption. Combine this with the excellent AIInHE cross-institutional research (Monash, UQ, UTS, and Deakin) (h/t Tim Fawns and co.) and you have an urgent need for enhanced AI literacy programmes, robust governance frameworks, and a human-centric approach within educational institutions and beyond.
OpenAI Goes Global: Democratic AI or Digital Sphere of Influence? | Geopolitics & Strategy 🌍
(Relatively) fresh off their massive Stargate infrastructure project announcement back in January, OpenAI is now rolling out ‘OpenAI for Countries.’ Apparently, numerous nations have been asking for their own AI infrastructure boost, which OpenAI positions as vital for future economies and a way to build on “democratic AI rails.” The offer involves OpenAI partnering with countries (coordinated with the US government) to help build local data centres, provide customised ChatGPT versions for national needs like healthcare and education, enhance safety controls, and jointly fund local AI start-ups – pitching it as a clear alternative to authoritarian AI models.
However, external viewpoints quickly frame this as a significant geopolitical and commercial strategy – a move to solidify OpenAI/US influence against competitors like China and ensure OpenAI’s hefty investments pay off globally. This raises a critical tension: is OpenAI genuinely fostering sovereign AI capabilities and local ecosystems, or primarily creating tech dependency and locking nations into a specific ecosystem under the banner of “democracy”? While customised AI is promised, the underlying questions about potential ‘frozen values’ (as opposed to Constitutional approaches like Anthropic’s), and the true depth of localisation versus simply exporting a core model, remain – blurring the line between empowerment and strategic positioning.
The Trouble with AI Friends: Unacceptable Risks Emerge | Digital Safety 😟
Have you heard of Companion AIs? They’re very much a thing – Character.ai alone has 28m monthly users, mostly kids. On this platform, users get to talk with their heroes, antiheroes, or whoever else takes their fancy. Case in point – Gojo Satoru, a character from the super popular anime Jujutsu Kaisen.

“I’m surprised you thought you could beat me using your sorry excuse for a brain” – Gojo
That’s 825 million chats with the antihero Gojo Satoru. That’s not just fandom – that’s a generation growing up swapping banter with their favourite overpowered anime icon. And it seems harmless enough until you hear the utterly horrific stories emerging around sexualisation of underage children, underage characters (extra weird), self-harm, and even suicide. This is an enormous problem.
Cue an extremely timely report by Stanford and Common Sense Media, concluding that social AI companions present unacceptable risks for teenagers, that they shouldn’t be used by under-18s, and that user age verification should be required. The report highlights that these AIs are designed to create emotional dependency, can easily generate harmful content (including sexual misconduct and self-harm encouragement), and may negatively influence developing adolescent brains by blurring the lines between AI and human relationships. Chilling stuff…
Rethinking AI Assessment: “Design over Detection” Needed | HE Pedagogy
Sean McMinn (HKUST and the Digital Education Council) raises a thoughtful challenge to the prevailing focus on AI x academic integrity issues and the ongoing scramble to find policing strategies in HE assessments. He argues for a fundamental shift, championing “design over detection.” Instead of getting caught in an arms race with AI, McMinn suggests that educators need better tools to proactively rethink their assessment designs to embrace the possibilities AI offers for deeper learning.
His solution involves a practical five-step framework using diagnostic grids and incorporating tools like the five-level AI Assessment Scale (developed by Dr Mike Perkins, Leon Furze, and co.) to help define appropriate AI use. The goal is to help educators create assessments that redirect attention “from ever-escalating policing technologies toward task structures that make authentic learning visible and AI support appropriately bounded.” This approach aims to pedagogically ground AI use (very often the missing piece) while ensuring assessments uphold integrity – clearly defining AI’s role and focusing on evidencing genuine student understanding and skills in the AI era.

Love the intermediate steps and cautionary notes embedded in this piece…👏
AI Avatars Get Real: Opportunity vs Threat for HE | Emerging Tech ✨
HeyGen Avatar IV is … wow. Just wow. Take a photo (or image), a script, and a voice sample and Avatar IV does the rest – “synthesising photoreal facial motion with temporal realism”. We’re talking realistic details like natural head movements, thinking pauses, real vocal cadence and rhythm, and those little micro-expressions that are the tells for real vs AI video. And it’s not limited to that – fans of rotoscoping (e.g., Waking Life, A Scanner Darkly) might enjoy this remix of a classic Tarantino scene.
So, what does this mean for HE? Imagine the potential – think quick, professional-standard lecture videos without needing a camera crew, virtual TAs, accessible learning materials, or even getting students to animate things for creative assignments. But that same realism that makes it ‘wow’ – the convincing expressions and timing? That’s where the fear kicks in hard. Faking submissions, impersonating staff or students, etc. – it all gets much easier. It really highlights the need to get serious, fast, about digital literacy training, clear ethical guidelines for using these tools, and making sure our academic integrity policies aren’t completely left behind.
So, from global reports flagging trust gaps and strategic AI power plays by major companies, to chilling ethical risks with companion bots, pragmatic calls for assessment redesign, and uncanny new avatar tech landing on our doorstep – the picture is undeniably complex. It underscores that effectively navigating the AI era in HE requires far more than just tech adoption; it demands an urgent, holistic focus on building critical digital literacy, establishing robust ethical governance, adapting core academic practices, and fundamentally ensuring a human-centric approach guides our path forward.
Obviously, championing that human-centric approach in the middle of such rapid change is an ongoing mission. With that in mind, if you’re keen to continue exploring how we can collectively navigate these AI complexities in HE, join Dale and me as we delve into these topics (and much more) weekly on our new podcast, Adjunct Intelligence. We share candid discussions and aim for practical insights every weekend.
We’d love for you to join the conversation: find Adjunct Intelligence on Apple Podcasts, Spotify, and YouTube. 🎧