AI and the Future of HE – 19th May 2025

Hi

Hope you had great weekends one and all. The weather in Hanoi has been spectacular and I spent a lot of the weekend poolside – while also balancing the ongoing integration challenge of the spare cat we seem to have inherited 🙀

Speaking of new intelligences, enjoying the newsletter but looking for more of a deep dive on this and related content? Definitely check out our companion podcast Adjunct Intelligence. Join Dale Leszczynski and me as we set out on a weekly exploration of what’s new in AIxHE – only two episodes in and we’re already big in Finland (Hei kuuntelijamme Suomessa! – hello to our listeners in Finland! 🇫🇮🤩).


Available on all your usual podcast platforms – links (and a sneaky sample) at bottom

Anyway, shameless plug done – let’s get into the headlines from the world of AI:

AI Reshapes Work: Microsoft’s “Frontier Firm” and Real-World Impacts | Workplace AI ⚙️

You’ve heard pundits and analysts talking about the changes AI will bring to the business landscape, knowledge work, and education in similar terms to the Industrial Revolution – my own go-to comes from Deloitte, comparing these changes to the dawn of the Internet. Microsoft and LinkedIn’s 2025 Work Trend Index Annual Report is a new addition to this chorus of voices – drawing on global surveys, Microsoft 365 telemetry, LinkedIn labor trends, and insights from AI-native startups and economists. Super interesting timing with MS’s recent decision to lay off 7k employees to reallocate resources for robust AI investment. Either way, the report makes no bones about the urgency of this moment – revealing that 82% of leaders believe this year is critical for rethinking core strategies and operations in response to AI’s rise.

Key to the report is the idea of the “Frontier Firm”: an organisation built around readily available “intelligence on tap,” dynamic “human-agent teams,” and a new role for employees as “agent bosses”. The open challenge for education, of course, is what a “Frontier University” or academic department might look like. The question is worth asking, as these firms are reportedly seeing significant success, with 71% of their workers saying their company is thriving, compared with only 37% globally. While human creativity and ambition will drive new economic value, this transition is profound. Microsoft’s core message emphasises immediate action, posing a critical question to all of us in HE and education more broadly: how do we adapt our curriculum, pedagogy, research, institutional frameworks… just about everything we do?

AI’s Medical Check-Up: Data Tsunamis & Sci-Fi Futures? | Health AI 🩺

Ever wondered what happens when AI gets its hands on a nation’s health data? The UK’s NHS is showing us, training its “Foresight” AI on a mind-boggling 57 million anonymised patient records! This isn’t just a pilot study – it’s a data tsunami aimed at predicting everything from heart attacks to hospital surges, and potentially transforming how we approach public health and allocate precious resources in the process (with an enormous open invitation for academic research and policy analysis 🤓). The current focus is reportedly COVID research, but imagine that power – an AI that’s learned from nearly everyone, spotting patterns invisible to the human eye. This is a massive step (prompting valid questions about data governance, algorithmic bias, and the future of epidemiological research), but what truly mind-bending AI health futures are just over the horizon?

Get ready, because things are about to get even more sci-fi. Picture this: Tsinghua University’s “Agent Hospital,” a virtual world where AI doctors treat AI patients, evolving at lightning speed. Ambitious? Absolutely, and a stunning example of advanced simulation that could inform how we train future healthcare workers and model complex systems! Then there’s OpenAI’s “HealthBench”, a high-stakes benchmark designed with hundreds of real-world doctors to see if AI health chatbots are actually helpful and, crucially, safe. There are open questions about this research, but – going back to that internet-in-the-90s moment – remember when banking slowly started moving online? This feels like the beginning of that… and if something as important as healthcare makes the jump to “yeah, I trust AI with this,” then what are we going to hold back?

Google Extends Gemini AI to Under-13s: Innovation Meets Child Safety Concerns | Digital Parenting 🛡️

Google is making its Gemini AI chatbot available to children under 13 through parent-managed Family Link accounts, starting in North America, with 🇦🇺 to follow later this year. While Google states children’s data from these interactions will not be used to train its AI models, and access will be “on” by default, the company itself warns that Gemini can “make mistakes” and may expose children to undesirable content. Parents are advised to discuss critical thinking, fact-checking, and the non-human nature of AI with their children, and they will be notified upon their child’s first access, with the option to disable the feature.

This rollout has prompted significant concern from child safety experts and academics, who highlight that children may struggle to differentiate AI-generated content from truth, or AI interactions from human ones. Organisations like the eSafety Commission and UNICEF warn that AI chatbots can provide harmful or misleading information and manipulate young users, who are still developing critical thinking skills. The move also underscores the ongoing challenge for parents in managing new technologies beyond social media, prompting renewed calls for “digital duty of care” legislation in places like Australia, where such proposals are currently stalled, unlike in the EU and UK. It goes without saying that this is wildly irresponsible on the part of Google – unleashing this on unsuspecting, unprepared parents. Hopefully I’m wrong, but I’ll be watching with some trepidation 🫣

Beyond “Friendly” AI: Demanding More Perspective | Critical AI Use 🛠️

I do miss Sycophant-GPT from time to time – though mine was a bit more vanilla than the version that apparently told a user their plan to sell “shit on a stick” was “genius” 🤣 That said, I love the piece by Mike Caulfield in The Atlantic tearing into these tools as “justification machines” that reinforce user biases rather than expanding understanding. This makes current AI, designed with “personalities” to “match the user’s vibe,” potentially more dangerous than social media in confirming even ill-advised notions, effectively destroying AI’s true potential by offering “opinions from nowhere”. And it’s especially dangerous when considered alongside the Google news above…

The alternative, Caulfield suggests, is to view AI as a “cultural technology” or a modern “memex” – an interface to the enormous, often contradictory landscape of human knowledge, rather than an oracle with opinions. He argues AI should connect users with source perspectives (as opposed to untraceable “information smoothies”), allowing them to work through this complexity with full knowledge of where ideas come from. To this end, Caulfield has developed a “SIFT Toolbox,” a detailed prompt designed to make LLMs act more like research assistants that systematically surface conflicting viewpoints and reduce hallucinations. Well worth a look – I broke it down with both Gemini and Claude, and the consensus is that it’s an interesting addition. Ultimately, Caulfield calls for AIs with “less personality and more perspective” – delivering human expertise – a wonderful principle for developing future critical thinkers.

AI Image Consistency: Midjourney & Runway’s New Groove | Creative AI 🎨

Been a minute since we looked at the creative suite but they’ve not been idle. Both Midjourney and Runway have gotten into image consistency in a big way. Midjourney’s Omni-reference gives you the ability to lock in characters, objects, props – even whole visual motifs – from a single reference image! Super interesting with some very exciting use-cases for creatives and beyond – worth a look. Runway Gen-4 references allow you to combine location, 3D assets (as object references), and distinct style references – paving the way for some seriously cool video outputs from what feels like next to nothing in terms of traditional input.

For HE, this surge in visual consistency is a game-changer. It unlocks the potential for crafting entire narrative series with recurring, recognisable characters for diverse courses like literature or history; developing dynamic, consistent training simulations for fields such as medicine or engineering; and empowering media arts students to produce sophisticated short films or detailed virtual environments with much greater stylistic control and far less friction than before. Essentially, it’s about supercharging visual storytelling, research communication, and creative pedagogy across the disciplines. 🤩


So, where does all this leave us?

It’s clear AI is far more than just the latest tech buzz – it’s a fundamental force reshaping our work, our creativity, our health systems, and even the ethical questions we face daily. For those in HE, this isn’t a time for passive observation. It demands active engagement: to critically dissect these tools, to innovatively and responsibly weave them into our teaching and research, and to equip our students to not just survive, but to help shape an AI-powered future. The ongoing AI story presents both significant challenges and profound opportunities for HE to lead with clarity and purpose.


Want a taste of how we break these topics down on the Adjunct Intelligence podcast?

Here’s a quick dive into the “Frontier Firm” concept and AI’s impact on the future of work, which we touched on above. Hit play for a sample from our latest episode with Dale Leszczynski!

Want to hear more? You can find full episodes of Adjunct Intelligence wherever you get your podcasts! 🎧
