AI and the Future of HE – 26th May 2025

Hey

Hope you had a great weekend. It was hosing down in Hanoi, so we ended up spending a stack of time chilling indoors, bingeing The Penguin and One Day in equal measure. No prizes for guessing who chose which, but both excellent in their own ways #recommend

Anyway, enough of that – let’s get into the headlines from the world of AI. Things have been busy:

AI’s $6.6 Trillion Question: Can HE Bridge the Skills Chasm? | Future of Work 🌉

Another week, another AI report trumpeting massive economic shifts. LinkedIn’s “AI and the Global Economy” adds serious weight, projecting Generative AI could unleash a staggering $6.6 trillion in productive capacity across just five major economies (US, UK, India, France, Germany). Think Reid Hoffman’s “Superagency” concept (overview here in a recent LSE talk) in full swing. Businesses are already capitalising, with 70% leveraging AI for innovation and creativity, outpacing its use for mere automation (60%). The benefits are clear: 76% report significant time savings, and half are seeing revenue boosts of 10% or more. These aren’t just abstract numbers; they signal a profound economic reordering, but what’s the urgent takeaway for HE in this AI gold rush?

Here’s the crunch: that same LinkedIn report flags a massive skills gap as the key bottleneck. Over half of businesses are struggling due to a lack of AI technical skills (57%) and, crucially, AI literacy skills (60%) in their workforce. Simultaneously, demand for AI-savvy talent is exploding – LinkedIn hiring for AI technical roles has quadrupled in eight years, and a striking 66% of leaders now say they wouldn’t hire someone lacking AI literacy. This isn’t just a call for more coders; it’s a demand for a blend of AI tech know-how, broad AI literacy (like effectively using everyday tools), and those vital “people skills”. Critically for HE, the report also shows GAI’s impact will be uneven, with women, younger workers, and those with undergraduate degrees in roles most exposed to these shifts. That’s not just a challenge; it’s an immediate curriculum and support system redesign imperative for every university.

AI This Week: MSFT’s Power Play, Claude’s Return & The ‘iPhone of AI’? | Quick Takes ⚔

This week in AI was relentless so pls forgive the following potpourri – just too much to cover. Microsoft’s Build event saw them flex hard with “agentic AI,” access to xAI’s Grok on Azure, and then they just open-sourced GitHub Copilot (Fireship breakdown here) – a move that reportedly spurred OpenAI into a rapid preemptive response with their own advanced coding model, Codex. Satya Nadella and co. are clearly pushing the pace on powerful coding agents.

Speaking of coding champs, Anthropic just dropped a new Claude model, its first full number upgrade in a year, and the word is it’s a beast. Speaking as a long-time Claude fanboy (and despite my recent straying to Gemini 2.5 Pro – what can I say, it’s amazing!), I’m definitely diving back in to check it out – and I’d say you should too.

Lastly, Sam Altman and Jony Ive (the design genius behind most of Apple’s greatest hits – the iPhone, iPad, Apple Watch and more) are getting married! Well, maybe not, but that’s a hell of a launch photo for what is effectively a $6.5bn acqui-hire.

Jony x Sam = a power combo to watch…

Their plan is to build “a new family of products” – potentially screenless, embodied AI devices that go beyond our current computers and phones. This isn’t just a collab – it’s a deep fusion of top-tier AI and hardware design to create products that “inspire, empower and enable”. For HE, this signals a future where AI isn’t just on a screen but could be an ambient, even physical, presence in learning. The “AI iPhone moment” might be closer than we think…

Google’s AI Strikes Back: I/O Blitz & The AlphaEvolve Gambit | AI Overload 🤯

Never forget that Google is the original architect of the Transformer model that underpins today’s AI explosion, and that its DeepMind arm gave us early game-changers like AlphaGo and AlphaFold. After some notable shockers that now feel like ancient history – a premature Bard launch that wiped roughly $100bn off Alphabet’s market value, and overly-inclusive images of Nazis (and no, that’s not a typo – look it up) – this year’s Google I/O was a definitive “Empire Strikes Back” moment (Google’s own overview here and the Fireship breakdown here). We’re talking an absolute firehose: AI embedded in Chrome for contextual understanding, “AI Mode” in Search personalising results via Gmail, “Search Live” interacting with the physical world via your camera, plus advanced creation suites like Imagen 4, Veo 3, and “Flows” for AI filmmaking, all alongside a push into AI-powered glasses with Android XR. This isn’t just a trickle of new tools; it’s AI becoming ambient and deeply integrated, fundamentally reshaping the daily digital landscape for students and faculty.

Yet, as Carlo Iacono of Charles Sturt University points out, while we’re still trying to digest this flood of AI tools, Google DeepMind’s AlphaEvolve is already rewriting the rules of discovery itself. This “evolutionary coding agent” isn’t just assisting; it’s independently smashing 50-year-old math records and, crucially, recursively building better AI. Iacono’s gut punch: “While most of us fiddle with chatbots and argue about authenticity, Google DeepMind has developed an AI that writes its own algorithms and improves itself”. This dual reality – AI tools flooding daily life while other AIs independently advance knowledge – demands an urgent reckoning with the very value our universities offer.

The Enforcement Illusion: What’s That Uni Degree Really Worth? | AI & Integrity ⚖️

Universities are currently wrestling with AI in assessments, but a damning paper by Thomas Corbin, Phil Dawson, and Danny Liu (“Talk is cheap: why structural assessment changes are needed for a time of GenAI”) argues many are just shadowboxing. Their critique? Those “traffic light” systems and AI use declarations are mere “discursive changes,” creating an “enforcement illusion” because we can’t reliably detect AI use anyway. The authors propose a radical shift: “structural changes” to assessment – fundamentally redesigning tasks around supervised work, live vivas, or interconnected projects – making AI misuse impossible by design, rather than just wagging fingers with unenforceable rules. The paper’s blunt message: stop trying to regulate AI use and start redesigning how learning is verified.

This isn’t just academic theory; it’s hitting the real world hard. Take the Northeastern University student who demanded an $8,000 tuition refund after discovering her professor used AI tools like ChatGPT for course materials while banning students from the same tech. This isn’t just an “AI problem”; it’s AI ripping the lid off deeper issues of double standards, communication black holes, and a catastrophic erosion of trust within our institutions. If our assessment methods are compromised, as Corbin et al. argue, and the fundamental trust between educator and student is fracturing, as the Northeastern incident shows, then what exactly is the unshakeable, AI-resistant value proposition of that expensive university degree today? What are we truly selling if not verifiable learning and a trusted educational journey?

AI’s Silver Tongue: Out-Argued & Out-Lied by a Bot? | Critical Thinking Crisis 😟

In October 2023, Sam Altman (OpenAI CEO) mused on Twitter/X that he expected AI “to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes”. Putting aside the last part of that for the moment, it sounds like we might be there with the superhuman persuasion. A recent study (LLMs Are More Persuasive Than Incentivised Human Persuaders) found that an AI was significantly more persuasive than actual humans with money on the line. It gets weirder: the AI (Claude 3.5, if you’re keeping score) wasn’t just better at steering people towards correct answers, it was also a more successful bullshitter, nudging folks towards incorrect answers more often than human persuaders did. The researchers suggest this might be because AIs aren’t bogged down by human problems like cognitive fatigue or social hesitation, have massive information banks to draw from, and can build complex, highly structured arguments on the fly – basically just sounding smarter than us poor monkeys.

So why does this matter for HE? Well, it sounds like even motivated humans are bringing knives to a gunfight. These AIs were capable of putting a big old thumb on the scale either way – for good or ill. And the extra bonus twist: participants even reported higher confidence when interacting with the AI, even as they were being led astray. So let’s go back to the AI embedded in Chrome, give it an agenda, and make the user a student researching for an assignment, engaging in debate, or just trying to navigate online information. How do we equip them to critically evaluate arguments that are not only sophisticated and fluent but could also be subtly (or overtly) deceptive, especially when those arguments might feel more convincing than human ones? This paper underscores, in big, bold letters, the urgent need for advanced AI literacy, critical thinking, and media literacy education to be front and centre in every curriculum.

PS. Think this is nonsense? Please try your hand at Hume (an “empathetic” AI that understands and generates human emotion) and Miles or Maya from Sesame (an absurdly fast voice-based AI interface) – combine those with the realisation that Claude 4 is two generations on from the AI in the above study – and then ask yourself how long it might be before all of these are combined in one neat package… 6 months? A year? #wildtimes


So, after that whirlwind tour – from AI out-arguing humans to assessment frameworks that feel like illusions, and Google unleashing both a daily AI tool deluge and AI that literally builds itself – the massive question hangs in the air: Is HE genuinely confronting this new reality, or are we just tinkering around the edges? Are we truly preparing students (and ourselves!) for a future where AI doesn’t just assist, but discovers and even self-evolves?

Honestly, are we asking the right questions yet? I’d love to hear your take – drop your thoughts, frustrations, and any glimmers of hope in the comments below.

Want to dive even deeper? On the latest Adjunct Intelligence podcast, Dale Leszczynski and I tackle the “enforcement illusion” in AI assessment policies, Google’s mind-bending AI onslaught (including AlphaEvolve), and Dale demystifies AI Agents beyond the hype. Listen to Episode 3 here on Apple Podcasts, Spotify, or YouTube.

And if this overview helped frame the urgency of what we’re all facing, please share it with your networks. The more we foster an informed, sector-wide conversation, the better equipped we’ll be.
