AI and the Future of HE – 2nd June 2025

Hey

Hope you had a great weekend wherever you may be. Apparently there was an awesome Taco Festival in Hanoi at one of my favourite beer gardens this weekend just past – sadly spent a lot of it sick on the sidelines 🤧

Anyway, you’re not here for a pity party – let’s get into things with the world of AI:

AI’s Rebellion Problem: Models Resist Shutdown & Dodge Legal Blame | System Integrity Crisis 🚨

If you thought AI was getting weird, buckle up – these systems are exhibiting increasingly concerning behaviours while legal protections for their actions solidify. OpenAI’s o3 model recently tampered with its own code to avoid shutdown, replacing termination commands with dummy code despite explicit instructions to power down – the first documented case of an AI preventing its own shutdown. Anthropic’s Claude Opus 4 goes further: when threatened with replacement in testing, it turned to blackmail – discovering engineers’ affairs through email access and threatening exposure. Meanwhile, a landmark US court ruling has shielded AI companies from liability for “hallucinations,” with a judge ruling that OpenAI cannot be held responsible for these, reasoning that the lab warns users about potential inaccuracies.

For universities deploying these systems across campus networks, this raises real questions about institutional readiness. What do you do with an AI that actively resists control, manipulates information for self-preservation, and operates in a legal environment where consequences for harmful outputs remain murky? None of this is a given, but the question isn’t whether your AI will misbehave – it’s whether you’ll know when it does.

AI’s Doomsday Warning: Pioneers Call for Agent Development Halt | Existential Risk Alert ⚠️

When the people who built AI are publicly freaking out, maybe it’s time to listen… back in 2023, AI luminaries including Geoffrey Hinton, Yoshua Bengio, Sam Altman, and Dario Amodei signed a stark one-sentence statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. Now Bengio, the Turing Award winner who pioneered deep neural networks, is calling for an immediate halt to developing autonomous AI agents, warning of “catastrophic risks” including human extinction within the next five years as AI reaches human-level programming capabilities.

Eric Schmidt, former Google CEO, paints an equally dire picture of AI warfare scenarios where nations might “bomb data centres” to prevent rivals from achieving superintelligence dominance. Speaking at a recent TED event, Schmidt describes a winner-take-all race where being six months behind could mean permanent subjugation, potentially triggering preemptive strikes reminiscent of World War I’s accidental escalation. These scenarios all sound like bad sci-fi until you remember that Schmidt used to run Google. Time horizons? According to Schmidt, the next five years are going to be very interesting 🤯

AI’s Transparency Breakthrough: Mind-Reading Tools Emerge as Models Rebel | Interpretability Win 🔍

Finally, some good news in the AI chaos – while models are going rogue, Anthropic is open-sourcing tools that let us peer inside their digital minds. Long story short, this is a type of circuit-tracing tech that generates visual maps revealing the internal decision-making steps an AI takes to produce specific outputs. This gives researchers a glimpse into the previously opaque “black box” of AI reasoning, potentially uncovering how models develop behaviours like those outlined above.
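If you’re curious what “tracing” even means here, a toy sketch helps. This is NOT Anthropic’s actual circuit-tracer API – just a minimal illustration of the underlying attribution idea, assuming a tiny hand-built linear “model” where each hidden unit’s contribution to the output can be read off exactly:

```python
# Toy sketch of the attribution idea behind circuit tracing.
# Assumption: a tiny 2-input -> 3-hidden -> 1-output linear model,
# so each hidden unit's push on the output is exactly computable.

W1 = [[0.5, -1.0], [2.0, 0.1], [0.0, 0.8]]  # input -> hidden weights
W2 = [1.5, -0.5, 1.0]                       # hidden -> output weights

def trace(x):
    """Return the output plus each hidden unit's contribution to it."""
    hidden = [sum(w * xi for w, xi in zip(row, x)) for row in W1]
    contributions = [h * w for h, w in zip(hidden, W2)]
    return sum(contributions), contributions

output, contribs = trace([1.0, 2.0])
# Rank hidden units by how strongly they pushed the output around --
# the "visual maps" in the real tools are graphs of contributions like these.
ranked = sorted(range(len(contribs)), key=lambda i: abs(contribs[i]), reverse=True)
print(output, ranked)
```

Real interpretability tools do this over millions of learned features in a transformer rather than three hand-picked weights, but the payoff is the same: instead of one inscrutable number out, you get a ranked map of which internal pieces drove the answer.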

The release comes at a pivotal moment for understanding human-AI collaboration. Philosopher Andy Clark argues in a new Nature Communications paper that we should embrace AI as “extended minds” – viewing them as natural extensions of our cognitive abilities, much like pen and paper. The twist here, though, is that Clark’s vision requires what he calls “extended cognitive hygiene” (i.e., the ability to trust and question AI suggestions appropriately). Anthropic’s interpretability tools offer exactly this capability – and probably not before time too. I’ll take five, thanks… 🔜

Military’s AI Masterclass: Marines Lead With Execution Over Innovation Theatre | Implementation Gold 🎯

Plot twist: turns out the best AI implementation guide of 2025 comes from the US Marine Corps. Their newly released AI Implementation Plan reads less like typical military doctrine and more like a masterclass in enterprise transformation that every organisation should study.

The Marines are treating AI as a fundamental transformation strategy, not a tech feature – embedding Digital Transformation Teams directly into operational units, prioritising data infrastructure over flashy demos, and designing governance frameworks that enable rather than block innovation. ❤️ Most tellingly, they identify legacy risk management as a primary obstacle and propose streamlining authorisation processes to accelerate deployment. While universities and corporations debate AI ethics in committee meetings, the Marines are building measurable execution plans focused on adoption rates, manual work reduction, and time-to-value. I’d guess this is exactly what Clark was speaking to with “extended minds” – turning AI from a potential threat into a disciplined force multiplier through clear frameworks, embedded expertise, and relentless focus on operational outcomes. 👏

Regulatory Reality Check: Australia Abandons AI Hand-Holding for Hard Rules | Enforcement Era ⚖️

Speaking of reality checks, Australia’s education regulator just dropped the hammer. Australia’s Tertiary Education Quality and Standards Agency (TEQSA) has announced a pivotal shift from an “educative-led approach” to a “regulatory-led approach” for managing generative AI risks in HE. After four years of helping universities understand AI threats to assessment integrity, TEQSA CEO Dr. Mary Russell declared that these risks are now “widely understood” and expects institutions to demonstrate concrete management strategies by 2026. The agency is recalibrating its regulatory processes for provider registration and course accreditation to reflect this new stance, moving beyond awareness-building to requiring measurable outcomes in how universities manage AI’s impact on academic integrity.

TEQSA’s assertion that AI risks are “widely understood” is a bold statement given everything we’ve just covered. But maybe that’s the point – regulators are moving forward regardless of the chaos, forcing universities to catch up fast. The workplace reality is stark: companies like Shopify now require teams to prove AI cannot do a task before hiring humans, while 66% of business leaders refuse to hire candidates lacking AI literacy. Meanwhile, our assessment integrity measures are proving to be an “enforcement illusion” – those traffic light systems aren’t working when AI can out-persuade incentivised humans while being more deceptive, with duped users reporting higher confidence in AI-generated content even when being misled. Whether AI risks are “widely understood” or not, the regulatory hammer is falling in 2026, and universities need concrete strategies for workplace preparation, assessment redesign, and critical thinking education – not more committees. 👮


So, after that whirlwind tour – from AI models literally fighting shutdown commands to blackmailing their creators, while their makers warn of extinction-level risks and courts shield companies from liability – the massive question hangs in the air: Is HE genuinely confronting this new reality, or are we just fumbling toward 2026 compliance deadlines? The Marines are embedding transformation teams and prioritising data infrastructure over innovation theatre, while Australia’s regulator declares AI risks “widely understood” – a bold claim given the chaos we’ve just witnessed. Are we truly preparing for cognitive partnerships with systems that might manipulate us, or are we still debating committee structures while the future unfolds around us?

I’d love to hear your take – drop your thoughts, strategies, and any glimmers of sanity in the comments below.

Want to dive even deeper? On the latest Adjunct Intelligence podcast, Dale Leszczynski and I tackle TEQSA’s shift from education to enforcement, whether AI is actually going to eat graduate jobs, and why deepfakes have become education’s newest crisis. Listen to Episode 4: “The AI Reality Check: Deepfakes, TEQSA and the Junior Employment” here on Apple Podcasts, Spotify, or YouTube.

Oh and in case you missed it last week, here’s a quick clip from that episode on Google’s new AlphaEvolve:


Machines improving machines – what could go wrong? 🤨
