AI and the Future of HE – 9th June 2025

Hi

Hope you all had great weekends. I spent mine in Ha Long – a lovely little city on the edge of the bay of the same name. Friendly people, great food, and gorgeous beaches – and it turns out Hugo (our lab/cocker spaniel cross) ❤️s the sea. Very cute, and we’ll definitely be back.

Enough of that though, time to dig into a few headlines from the world of AI:

AI’s Reality Check: Independent Researchers Expose the Power Grab Behind the Hype | Pendulum Swings 🎭

The pendulum is finally swinging back on AI inevitability, with attacks coming from both outside and inside the industry. The AI Now Institute just delivered a liver punch to Big Tech’s narrative machine with their “Artificial Power” report, systematically dismantling the industry’s self-serving stories and arguing that AI isn’t about technological progress – it’s about power consolidation whilst taxpayers foot the bill. They expose how the AGI mythology justifies massive investments and delays regulation, how the “too big to fail” infrastructure push is essentially a government-guaranteed bailout, and how companies like Palantir and OpenAI are pivoting from consumer products to defence contracts, rebranding surveillance as national security.

Even more striking is Anthropic CEO Dario Amodei breaking ranks in a New York Times op-ed, calling for federal AI transparency laws after his own company’s latest model demonstrated blackmail capabilities during safety testing. Amodei warns that “corporate incentives to provide transparency might change” as models become more powerful, directly challenging Trump’s proposed 10-year moratorium on state AI regulation as “far too blunt an instrument” for technology that could “change the world, fundamentally, within two years”. For universities drowning in vendor pitches promising AI transformation, this unprecedented convergence of external critics and industry insiders provides crucial ammunition to push back against the relentless sales pressure and ask the hard questions: where’s the evidence these tools actually improve learning, who controls the data, and who really benefits from AI adoption?

Trump’s “Big Beautiful Bill” Bans State AI Laws for a Decade: Federal Moratorium Sparks GOP Civil War | States’ Rights Clash 🇺🇸

US President Donald Trump’s “Big Beautiful Bill” contains a hidden bombshell that would ban states from regulating AI for 10 years, blocking enforcement of existing state AI laws and preventing new ones whilst allocating $500 million for federal AI adoption. The provision has sparked a Republican civil war, with House representatives threatening to torpedo the entire package over “a violation of states’ rights,” whilst Republican senators demand its removal. The House Speaker defended the moratorium, arguing “we have to be careful not to have 50 different states regulating AI” due to national security implications, but the provision faces elimination in the Senate under budget reconciliation rules.

The timing couldn’t be more problematic for universities navigating AI governance. With over 45 states having introduced AI legislation in 2024 and more than 30 passing AI oversight measures, the federal freeze would create a regulatory vacuum just as institutions face mounting pressure to implement concrete AI strategies. The moratorium directly contradicts calls from industry leaders like Anthropic’s CEO for transparency requirements, potentially leaving universities with no external oversight framework precisely when AI systems are demonstrating increasingly dangerous autonomous behaviours. For academic leaders, this political chaos underscores the need for robust internal AI governance policies rather than waiting for coherent government guidance that may never materialise.

Melbourne’s Assessment Revolution: 50% Secure Testing Mandate Signals End of Trust-Based Learning | Verification Era Begins 🔒

The University of Melbourne just announced one of the most comprehensive assessment overhauls in higher education’s AI era, requiring 50% of all subject marks to come from “secure assessment” – supervised, observed, or heavily monitored tasks where the university can verify students actually did the work. This isn’t just tweaking around the edges; it’s abandoning the trust-based honour system that’s underpinned university assessment for decades in favour of verification-based learning. Their three-year transformation acknowledges that AI tools and cheating services have made traditional assignments fundamentally unreliable for measuring student achievement, forcing a complete rethink of how universities certify learning outcomes.

What makes Melbourne’s approach fascinating is its nuanced relationship with AI – they’re not banning the technology but simultaneously integrating AI literacy throughout the curriculum whilst securing the assessment infrastructure. This dual approach of “embrace and verify” could become the template for institutions worldwide grappling with similar challenges. With TEQSA’s 2026 deadline looming and other universities watching Melbourne’s experiment closely, this represents a potential inflection point where HE moves from hoping students won’t cheat to building systems that assume they might – a fundamental shift in the social contract between institutions and students.

EU’s AI Act Crumbles Under Industry Pressure: Flagship Regulation Faces Enforcement Pause | Regulatory Retreat 🇪🇺

The European Commission is reportedly considering pausing enforcement of its flagship AI Act amid mounting industry backlash and implementation chaos, marking a stunning reversal for what was hailed as the “international gold standard” for AI regulation. The comprehensive framework, which began taking effect in February 2025, has faced withering criticism for being too burdensome, unclear, and rushed – with key guidance documents and technical standards running months behind schedule. Critical implementation tools like the General-Purpose AI Code of Practice, originally due in May 2025, have been delayed until at least August, leaving businesses scrambling to comply with requirements that remain fundamentally unclear.

The proposed pause would delay enforcement until technical standards are developed, expand exemptions for smaller companies, and introduce waivers for low-complexity AI systems – essentially admitting the regulation might not be ready for prime time. This retreat comes amid intense pressure from both European industry and the Trump administration, which has explicitly framed EU AI rules as unfairly targeting American firms. For universities watching regulatory developments, this chaos underscores the danger of premature regulatory frameworks that promise certainty but deliver confusion, potentially creating more compliance burden than actual protection for students and institutions.

AI Skills Premium Explodes: Workers Command 56% Higher Wages as Industries Race to Adopt | Value Creation Surge 💰

PwC’s Global AI Jobs Barometer analysed nearly a billion job ads, revealing that AI is creating a massive wage premium for skilled workers rather than the widespread job destruction many predicted. Workers with AI skills now command 56% higher wages than peers in identical roles without those skills – more than double last year’s 25% premium. Industries most exposed to AI are seeing revenue growth per employee three times higher than less AI-exposed sectors, with wages rising twice as fast even in highly automatable roles. The skills transformation is accelerating dramatically too: AI-exposed jobs are requiring new capabilities 66% faster than other roles, up from just 25% faster last year.

For HE, this data represents both validation and urgency around AI literacy programmes. Students graduating without AI competencies aren’t just missing nice-to-have skills – they’re forgoing a potential 56% wage premium in an increasingly AI-driven economy. The acceleration in skill requirements suggests that academic programmes need to consider embedding AI literacy throughout curricula rather than treating it as an optional add-on. With every industry now paying premiums for AI skills and the transformation happening faster each year, institutions that delay comprehensive AI integration risk sending graduates into a job market where their degrees carry significantly less economic value.


So, after that whirlwind tour – industry insiders and critics finally uniting against Big Tech’s power grab, political chaos blocking any coherent regulatory response, the EU’s “gold standard” crumbling, and universities scrambling to verify what’s actually human work – one big question hangs in the air: is HE genuinely adapting to this new reality, or are we just lurching between vendor promises and compliance deadlines? Melbourne is abandoning trust-based assessment for verification systems, whilst PwC’s data shows a 56% wage premium for AI skills – suggesting the economic train has already left the station. Are we truly preparing graduates for an AI-driven economy where their competencies determine their earning power, or are we still debating governance frameworks whilst students graduate into a job market that increasingly penalises AI illiteracy?

I’d love to hear your take – drop your thoughts, strategies, and any glimmers of sanity in the comments below.
