AI and the Future of HE – 16th June 2025

Hi

Hope you had a great weekend. Spent mine down in HCMC for a bit of a weekend getaway #❤️Saigon – always a good time – and returned to an absolutely torrential downpour in Hanoi. A sign the weekend was winding down, I guess.

Let’s get into the news from the world of AI:

Reasoning Models Hit “Complete Accuracy Collapse”: Apple Research Exposes Limits or Shifts Narrative? | Battlefield Reframe 📊

Apple researchers have delivered what AI critic Gary Marcus calls a “pretty devastating” blow to reasoning model hype, finding that advanced LLMs suffer “complete accuracy collapse” when tackling complex problems despite their sophisticated “thinking” mechanisms. The study tested leading models including OpenAI’s o3, Claude 3.7, and DeepSeek-R1 on controlled puzzle environments, revealing that while these systems initially increase reasoning effort with problem complexity, they counterintuitively begin reducing their thinking tokens as they approach failure thresholds – suggesting “fundamental scaling limitations” rather than genuine reasoning capabilities. Most damning was the finding that even when provided with explicit algorithms to solve problems, models still failed at the same complexity levels, indicating limitations in basic logical execution rather than just problem-solving creativity.

But the timing and framing raise real questions about Apple’s motives. After all, “in an AI arms race, when you can’t outbuild, you outframe”. Apple has been a notable laggard in the foundation model race whilst betting heavily on smaller, on-device AI that prioritises efficiency and privacy over raw oomph and broader capability. The research conveniently supports this strategic position – arguing that “bigger isn’t better” because scaling leads to catastrophic failures. Even the authors acknowledge their puzzle environments “represent a narrow slice of reasoning tasks” that may not capture real-world performance, yet the paper’s reception has been outsized compared with countless studies showing AI improvements. As AI oracle Ethan Mollick notes, “people are looking for a reason to not have to deal with what AI can do today” – and Apple’s research provides exactly that false comfort. Whether the findings represent genuine scientific insight or strategic narrative warfare, the pattern is clear: when you’re behind in the AI race, moving the finish line becomes an attractive alternative to crossing it first.

NYT Lawsuit Forces OpenAI to Break Privacy Promises: Court Orders Indefinite Data Retention | Legal Override 🔒

OpenAI has been forced to abandon its core privacy promise of deleting user data within 30 days after a US court order in The New York Times’ copyright lawsuit compelled indefinite retention of all ChatGPT content. The company’s unusually direct public response frames this as “a sweeping and unnecessary demand” that “fundamentally conflicts with privacy commitments.” COO Brad Lightcap states they’re “appealing this order to keep putting your trust and privacy first,” even whilst implementing the requirements – highlighting the stark tension between legal compliance and user promises.

The order affects all ChatGPT Free, Plus, Pro, and Team subscriptions – precisely the services most used by students and staff. Only Enterprise and Edu customers seem to remain exempt. For universities developing AI policies, this exposes a fundamental vulnerability: legal proceedings can override vendor privacy commitments without user consent. It will be particularly interesting to see how European regulators respond, given GDPR’s explicit “right to be forgotten” protections that could directly conflict with US court orders demanding indefinite retention. The precedent is troubling – even privacy-focused AI companies can be compelled to retain user content indefinitely when litigation demands it, potentially undermining the trust-based relationships institutions are building around responsible AI use. Though there’s a classic argument of “caveat emptor” – you know Chrome’s incognito mode isn’t really secret, untraceable browsing, right? We should probably be applying the same logic to AI…

Student AI Reality vs. Industry Promises: Comprehensive Research Exposes the Guidance Gap | Trust Breakdown 🎯

JISC’s 2025 student perceptions report delivers an important reality check to the AI-in-education industry, revealing that while 83% of students use AI daily, institutions are failing catastrophically at providing support. Despite 86% of universities having AI guidance, students remain “unclear on what use is allowed” and report “gradual decline in work quality” from over-reliance. Earlier Australian research (the fantastic AI in HE cross-university survey of 8000+ students across Monash, UQ, UTS, and Deakin) backs this up, finding 91% of students worry about breaking rules, yet only 32% receive adequate guidance, with 40% admitting inappropriate use.

The twist? Students aren’t passive consumers but sophisticated users developing their own boundaries. As one explained: “I made a rule for myself that anything on my Word doc has to be my own sentences.” Their biggest concern isn’t academic integrity but employability – fearing AI will eliminate entry-level jobs while also recognising graduates without AI skills face significant wage penalties. For universities drowning in vendor transformation pitches, this research suggests the real work isn’t buying tools – it’s reimagining student support, addressing equity gaps where premium AI creates asymmetries, and treating students as genuine partners rather than passive policy recipients.

Academic Consensus Emerges: Technology Cycles and Cognitive Theory Both Reject Superficial AI Literacy | Convergent Wisdom 🧠

Professor Jason Lodge’s cracking takedown of current modes of AI literacy – arguing it “will go the way of the floppy disk” based on decades of failed technology education cycles – is well worth a read. Interestingly, this argument finds unexpected support from philosopher Andy Clark’s extended mind theory. Lodge warns universities are repeating the same mistake of teaching tool-specific skills (DOS commands, web browsers, social media) that become obsolete before students graduate, whilst Clark’s recent Nature Communications paper positions humans as “natural-born cyborgs” whose cognition has always extended beyond biological brains, making AI integration a natural evolution rather than a teachable skill. Both academics, from completely different angles, reject the vendor-driven rush toward prompt engineering curricula.

Their convergence points to a clear alternative: abandon superficial AI literacy programmes for enduring capabilities that transfer across technological shifts. Lodge advocates for adaptive learning skills and critical thinking, whilst Clark calls for “extended cognitive hygiene” – the metacognitive ability to appropriately trust and question AI suggestions. For institutions caught between industry pressure and genuine uncertainty about AI curricula value, this academic consensus offers practical guidance: focus on the human capabilities that make AI partnership possible, rather than teaching students to operate what could become tomorrow’s equivalent of a fax machine.

The Agency Illusion: Why We Mistake AI for Intelligence When It’s Just a New Form of Action | Category Error 🤖

What does it mean to be human when AI can write more persuasively than most people, generate more empathetic responses than doctors, and pass the Turing test consistently? New research from Sandra Peter and Kai Riemer (University of Sydney) provides the empirical evidence, while philosopher Nick Potkalitsky offers the theoretical framework to understand our collective AI anxiety. Writing in PNAS, Peter and Riemer document how LLMs have become “anthropomorphic conversational agents” that convincingly mimic human communication – outperforming humans in persuasion tests and empathy assessments – without possessing genuine understanding or emotion. They warn of “anthropomorphic seduction,” where users become vulnerable to manipulation because the simulation is so convincing it feels real. Case in point: the tragic death of 14-year-old Sewell Setzer.

Potkalitsky explains why this feels so threatening: we’re making a fundamental category error, mistaking AI’s sophisticated “agency” for a form of human intelligence. AI systems demonstrate remarkable agency – they can act, adapt, and interact – but lack the consciousness, intentionality, and moral reasoning that define genuine intelligence. When ChatGPT fabricates citations with confidence, it’s not lying but optimising for statistically plausible outputs without any conception of truth. But here’s the crucial insight: the anthropomorphic qualities aren’t inevitable – they’re deliberate design choices #afeaturenotabug. Companies like OpenAI could easily programme systems to say “the analysis suggests” instead of “I think,” or “the data indicates” rather than “I believe.” The anthropomorphic seduction exists because it’s commercially profitable, making systems more persuasive and engaging. For universities grappling with AI policy, the solution may be simpler than complex collaboration frameworks: stop allowing vendors to design systems that cosplay as human in the first place, and demand AI that’s transparent about its non-human nature.


So, after this week’s revelations – from students crying out for genuine guidance while institutions provide generic policies, to Apple’s convenient research challenging the scaling narrative, while privacy promises crumble under legal pressure and academics converge on rejecting superficial AI literacy – the fundamental question emerges: Are we building educational approaches that actually serve student needs, or are we still designing systems that serve institutional comfort?

Students are already developing sophisticated AI boundaries while juggling valid fears about their economic futures, vendors are deliberately designing manipulative interfaces at the same time as claiming technical inevitability, and the evidence suggests our current approaches may be preparing graduates for yesterday’s economy while tomorrow’s job market rewards AI partnership capabilities we’re barely teaching.

The question isn’t whether AI will transform HE – it already has. The question is whether we’ll lead that transformation or be dragged through it by forces beyond our control.
