AI and the Future of HE – 18th November 2024

Hi

Hope you had wonderful weekends one and all. Hanoi continues to smash weather records with sun, sun, and more sun so I spent most of mine alternating between i) riding the bike along rivers and around lakes and ii) chilling on our new rooftop hammock #lovingit

Anyway, enough of that – a few notes from the world of AI to kickstart your Mondays:

🤖 Agentic AI Goes Mainstream: Game-Changing or Game-Over?

Boom! The AI world just hit another watershed moment – OpenAI, Anthropic, and Google are all racing to launch autonomous AI agents that can actually control computers and get stuff done. OpenAI’s “Operator” is set to drop in January, hot on the heels of Anthropic’s Claude (which can now use computers like a human 🤯) and Google’s upcoming December release.

This isn’t just another “bigger model” play – we’re talking AI that can navigate interfaces, browse the web, make purchases, analyse data, and basically execute any computer task you throw at it. McKinsey’s prediction that 30% of work hours could be automated by 2030? This is perhaps the beginning of that. After all, when you can tell an AI “research and book my next business trip” or “analyse last quarter’s sales data and prep a board presentation,” and it actually does it… that’s both fantastic and slightly terrifying. For organisations, it’s potentially game-changing productivity. For workers? Well, that’s the trillion-dollar question. Because any computer task that can be automated… probably will be 🎯
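Under the hood, these agents all run some variant of the same loop: the model proposes an action, the runtime executes it as a tool call, and the result gets fed back until the model declares the job done. Here’s a toy sketch of that loop – the “model” is a fixed script and the tools are fakes, so every name and value below is illustrative, not any vendor’s actual API:

```python
# Toy sketch of an agentic loop: the "model" picks a tool, the runtime runs
# it, and the result is fed back until the model says it is done.
# The scripted model and fake tools below are purely illustrative.

def mock_model(goal, history):
    """Stand-in for an LLM: follows a fixed script instead of reasoning."""
    if not history:
        return {"tool": "search_flights", "args": {"route": "HAN-SYD"}}
    if history[-1]["tool"] == "search_flights":
        return {"tool": "book", "args": {"flight": history[-1]["result"][0]}}
    return {"tool": "done", "args": {}}

TOOLS = {
    "search_flights": lambda route: [f"{route}-0715", f"{route}-1930"],
    "book": lambda flight: f"booked {flight}",
}

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):
        action = mock_model(goal, history)
        if action["tool"] == "done":
            break
        result = TOOLS[action["tool"]](**action["args"])
        history.append({"tool": action["tool"], "result": result})
    return history

steps = run_agent("book my next business trip")
print(steps[-1]["result"])  # prints "booked HAN-SYD-0715"
```

The unsettling part is that last `for` loop: swap the scripted model for a real one and the tool lambdas for a browser and a payment API, and “research and book my trip” stops being hypothetical.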

🧠 GPT-4’s Still King: AI Labs Hit the Scaling Wall

Here’s a thought-provoking reality check: GPT-4 is approaching its second birthday, and despite massive efforts, no one’s managed to clearly surpass it. The reason? AI labs are hitting hard limits – training runs costing tens of millions are failing, they’re running out of quality training data, and power consumption is becoming unsustainable. But rather than a cause for concern, this plateau might signal something fascinating 🤔

The shift comes as Ilya Sutskever (the scaling guru behind ChatGPT, now running Safe Superintelligence Inc.) declares we’re leaving the “age of scaling” for an “age of wonder and discovery.” OpenAI’s latest model (known variously as o1, Q*, or Strawberry) suggests a new direction: “test-time compute” – giving models more time to reason during use rather than just making them bigger during training. When 20 seconds of “thinking” matches what would have taken 100,000x more training compute, maybe we should be relieved the future isn’t just about building ever-larger models 📊
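You can see why test-time compute works with a deliberately silly example: take a weak solver, let it attempt the same problem many times, and majority-vote the answers. More attempts at inference time means a more reliable result, without touching the solver itself. The noisy arithmetic function below stands in for a weak model (it is our own toy, not how o1 actually works):

```python
# Toy illustration of "test-time compute": instead of training a bigger
# model, give a weak solver more attempts at inference and majority-vote.
# The noisy arithmetic solver is a stand-in for a weak LLM.

import random
from collections import Counter

def noisy_solver(a, b, rng):
    """Returns a+b, but is wrong 40% of the time."""
    return a + b if rng.random() < 0.6 else a + b + rng.choice([-2, -1, 1, 2])

def answer(a, b, samples, rng):
    """More samples = more inference-time compute = more reliable answer."""
    votes = Counter(noisy_solver(a, b, rng) for _ in range(samples))
    return votes.most_common(1)[0][0]

rng = random.Random(0)
for samples in (1, 5, 25):
    correct = sum(answer(7, 8, samples, rng) == 15 for _ in range(200))
    print(f"{samples:>2} samples -> {correct / 200:.0%} correct")
```

Run it and accuracy climbs steadily with the sample count – the same solver, just given more time to “think.” That’s the scaling-wall escape hatch in miniature.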

🔧 Anthropic Drops Major Prompt Engineering Upgrade

Anthropic just rolled out a game-changing set of tools in their developer console that could make prompt engineering way less painful. The headline feature? An AI-powered prompt improver that automatically beefs up your prompts with chain-of-thought reasoning, standardised examples, and structured outputs. Early testing shows some serious gains – Claude that actually hits your target word count!?! 😍

Bonus points: you can now manage and generate examples right in the workbench, plus test against ideal outputs to benchmark performance. For developers migrating to Claude or just trying to level up their prompt game, this is huge. Time to say goodbye to prompt engineering headaches? 🤔
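Mechanically, “improving” a prompt mostly means wrapping a bare instruction in the scaffolding the improver adds for you: a chain-of-thought nudge, standardised examples, and a structured output request. Here’s a rough sketch of that transformation – the template and function are our own illustration, not Anthropic’s actual implementation:

```python
# Sketch of what a prompt "improver" does mechanically: wrap a bare task in
# chain-of-thought guidance, standardised examples, and a structured output
# request. This template is illustrative, not Anthropic's own.

def improve_prompt(task, examples, output_schema):
    example_text = "\n\n".join(
        f"<example>\nInput: {ex['input']}\nOutput: {ex['output']}\n</example>"
        for ex in examples
    )
    return (
        f"{task}\n\n"
        "Think step by step inside <thinking> tags before answering.\n\n"
        f"Follow the format of these examples:\n{example_text}\n\n"
        f"Return your final answer as JSON matching: {output_schema}"
    )

prompt = improve_prompt(
    task="Summarise the article in exactly 50 words.",
    examples=[{"input": "Long article...", "output": '{"summary": "..."}'}],
    output_schema='{"summary": "<50-word string>"}',
)
print(prompt)
```

Nothing magic, in other words – but having a model generate and maintain that scaffolding for you, right next to a benchmark harness, is exactly the tedium the new console tools take off your plate.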

🎯 Dylan Wiliam: Rethinking Education in the AI Era

It’s been a minute since we last had a Monash Generative AI chat but they’ve come through with an interesting one. UCL’s Dylan Wiliam cuts through the AI hype with some sobering data: AI automation now threatens nearly 60% of work requiring advanced degrees, challenging our assumptions about education’s protective value. More critically, he identifies a fundamental misalignment that AI exposes in our education systems – our product-based assessment regimes mean we’re heavily invested in improving the work/product rather than improving the learner.

He proposes three new core educational missions: personal empowerment, meaningful knowledge transfer, and democratic citizenship preparation. This isn’t just about changing assessment methods (though he’s a strong +1 for more oral exams and controlled conditions) – it’s about reconsidering what we value in education, especially as AI detection becomes increasingly futile and automated assessment grows more sophisticated 🎯

📚 The Great Educational Unraveling: Beyond Knowledge to Knowing

Trust Carlo Iacono to ask the questions that shake our foundations. His latest truth bomb about AI and education is characteristically disruptive: we’re not just changing our tools – we’re being changed by them. As AI makes instant, sophisticated analysis available to anyone, the role of educators must shift from knowledge providers to what he calls “cognitive choreographers.” The real value isn’t in knowing stuff anymore; it’s in knowing how to orchestrate different forms of intelligence – human and artificial – into deeper understanding 🤔

And here’s the crux of it: this isn’t just about updating teaching methods. When AI can generate plausible answers to almost anything, certainty itself becomes a liability. The future belongs to those who can navigate uncertainty, question assumptions, and remain perpetually open to new understanding. Feels unsettling? That’s exactly the point. We’re not just witnessing this transformation – we’re part of it 🌊


Connect these dots and a clear picture emerges: AI is entering its next phase. It’s not about building bigger models anymore – it’s about building better ones. Ones that can think longer, act autonomously, and work alongside humans in more meaningful ways. But this shift brings both promise and challenge. As Dylan Wiliam and Carlo Iacono remind us, we’re not just retooling our systems – we’re rethinking what it means to know, to work, and to learn in an AI-augmented world. Buckle up – this next chapter’s going to be interesting! 🚀✨
