AI and the Future of HE – 3rd June 2024

Hi

Hope you had excellent weekends! The weather in Hanoi still can’t make up its mind whether it’s full-blown 40°+ summer or not and, while I’m generally okay with that, a little less rain please ⛈️ Anyway, Monday’s here now so let’s kick it off with a few headlines from the world of AI:

Musk vs. LeCun: The AI Ego Clash of the Century

You may have seen recently that Elon Musk has doubled down on his … bold claim that artificial general intelligence (AGI) surpassing human capabilities will arrive as soon as 2025.

This drew a scathing response from AI pioneer and Meta chief AI scientist Yann LeCun, who has been prolific in publishing groundbreaking research and took this position pretty personally. LeCun fired the first salvo, mocking Musk’s fledgling AI startup xAI as a place of empty “next year” AGI promises amid “crazy-ass conspiracy theories,” while Musk clapped back, dismissing LeCun’s volume of recent papers (80+ since 2022, by the way) as “nothing” and suggesting the AI godfather has “gone soft”.

The two titans proceeded to question each other’s understanding of science and trade insults about who’s really innovating versus following orders – culminating in a back-and-forth over what science actually is.

But this feud is more of a sideshow compared to what’s playing out across the industry. From Stability AI’s financial woes and leadership deserting the supposedly “open” AI mission, to Google’s AI nearly poisoning users with glue pizza recipes, to Meta’s typically nefarious practices of opting people into data harvesting by default to fuel its own AI ambitions – the examples of technology getting ahead of ethics are piling up.

These examples are at once hilarious and terrifying when you consider the immense power and potential of AI inevitably falling into the wrong hands…

OpenAI’s Dirty Secrets: Ex-Board Member Spills the Beans

Last year’s brief ousting of OpenAI CEO Sam Altman has taken on new significance after a former board member spoke out about what was happening behind the scenes. Helen Toner, one of four members who voted to fire Altman in November before his swift reinstatement a week later, has revealed a pattern of Altman withholding key information, misrepresenting safety processes, and outright lying to the board over multiple years.

From not informing the board about ChatGPT’s launch to obscuring his personal financial interests, to providing inaccurate details about AI safety protocols, Toner paints a picture of failed self-governance at a company working on transformative technology. Her concerns highlight the critical need for robust external regulation as AI capabilities rapidly advance across fields like finance, criminal justice, defence and beyond. As Toner warns, the risks of increasingly sophisticated AI systems being developed without safeguards are “a pretty wide range of potential harms.”

OpenAI Goes to University: Introducing ChatGPT Edu

As artificial intelligence continues its rapid advance, OpenAI is aiming to make powerful AI assistants more accessible in an arena that could benefit greatly – education. The company has launched ChatGPT Edu, an affordable offering that brings its latest GPT-4o language model and associated AI capabilities like data analysis, multimedia reasoning, and custom model building to universities and colleges.

The goal of ChatGPT Edu is to provide schools with an enterprise-grade AI tutor that can enhance learning experiences through personalised instruction, research assistance, grading support, and even tools for developing custom AI models tailored to specific curricula. Early pilots at major institutions like Columbia, Wharton, and Arizona State have explored innovative use cases from overdose prevention research to capstone project collaboration. OpenAI says it won’t use chat logs, prompts, and other data to train its models. This is a big deal – all the AI companies have been alive to the potential of education data, and yet OpenAI is turning this one down? Will be interesting to see what “robust privacy, security and administrative controls” look like in practice. 🤔

AI Teaching Assistants: Friend or Foe in the Classroom?

Another cracker from Monash’s 10-minute chats on Generative AI. Simon Buckingham Shum, Professor of Learning Informatics at the University of Technology Sydney, discusses a wide range of issues around AI’s role in education. Key messages centre on: 1) Students are asking for guidance on what to offload to AI and when. 2) AI bots should promote awkwardness and question assumptions, not just provide rote answers. 3) Generative AI literacy is contextual and varies across domains.

Over the course of the chat, Buckingham Shum discusses how students are increasingly looking to offload cognitive work to AI writing assistants like ChatGPT, raising concerns about outsourcing too much thinking and understanding. While recognising AI’s potential to enhance learning experiences, he advocates for a “developmental model” for responsible adoption. This means introducing AI assistants incrementally, alongside scaffolding to build critical “Gen AI literacy” skills. The aim is judicious human-AI collaboration, not wholesale automation. Educators need to foster students’ ability to understand the constraints of these tools and retain agency over their learning process.

Oh – and in a related vein: if you like podcasts or NPR-type talks, Google Illuminate might be worth a look.

The strapline “turn academic papers into AI-generated audio discussions” is certainly appealing given the volume of content coming out these days – if I can turn my runs or commutes into reading time, that’d be awesome!

ElevenLabs Turns Up the Volume on AI with Text-to-Sound

I have this video I use a lot when I want to “wow” people about AI – one of Sora (OpenAI’s text-to-video generator) blended with audio from ElevenLabs (an AI voice generator):

When this video was released three months ago, ElevenLabs only did voice but promised sound effects were “coming soon”. Welp, now they’re here – ElevenLabs’ Sound Effects generator is now public, with downloads. While not as impressive as Sora out of the box, when I look at Sora video samples without sound (e.g., here) – as spectacular as they are, I’m inclined to agree with the ElevenLabs crew that something’s been missing… but now text-to-sound is here. 😍


We hope this edition of the newsletter has been of interest to you. If you’re new here and it’s been useful, please do click subscribe and you can expect a weekly update every Monday from now on. If you’re already a subscriber – thanks for your ongoing interest and support! Either way, if you know others who might benefit from reading this weekly, please forward it on to them too.

Have a great week ahead and let us know if there’s anything we’re missing that we should add to make this newsletter more useful for i) yourself and/or ii) others. This is a fast-moving, ever-evolving space and we greatly value any and all feedback. 🙏

Leave a comment