AI and the Future of HE – 11th March 2024

Hi

Hope you all had excellent weekends and that your Monday is off to a great start.  I was playing host to visitors this weekend – always a pleasure – and Hanoi did us a solid by easing up on the incessant rain and smog we’ve been living through these last few weeks #mouldyMarch.  A few headlines from the world of AI to start your Mondays off with a bang:

And then there were three… Anthropic’s new Claude model > OpenAI’s GPT-4 and Google’s Gemini Ultra

Anthropic has launched the Claude 3 model family and it is very impressive.  Multimodal out of the box, and with the enormous context window that has made Claude a favourite for many users (including yours truly), Anthropic’s own testing has it outperforming the current leaders GPT-4 and Gemini Ultra on pretty much everything, including analysis, content creation, and multilingual conversation (see below).  In the meantime, there’s a review by the wonderfully effervescent Karoly Zsolnai-Feher here… what a time to be alive!

One note in the release (i.e., Claude “exhibits near-human levels of comprehension and fluency on complex tasks, leading the frontier of general intelligence”) gets a little more interesting when you learn that Opus, the most powerful model, might even have some form of nascent awareness – or it’s just super fussy about prosciutto and goat’s cheese as a pizza topping. Wild stuff, with flavours of Blake Lemoine (the Google AI researcher put on administrative leave back in 2022 after he went public with claims that LaMDA – an early LLM – had gained sentience), but expert commentators like Jim Fan (NVIDIA) argue that the behaviour can be explained by pattern-matching on human-authored training data, emphasising the need for more rigorous evaluation frameworks and cautioning against reading too much into the model’s language outputs.

Claude 3 > GPT-4 and Gemini Ultra on most metrics (Anthropic)

Sankey – Navigating the AI Revolution in Higher Education (AI x Academic Integrity)

In a thought-provoking presentation from the Academic Integrity week at Charles Darwin University, Professor Michael Sankey addresses the pressing challenges posed by the rise of generative AI tools like ChatGPT in HE.  Sankey emphasises the need for universities to adapt their assessment strategies to maintain academic integrity in the face of contract cheating, impersonation, and the increasing use of AI in the workplace.  He outlines key priorities for assessment reform, including reducing high-stakes exams, increasing formative assessment, and incorporating more authentic and multimodal assessments.

Sankey has a range of interesting ideas for making assessments more AI-proof – some you may have heard before (e.g., personal reflections, focusing on recent events, conducting interactive oral assessments, having students critique AI-generated content, etc.) and some that might be new (they were to me, at least – Generative AI x ePortfolio for Programme-Wide Assessment).  Interesting stuff, and Sankey makes no apologies in pushing universities to stay ahead of the curve and proactively prepare students for a future where AI is ubiquitous in the workplace. “If I was an employer and I knew generative AI would help my workers be more productive, would I want them to use it? Well, I think, of course, that’s a rhetorical question, isn’t it? Of course, we’d want them to use it”.

Developing Critical Thinking and Self-Regulation in an AI-Driven Education Landscape (McMinn)

In an interesting extension of this argument, Sean McMinn, Director at the Center for Education Innovation at HKUST, advocates for a holistic approach to AI in education that extends beyond academic integrity concerns. He argues for the importance of developing students’ AI literacy, critical thinking, and self-regulated learning skills to prevent over-reliance on AI as an external regulation tool.

McMinn proposes a practical exercise integrating dual process theory with a gaming analogy, challenging students to solve complex questions using ChatGPT while consciously applying metacognitive strategies. By reflecting on their thinking process and adjusting their approach based on AI feedback, students can cultivate the self-regulatory skills necessary to navigate an AI-enriched educational landscape and prepare for a future where AI tools are integral to problem-solving in professional and academic contexts.

Boost Your AI Skills with LinkedIn Learning’s Free Course Offerings

LinkedIn Learning is providing a unique opportunity to enhance your AI skills, with hundreds of free courses available until April 5th, 2024. The extensive collection caters to all levels of learners and covers various aspects of AI, including Machine Learning, Large Language Models (LLMs), and practical applications across industries. Notable offerings include “Introduction to Prompt Engineering for Generative AI,” “Using AI Tools for UX Design,” “Midjourney: Tips and Techniques for Creating Images,” and “Getting Hands-on with GPT-4: Tips and Tricks,” all taught by industry experts and providing hands-on practice to develop real-world skills.

Enjoy!  And if you find any good ones, please do report back/let us know here!

Baby meets AI for the first time… AI knows about his favourite movie, ice-cream, and tries to sell him snacks

Concerningly named AI start-up Replikant (how can you be in the tech space and not have seen Blade Runner?!?) “is a 3D animation platform designed to simplify the complexities of 3D setup, allowing you to focus more on your creativity”.  In ~5 minutes, Luc Schurgers sets up Crunchy McGrummels (pic below) – a Pixar-style avatar that he customises to his infant son’s spec and connects to a chatbot for real-time interactions.

Crunchy McGrummels in builder mode (Replikant)

Available via the Epic Games Store here, the use-cases are wild – brand mascot, personal friend, advisor, teacher – I’m going to go with loveable storyteller in this context, but wow… incredible stuff and yet another powerful example of how fast things are moving on so many complementary/simultaneous fronts in the AI space.


We hope this edition of the newsletter has been of interest to you. If you’re new here and it’s been useful, please do click subscribe and you can expect a weekly update every Monday from now on. If you’re already a subscriber – thanks for your ongoing interest and support! Either way, if you know others who might benefit from reading this weekly, please forward it on to them too.

Have a great week ahead and let us know if there’s anything we’re missing that we should add to make this newsletter more useful for i) yourself and/or ii) others. This is a fast-moving, ever-evolving space and we greatly value any and all feedback. 🙏
