Hi
Hope you all had an excellent weekend, and apologies for the later-than-usual post – I’ve been in Melbourne for a work trip and took a couple of days off to hang out with my mum, who was over visiting from NZ. Melbourne was on excellent form, as were all the various work colleagues, friends, and family I was lucky enough to see #❤️Melbourne
Anyway – keen for some belated headlines from the world of AI?
Personal AI vs. Open Power: Meta & Alibaba’s Duelling Visions | AI Strategy ⚔️
This week highlights two distinct strategic paths for the future of open source AI. Meta unveiled its standalone AI app, integrating a personalised assistant powered by its Llama 4 models (released under a community license) deeply into its ecosystem (apps, web, glasses). At around the same time, Alibaba dropped Qwen3 – a suite of powerful models released under an open source license, claiming performance rivalling top proprietary models. While both involve publicly available models, the strategies diverge sharply: Meta keeps its models within a polished, data-rich application in its own ecosystem, whereas Alibaba seems to be releasing its models primarily as open foundational tools to empower broad, external innovation.
Alibaba’s launch is a direct challenge to leading performance benchmarks and nods to what will likely be an increasing trend around “BYO AI” – empowering organisations and individuals to deploy and customise powerful models outside of the big labs’ offerings (think OpenAI or Anthropic) or integrated vendor ecosystems (like Meta). This raises critical questions for us in HE and education more broadly: does this democratise advanced AI access, or just introduce new problems around responsible deployment, governance, and validation? As with a lot of this space, there are no easy answers – beyond adding a new educational challenge for both educators and students: we now need to prepare students both for working within closed ecosystems and for understanding the implications and responsibilities of BYO AI approaches.
AI Job Killer or Creator? Navigating the Emerging Talent Paradox | Future of Work 🕝
A striking paradox is emerging in the AI-driven job market, directly challenging the traditional value proposition of HE focused on preparing graduates for specific entry-level roles. On one hand, alarming data highlighted by Derek Thompson in The Atlantic shows the job market for recent college graduates deteriorating noticeably, potentially an early sign that AI is automating entry-level white-collar tasks. At the same time (because how often do we get a simple answer in the age of AI?!?), McKinsey reports a persistent tech talent gap, finding that AI is currently increasing demand for skilled individuals needed to implement and manage these complex systems. This squeeze puts intense pressure on the assumption that a degree guarantees a specific type of career start.
What’s the answer here? Been thinking about this a lot recently – maybe HE’s way forward shifts from focusing solely on domain knowledge/skills for potentially automatable tasks towards demonstrably fostering the adaptable skills (like the critical AI literacy, analytical thinking, resilience, and agility that the WEF’s Future of Jobs report says employers are looking for) and the specific “AI leadership” qualities the excellent Carlo Iacono identifies – deep curiosity, ethical grounding, translation abilities, and systems thinking. By cultivating graduates who can navigate ambiguity, ethically steer AI implementation, and bridge the gap between technical potential and real-world application, HE can solidify its relevance and offer enduring value in the AI era.
AI Recalls? Sycophancy Shows Risks of Deploying Before Understanding | Tech Governance 😬
Any ChatGPT users found it’s become a bit much recently – a bit over-eager to please? Turns out you weren’t imagining things – OpenAI had over-optimised 4o based on short-term user feedback (those 👍/👎 buttons). Their recall was relatively quick (see Sycophancy in GPT-4o: What happened and what we’re doing about it), but it highlights a worrying trend or two: i) AI labs increasingly tune models not just for capability, but for user appeal; and ii) this bug wasn’t caught internally – the update was deployed at scale globally to potentially hundreds of millions of users before being identified and the reactive fix deployed 😲
Don’t want to overcook this, but it’s potentially a stark warning and it raises the question: how many other persuasive bots are out there that have not yet been revealed? After all, back in AI pre-history (late 2023) Altman famously noted that we might hit superhuman persuasion “well before” more generalised superintelligence … and we’ve had plenty of chatter about AGI of late. Maybe we’re here already… and, if we are, what’s the recall mechanism?
Preventing AI Model Collapse: A Call for Sustainable Data | Responsible AI ♻️
Interesting experiment: “Create the exact replica of this image, don’t change a thing” – 74 times. You’d think that the omni series from OpenAI could handle this, but apparently not – the changes from the original bring to mind making photocopies of photocopies… while this image and others like it go viral, it also provides a timely nod to a phenomenon known as “model collapse”. This is where models trained on synthetic data (i.e., data from other AIs) enter a loop that leads to reduced originality and/or amplified errors creeping in over time.
Why is this interesting? Well, given the labs have already hoovered up enormous amounts of the world’s “real” data, with legal and logistical hurdles preventing them from easily accessing the rest of it, they are increasingly exploring “synthetic” (i.e., AI-generated) data as an alternative. The viral experiment above is a timely reminder of the importance of developing thoughtful approaches: focusing on sustainable data strategies that maintain grounding in real-world information, improving methods for identifying and evaluating synthetic data within training sets, and continuing research into ensuring model stability and accuracy over generations. Proactive planning and action can help ensure the AI future reflects the richness of human knowledge, not just simulations of itself. 🤖 #rememberthe7Ps
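For the technically curious, the feedback loop behind model collapse can be sketched in a few lines. This is a hypothetical toy simulation (not any lab’s actual training pipeline): each “generation” of model is fit only to a finite sample of the previous generation’s output, so rare “tokens” can drop out of the distribution – and once gone, they can never return.

```python
import random
from collections import Counter

# Toy illustration of model collapse (hypothetical, for intuition only):
# each generation is "trained" on a small synthetic sample drawn from the
# previous generation's distribution, so diversity can only erode.
random.seed(0)

VOCAB = list(range(50))   # 50 distinct "tokens" in the original data
SAMPLE_SIZE = 30          # each generation sees only 30 examples

# Generation 0: the "real" data covers the whole vocabulary uniformly
weights = {tok: 1.0 for tok in VOCAB}

diversity = []            # distinct tokens observed at each generation
for generation in range(10):
    tokens, probs = zip(*weights.items())
    sample = random.choices(tokens, weights=probs, k=SAMPLE_SIZE)
    counts = Counter(sample)
    diversity.append(len(counts))
    # The next "model" is fit purely to this synthetic sample: any token
    # not drawn this round vanishes from the distribution for good.
    weights = dict(counts)

print(diversity)  # distinct-token count tends to shrink over generations
```

Because each round samples fewer examples than there are tokens, the surviving vocabulary shrinks and never recovers – a crude analogue of the lost originality in those copy-of-a-copy images.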
AI as Our Philosophical Mirror: What Princeton Students Discovered | AI and Humanities 🪞
In a thought-provoking New Yorker essay titled Will the Humanities Survive Artificial Intelligence?, D. Graham Burnett (Princeton) shares what he calls “the most profound experience of my teaching career” – the moment his students engaged with AI chatbots about the nature of attention and consciousness, leading to some profound self-reflection. Students challenging ChatGPT on musical beauty or leading AI through Socratic dialogues weren’t just testing technology – they were encountering a reflection of human cognition stripped of social expectations. One student’s revelation captured the essence: talking to AI felt uniquely liberating because it offered “pure attention” without judgment – “I don’t think anyone has ever paid such attention to me and my thinking” 💭
This mirror effect creates what Burnett calls “a new consciousness of ourselves” – where AI becomes not our replacement but our philosophical counterpoint. Rather than threatening the humanities, these systems might actually revitalise them by forcing us back to core questions about meaning and existence that data alone can’t resolve. “What we’re entering is a pivot where we turn from anxiety and despair to an exhilarating sense of promise,” Burnett writes… Anyone noticed this in their own AI interactions? The most powerful application might not be what these systems can do, but what they reveal about who we are. Thanks Jonathan Boymal for the share ✨
The road ahead is growing increasingly complex: we’re grappling with deployment risks (e.g., robot sycophants) that reveal gaps in our control, labs are rolling out market strategies that pull between closed ecosystems and open power, and the training data behind it all risks a recursive ‘model collapse’. While AI offers profound potential – even acting as a mirror to ourselves – it simultaneously disrupts talent markets and challenges the core value, even the USP, of HE itself. Effectively navigating this enormous disruption requires urgent focus not just on capability, but on understanding, governance, sustainable data practices, and cultivating adaptive human skills before the technology completely reshapes our world.