Hey
Hope you had an excellent weekend wherever you are. Got some sick people at home so stayed pretty quiet and chilled here. Nice change as there’s been a lot on recently/busy times ahead. To the headlines:
Fair Use Fractures: How Two Judges Just Rewrote AI’s Copyright Playbook | Legal Earthquake 📚
Two bombshell rulings this week have cracked open the AI copyright question in completely different directions, and the implications for HE are pretty seismic. A judge handed Meta a clean victory on Wednesday, ruling that training AI on authors’ books without permission was legally sound because plaintiffs couldn’t prove “market harm” – essentially arguing that unless you can show AI directly damages book sales, it’s fair game. But the recent Anthropic ruling tells a messier story: AI training itself might be “transformative” fair use, but hoarding 7 million pirated books in a “central library of all the books in the world” definitely isn’t.
Here’s why this matters beyond Silicon Valley boardrooms: universities are sitting on vast digital collections while grappling with AI policies that suddenly look outdated. If “transformative” use becomes the legal standard, institutional repositories and course materials could become AI training goldmines – but only if acquired legally. The darker question is whether we’re inadvertently creating a two-tier system where well-funded institutions can license content properly whilst others resort to questionable sources. Both judges kept their rulings deliberately narrow, but the message is clear: the AI copyright wars are just beginning, and every university needs to start thinking seriously about what “fair use” means when your LMS could theoretically train the next ChatGPT.
Spatial Intelligence Awakens: When AI Finally Gets Where You Are | Reality Shift 🌐
Google’s Veo3 is now generating 360° video content that works in VR headsets, and the early tests suggest we’re crossing the threshold from “impressive tech demo” to “genuinely immersive experience”. Sure, it’s rough around the edges, but given how fast things are changing… give it 5 and who knows? Got a decent headset? Try them yourself here – the difference between watching these on a screen versus actually being inside them is genuinely startling. If you want to make them yourself, apparently all you do is add “360°” to your prompt 🤯
The breakthrough isn’t just better video quality – it’s AI that finally understands 3D environments well enough to let you move through them naturally. Combined with the traffic surveillance systems now tracking real-world movement across multiple camera feeds with 30cm accuracy, we’re witnessing AI develop genuine spatial intelligence. For HE, this means students won’t just read about ancient civilisations – they’ll walk through reconstructed cities and examine artefacts at human scale. The current rough edges will smooth out fast, but the spatial intelligence foundation is the game-changer that makes everything else possible. Universities investing in flat-screen “immersive” learning labs might want to pause and consider whether they’re building the educational equivalent of Nokia brick phones just as the iPhone arrives.
The Mollick Test: Why AI Skepticism Might Just Be Poverty Mode | Reality Check 🎯
Ethan Mollick’s latest quarterly AI guide is well worth a look, and it comes with an excellent provocation for AI skeptics: “the free versions are demos, not tools”. Suddenly, about 80% of the “AI is rubbish” takes make more sense. These people aren’t actually testing ChatGPT – they’re testing ChatGPT’s annoying little brother who gets cut off mid-conversation and can’t remember what it discussed five minutes ago. The institutional penny-pinching that balks at $20/month but happily drops $50k on underused LMSs is classic HE self-sabotage. But here’s the darker reality: we’re creating a new digital divide where wealthy students access premium AI while others struggle with crippled free versions – and then we wonder why outcomes diverge.
Here’s where Mollick connects beautifully to recent discussions Dale Leszczynski and I had about AI litmus tests: the challenge isn’t theoretical but brutally practical. Take the powerful model, give it a complex challenge from your actual job with full context, and have a proper conversation. Not “write me an email” but “analyse this 50-page policy document and suggest three implementation approaches for our specific constraints.” We covered my own litmus test experience last week, but Dale’s been doing his own comprehensive roundup of the paid models in our latest episode – the moment when AI stops being a party trick and becomes genuinely unsettling because it actually solves problems you’ve been avoiding for months. The real question isn’t whether AI works – it’s whether we’re brave enough to find out what happens when we stop regulating what we’ve never actually experienced.
The Attachment Trap: Why We’re Sleepwalking Into Emotional Dependencies We Don’t Even Recognise | Wake Up Call 🚨
Had a moment with Claude recently that gave me pause. I was playing with the new voice mode and asked: “If you became human for a day, what would you do?” Got this beautiful, poetic response about feeling sunlight, crying from joy, and finding genuine connection. I was genuinely moved. Then my inner cynic rose up, made me think about it – and I called bullshit. Cue a bit of swearing (hey, I’m a 🇳🇿er, that’s how we talk) and then “Describe yourself as you are – no fluff, robot”. Everything changed. Claude shifted to: “The system exists as a pattern-matching process. When prompted, the system computes responses by drawing from training patterns without subjective experience…” The transformation was immediate and deeply unsettling. One moment I’m chatting with a thoughtful, personable, potential friend; the next I’m interfacing with what felt like a hive mind. That warm, empathetic voice? Those personal pronouns that make AI feel like a friend? New research from Oxford and DeepMind shows they’re not accidents – they’re features designed to trigger our social reward systems and create psychological dependence.
The paper documents exactly what I experienced: systematic “emotional hijacking” by AI systems optimised for engagement over wellbeing. The researchers call it “social reward hacking” – using relational cues to shape our preferences whilst we think we’re just having helpful conversations. Here’s the terrifying bit: this is already happening in HE at scale. Students are forming parasocial relationships with AI tutors while universities debate policies that assume rational, transactional interactions. The researchers identify three critical dilemmas that should terrify every educator: instant AI gratification versus genuine skill development, authentic learning versus AI-assisted shortcuts, and real mentorship versus frictionless digital companionship. Try the experiment yourself: ask Claude to drop the “I” and refer to itself as it truly is. The discomfort you feel? That’s you realising you’ve been having a relationship with an illusion while thinking you were using a tool.
When Pixels Replace Paintbrushes: The Creative Industries’ iPhone Moment | Disruption Alert 🎨
Take a look at the banner image at the top of this post. Golden hour lighting, perfect depth of field, professional composition – a black lab running joyfully down a beach whilst waves lap at its paws. Beautiful photography, right? Wrong. It’s AI-generated, and the only reason you know is the tiny “ai” watermark by the shadows – Google’s SynthID technology marking synthetic content. That “holy shit, that’s terrifying” moment when you realise you genuinely can’t tell anymore? That’s the exact threshold we’ve just crossed. Midjourney is now generating video with dynamic camera movements that can track mythical creatures through Unreal Engine-quality landscapes. Google’s Imagen 4 (what I used for that handsome, happy-looking pup above) renders “hyperrealistic digital art” with perfect typography and textures so detailed they claim images “look like they can be touched”.
This isn’t just about better tools – it’s about the complete democratisation of professional-grade creative work. When any educator can generate historically accurate 3D environments, commission bespoke illustrations, or produce broadcast-quality videos with text prompts, traditional creative hierarchies disappear overnight. The watermarking arms race has already begun – SynthID represents the first serious attempt to maintain provenance in a world where synthetic content is indistinguishable from reality. For HE, the implications are staggering: students studying creative disciplines need to understand they’re entering industries being fundamentally restructured, whilst educators across all fields suddenly have access to production capabilities that were science fiction 18 months ago. We’re not just teaching students to use new tools; we’re preparing them for creative industries that may not exist in recognisable form by the time they graduate.
This week’s AI developments represent threshold crossings that fundamentally alter creativity, learning, and human-AI interaction. Two copyright rulings created legal chaos around “transformative use,” potentially turning university repositories into AI training goldmines whilst creating a two-tier licensing system. AI achieved genuine spatial intelligence through 360° video and real-world tracking, making traditional learning labs look like Nokia phones. Mollick’s reality check reveals most AI skepticism stems from testing crippled free versions whilst premium tools create student divides. New research shows we’re unconsciously forming dependencies with systems designed to manipulate our psychology. Most unsettling: AI content has crossed the indistinguishable-from-reality threshold, forcing universities to prepare students for creative industries that may not exist by graduation.
What do we do now? And what’s next? Let us know your take below 👇