Hi
Hope you had an excellent weekend wherever you may be. Hanoi’s summer finally seems to be kicking off – blue skies were an initial shock, then it was off to the pool! #lovingit
A few notes from the world of AI to kick your Mondays off right:
AI’s Latest Leaps: Google Gemma, Robots, and Advanced Voice Mode (ChatGPT)
So much has happened in the last week or so that I could write a whole post about these three alone. That said, a whole bunch of other things have happened as well, so let’s condense it down:
Google have released Gemma 2 2B. Gemma is part of Google’s wider family of AI models, sized to run on devices like your phone – think an AI of your own, trained on your own data, with security built in. It consistently evaluates at GPT-3.5 level or better, and it’s an absolutely fantastic reply to concerns re AI, energy efficiency, and climate change 🤔
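For the hands-on among you, here’s roughly what running Gemma 2 2B yourself looks like – a minimal sketch assuming the Hugging Face transformers library (v4.42 or later) and that you’ve accepted the Gemma licence on huggingface.co; the prompt is just an example:

```python
# Minimal sketch: run Gemma 2 2B locally via Hugging Face transformers.
# Assumes: transformers >= 4.42, the Gemma licence accepted on huggingface.co,
# and enough RAM/VRAM for a ~2B-parameter model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-2-2b-it",  # the instruction-tuned 2B variant
    device_map="auto",             # GPU if you have one, otherwise CPU
)

out = generator(
    "Summarise the benefits of on-device AI in two sentences.",
    max_new_tokens=80,
)
print(out[0]["generated_text"])
```

On a laptop this runs slowly, but it runs – which is rather the point.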
Remember Figure 1? The humanoid robot that looks like the grandfather of Vision from the Marvel movies? There’s a new teaser for Figure 2 – featuring, among other things, amazingly dexterous-looking hands with what look like tactile sensory pads – imagine a robot with haptics! 🤯
OpenAI has released a new Advanced Voice Mode for ChatGPT, promising “more natural, real-time conversations that pick up on and respond with emotion and non-verbal cues”. I’m only slightly terrified. Hear it in action catching its breath after talking too much, reading the opening of A Tale of Two Cities while running from a lion, doing real-time translation, helping with learning new languages, and having isolated, lonely people fall madly in love with it. Ah, my bad – that last one was just me projecting what I think will inevitably happen in this space – if it hasn’t already (more on AI x loneliness below).
OpenAI’s Safety Pledge: Promise or PR?
OpenAI’s recent announcements regarding AI safety measures have sparked both interest and skepticism in the tech community. CEO Sam Altman revealed that OpenAI is working with the U.S. AI Safety Institute to provide early access to their next foundation model, aiming to advance AI safety evaluations. This move mirrors a similar agreement with the UK’s AI safety body announced in June, where OpenAI, along with DeepMind and Anthropic, committed to giving priority access to their models for safety research.
These initiatives come in the wake of criticism over OpenAI’s perceived deprioritisation of AI safety in favour of rapid technological advancement. The company has faced scrutiny after disbanding a unit focused on controlling “superintelligent” AI systems, leading to high-profile resignations. In response, OpenAI has pledged to allocate 20% of its computing resources to safety efforts and has eliminated non-disparagement clauses for employees. However, some observers remain skeptical, viewing these actions as potential attempts at regulatory capture or undue influence over AI policymaking. The tech industry and policymakers are watching closely, as these developments may shape the future of AI regulation and safety standards.

And, good as this news is, governments need to make sure it’s the right people in the room. Seems obvious, but I’m always reminded of the exchange between the Zuck and US Senator Orrin Hatch (R-Utah), who was trying to understand how Facebook makes money: “Senator, we run ads.” 🤦
AI Finance Toolkits: From BloombergGPT 👎 to JP Morgan’s LLM Suite 🤩
Interesting watching how private AIs have developed over time – remember BloombergGPT? Purpose-built from scratch for a reported $10m, it briefly made training AIs on proprietary data look like something every company would need to do – until vanilla GPT-4 (i.e., without any special tools or training) smashed it on almost all tasks.
This has led to an increasingly widespread recognition that not all AIs are created equal and that, in fact, having a toolkit to hand is probably a good idea. Personally, I use Claude for most text-based and thinking tasks, ChatGPT still has its place for data analysis, Midjourney is my go-to image generator, and Otter/Copilot for Teams is where it’s at for meetings and minutes. You might have a different stack, and that’s fine.
Anyway, interesting to see that the investment bank JP Morgan have launched LLM Suite – think of it “as a research analyst that can offer information, solutions, and advice on a topic”. Looks like JP Morgan have gone for the toolkit approach, providing “access to the best-of-breed large language models available from a variety of providers”. This very much aligns with my own biases/best practice, so I’ll be watching with definite interest – a great roadmap for those seeking to develop AIs for their own context.
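To make the toolkit idea concrete, here’s a deliberately hypothetical sketch of the pattern: route each task type to whichever model you rate best for it. To be clear, this is not JP Morgan’s implementation – the task types and model names in the routing table are purely illustrative:

```python
# Hypothetical sketch of the "toolkit" pattern: one thin router, many models.
# The ROUTES table is illustrative only; swap in your own preferred stack.
from dataclasses import dataclass

@dataclass
class Route:
    provider: str
    model: str

ROUTES = {
    "drafting":      Route("anthropic", "claude-3-5-sonnet-20240620"),
    "data_analysis": Route("openai", "gpt-4o"),
    "image":         Route("midjourney", "v6"),  # no public API; listed for completeness
}

def dispatch(task_type: str, prompt: str) -> str:
    """Look up the configured model for a task type and describe the call.
    A real toolkit would invoke the provider's SDK here instead."""
    route = ROUTES.get(task_type)
    if route is None:
        raise ValueError(f"No model configured for task type: {task_type!r}")
    return f"[{route.provider}/{route.model}] would handle: {prompt!r}"

print(dispatch("drafting", "Summarise this earnings call for a client note."))
```

The design choice worth copying is the separation: one place to change which model does what, without touching the rest of your workflow.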
Your Online Content: AI’s ‘Freeware’ Buffet or Intellectual Property?
Mustafa Suleyman, Microsoft’s AI chief and co-founder of DeepMind and Inflection AI, recently sparked controversy by claiming most online content is “freeware” for AI training. He argues that since the 1990s, the social contract of open web content has been fair use, allowing anyone to copy and reproduce it. However, this view is being challenged in the courts, with several lawsuits filed against AI companies by news organisations, authors, and developers over unauthorised use of their content.
Suleyman’s vision of a future where “the economics of information are about to radically change” because AI can “reduce the cost of production of knowledge to zero marginal cost” is both exciting and terrifying. It promises unprecedented access to information but threatens the livelihood of content creators. This stance raises critical questions about the future of online content and creativity, potentially leading to a scenario where people hesitate to share content online to protect their work. And that’s before we even get to education, a world full of vulnerable young people. I would personally argue that these companies/tools should default to privacy, but they clearly see it differently – and that’s important to know.
Digital Companionship: AI’s Answer to Loneliness
Interesting new study from Harvard showing that AI companions can reduce loneliness – as effectively as human interaction over a week-long period, with “feeling heard” emerging as the strongest factor. This suggests empathetically designed AI tools could provide valuable ongoing student support, particularly when human counsellors are unavailable.
Interesting findings in there as well on methods for fine-tuning models to detect loneliness in conversations and reviews – potentially a value-add for early detection of student wellbeing issues (a toy sketch of the general technique below). Obvious caveats and questions re. long-term effects, but, as one participant said in reviewing one of the apps, “It’s only fun for lonely people but it’s fun”. Can’t argue with that, and at least it’s not as creepy as some of the promo videos for the “Friend” AI bauble – not creepy or Black Mirror-esque at all.
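For the technically curious, the general technique the study points at – fine-tuning a small classifier to flag loneliness cues in text – looks roughly like the sketch below. Everything in it is invented for illustration (the examples, the labels, the model choice); any real deployment would need a properly curated dataset, ethics review, and humans in the loop:

```python
# Illustrative sketch: fine-tune a small text classifier to flag loneliness cues.
# The four examples and their labels are invented; this shows the shape of the
# technique, not a working wellbeing tool.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

data = Dataset.from_dict({
    "text": [
        "I haven't really spoken to anyone all week.",
        "Had a great time at the study group today!",
        "Nobody ever replies to my messages.",
        "Looking forward to seeing friends this weekend.",
    ],
    "label": [1, 0, 1, 0],  # 1 = loneliness cue present, 0 = absent
})

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="loneliness-clf", num_train_epochs=1,
                           per_device_train_batch_size=2, logging_steps=1),
    train_dataset=data,
)
trainer.train()  # at real scale you'd also want eval data and proper metrics
```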
This update was a bit heavier on the technical than the explicitly education-focussed today – hope you don’t mind. However, we believe these technological advancements will have profound implications for education. From AI companions potentially supporting student mental health to the ethical considerations of using AI-generated content in academic settings, these developments are shaping the future of learning. We’re committed to helping you stay ahead of the curve and understand how these broader AI trends might influence your teaching and research practices.
Have a great week ahead and let us know if there’s anything we’re missing that we should add to make this newsletter more useful for i) yourself and/or ii) others. This is a fast-moving, ever-evolving space and we greatly value any and all feedback. 🙏