Nils Durner's Blog
Ahas, Breadcrumbs, Coding Epiphanies

IBM Granite Models - for Agents?

Recently, I came across a LinkedIn post by Armand Ruiz, VP of Product - AI Platform at IBM, discussing the differences between chatbots, copilots, and agents. While the post provided a general overview of these AI categories, it prompted me to inquire about a more specific topic: IBM’s Granite models. Having recently participated in the NVIDIA ... Read more

Exploring Audio Input with Gemini 1.5 Pro

Simon Willison recently asked about experiments with audio input to Google Gemini 1.5 Pro and Flash models, noting that the ability to query audio files beyond simple transcription is an intriguing and potentially underexplored capability. Michael Gackstatter reported issues processing German audio with Gemini 1.5 Pro, receiving a “cannot proce... Read more

Benchmarking AI Vision

Ethan Mollick, Associate Professor at The Wharton School, recently shared two key developments: the Charxiv benchmark, a challenging real-life chart-reading test on which humans achieve 80% accuracy. Interestingly, Claude 3.5, currently the best-performing Large Language Model (LLM) on this test, manages 60% accuracy. The Chatbo... Read more

Gemma-2: Impressive or Just Well-Dressed?

Google recently released their open-source Gemma-2 models (27b and 9b variants), which have been gaining attention in the AI community. In a LinkedIn post, Peter Gostev, Head of AI at Moonpig, highlighted that the 27b variant is now ranking slightly higher than Meta’s 70b model, despite being 2.5 times smaller. However, digging into the technic... Read more

AI Wiki

A colleague pitched the idea of creating an “AI Wiki” with best practices. I am not fond of the idea because there is still too much nuance to consider, which would likely render any Wiki article flat-out incorrect. A major reason for this is that perhaps most users are using AI systems and not really the models themselves - and there is ... Read more