Nils Durner's Blog
Ahas, Breadcrumbs, Coding Epiphanies

German Studies on AI Adoption: Between Skepticism and Enthusiasm

Two recent studies provide insights into the adoption and perception of artificial intelligence in Germany. The Bitkom study on “AI Usage in Germany” and Deutsche Telekom’s international YouGov survey paint a nuanced picture that challenges some common assumptions. Surprising Findings from the Bitkom Study: The Bitkom study, presented at the Di... Read more

Anthropic Computer Use: Ideas for Agents

Anthropic have published the “Computer Use Demo” in their Quickstarts GitHub repository. The approach taken is fundamentally different from my Aileen project: it’s not confined to a browser controlled through Selenium and very tight guardrails, but instead controls a full GNU/Linux desktop - which is separate from the user desktop session. On the... Read more

Questioning the 'LLMs Can't Reason' Claim

A recent paper from Apple about reasoning deficits has been widely reposted as “LLMs Can’t Reason”. The study claims to demonstrate significant limitations in the reasoning capabilities of large language models (LLMs). Gary Marcus, author of one of Forbes’ “7 Must Read Books in AI”, railed: “There is just no way you can build reliable agents on this f... Read more

Redaction without recompression

As I was compiling bug reports for a Vision Language Model vendor, I found the need to redact images without JPEG recompression: simply re-saving a particular sample image that had originally triggered a repetition loop with the MLLM changed the image in such a way that I could not reproduce the problem. Solution: asenior/Jpeg-Redaction-Library.... Read more
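To illustrate why a plain re-save breaks reproducibility, here is a minimal Pillow sketch (this is not the asenior/Jpeg-Redaction-Library API, and the file names are placeholders): a decode-and-re-encode round trip changes both the compressed bytes and the decoded pixel values.

```python
# Demonstrates JPEG generation loss: a plain open-and-resave changes the file,
# which is why redaction must avoid recompression to keep a bug reproducible.
# "sample.jpg" is a placeholder for the image that triggered the model bug.
import hashlib

from PIL import Image, ImageChops

ORIGINAL = "sample.jpg"        # placeholder path
RESAVED = "sample_resaved.jpg"

# Re-save the image: Pillow decodes to pixels and re-encodes with its own
# settings, so the compressed data no longer matches the original file.
Image.open(ORIGINAL).save(RESAVED, quality=95)

for path in (ORIGINAL, RESAVED):
    with open(path, "rb") as f:
        print(path, hashlib.sha256(f.read()).hexdigest())

# Pixel-level differences introduced by the recompression round trip
diff = ImageChops.difference(Image.open(ORIGINAL).convert("RGB"),
                             Image.open(RESAVED).convert("RGB"))
print("max per-channel pixel delta:", max(m for _, m in diff.getextrema()))
```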

Entropy-based "Shrek Sampling"

The community is porting the “Shrek Sampler” to different hardware (MLX) and Transformer architectures, just days after Entropix was first released. Nice visualization from the MLX port: 9.9 vs. 9.11. Pierre-Carl Langlais, co-founder of Pleias.fr, posted his Colab notebook that runs Entropix with SmolLM-360M (the original release uses Llama 3.2 ... Read more
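For readers unfamiliar with the idea: Entropix gates its decoding decisions on the entropy (and varentropy) of the next-token distribution. The sketch below is only a rough illustration of that core idea in PyTorch; the thresholds and the simple two-way branch are my simplifications, not the project's actual heuristics.

```python
# Rough sketch of entropy-gated sampling: low entropy -> commit greedily,
# high entropy -> explore by sampling at a higher temperature.
# Thresholds and the two-way branch are illustrative, not Entropix's real logic.
import torch
import torch.nn.functional as F

def entropy_gated_sample(logits: torch.Tensor,
                         low: float = 0.5,
                         high: float = 2.5,
                         explore_temp: float = 1.3) -> int:
    """Pick the next token id from a 1-D logits tensor."""
    probs = F.softmax(logits, dim=-1)
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(probs * log_probs).sum().item()  # in nats

    if entropy < low:
        # Model is confident: take the argmax token.
        return int(torch.argmax(logits))
    # Model is uncertain: optionally flatten the distribution, then sample.
    temp = explore_temp if entropy > high else 1.0
    probs = F.softmax(logits / temp, dim=-1)
    return int(torch.multinomial(probs, num_samples=1))

# Example with a toy distribution over a 5-token vocabulary
print(entropy_gated_sample(torch.tensor([4.0, 0.1, 0.1, 0.1, 0.1])))
```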