Nils Durner's Blog
Ahas, Breadcrumbs, Coding Epiphanies

o1 Pro Mode & Llama 3.3

Quick notes on last week’s foundation model releases: OpenAI o1 was released through ChatGPT: it’s a stark improvement over the o1-preview available through the API; o1-preview is basically not representative of it. The new “o1 Pro Mode” is in a class of its own: it aces almost all of the subject tasks in a survey paper I have under submission, and ... Read more

[UPDATED] Amazon Nova foundation model release

[Update 2025-07-21: AWS has added Amazon Bedrock API keys. I haven’t tried this myself yet, but it could simplify the IAM setup described below.] Since there’s community interest in how to set up AWS to use the new Amazon Nova models, here’s a step-by-step guide to get everyone started: Ensure you have model access: ... Read more
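For reference, once model access and credentials are in place, a minimal invocation could look like the sketch below. It uses Bedrock’s Converse API via boto3; the Nova Lite model id and the region are assumptions, so verify both against the model catalog in your account.

```python
def build_converse_messages(prompt: str) -> list:
    # Converse API message shape: a list of role/content turns,
    # where content is itself a list of blocks ({"text": ...} for text).
    return [{"role": "user", "content": [{"text": prompt}]}]


def invoke_nova(prompt: str) -> str:
    # Assumes boto3 is installed and AWS credentials are configured
    # (e.g. via the IAM setup described in the guide).
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.converse(
        modelId="amazon.nova-lite-v1:0",  # assumed id; check the Bedrock console
        messages=build_converse_messages(prompt),
    )
    return response["output"]["message"]["content"][0]["text"]
```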

AI Agency: Philosophical Foundations

The term “AI Agent” has become increasingly prevalent in discussions about artificial intelligence, yet its meaning remains somewhat ambiguous. This ambiguity stems partly from differing conceptualizations of agency across disciplines and languages. A recent LinkedIn discussion, sparked by Maximilian Seeth’s introduction to AI ethics, highlighted ... Read more

German NER experiments: Presidio, spaCy, GLiNER

As I experimented with the Microsoft Presidio live demo for PII detection, I found that neither of the available models does very well with German when the objective is to also identify organization names. Cloning the Hugging Face space that hosts this demo allows one to enable other models (by setting the environment variable ALLOW_OTHER_MODELS = 1), b... Read more
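As a starting point for similar experiments outside the demo, here is a hedged sketch of running Presidio with a German spaCy pipeline. The model name de_core_news_lg is an assumption (any installed German pipeline should work), and whether organization names surface depends on the underlying NER model’s label mapping, not on Presidio itself.

```python
# NLP engine configuration for Presidio with a German spaCy model
# (model name is an assumption; substitute any installed de_* pipeline).
GERMAN_NLP_CONF = {
    "nlp_engine_name": "spacy",
    "models": [{"lang_code": "de", "model_name": "de_core_news_lg"}],
}


def analyze_german(text: str):
    # Requires: pip install presidio-analyzer spacy
    #           python -m spacy download de_core_news_lg
    from presidio_analyzer import AnalyzerEngine
    from presidio_analyzer.nlp_engine import NlpEngineProvider

    engine = NlpEngineProvider(nlp_configuration=GERMAN_NLP_CONF).create_engine()
    analyzer = AnalyzerEngine(nlp_engine=engine, supported_languages=["de"])
    # Note: organization names are not among Presidio's default entities;
    # whether they are detected depends on the model's label mapping.
    return analyzer.analyze(text=text, language="de")
```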

Computation with LLMs

Popular wisdom holds that Language Models are “not made for computation”, and computation is thus best avoided with them. This is backed by a study that confirms such limitations also for o1 (albeit at a much higher performance level). However, this does not hold true for “Language Models like ChatGPT”, as claimed e.g. by TechCrunch: ChatGPT, as an AI system, extends beyond the basic Large... Read more
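The distinction matters in practice: an AI system can route arithmetic to a deterministic tool instead of having the model compute it in-weights. A minimal sketch of such a tool (my own illustration, not tied to any particular framework), built on Python’s ast module:

```python
import ast
import operator

# Safe arithmetic evaluator: the kind of deterministic tool an AI system
# can delegate computation to, instead of relying on the language model.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}


def safe_eval(expr: str) -> float:
    """Evaluate a purely arithmetic expression; reject anything else."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"unsupported construct: {ast.dump(node)}")

    return walk(ast.parse(expr, mode="eval"))


# e.g. safe_eval("12345 * 6789") → 83810205
```

An orchestrating system would expose safe_eval as a tool/function call, so the model only has to produce the expression, not the numerical result.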