OpenAI API: 4.1 in, 4.5 out

OpenAI have announced their newest flagship model family: GPT-4.1. It comes in three sizes: GPT-4.1, 4.1-mini, and 4.1-nano.

Two versions of GPT-4.1 were available in recent weeks for community testing via OpenRouter, as “Optimus Alpha” and “Quasar Alpha”. There, I noticed that the gender bias sample showcased in the Stanford HAI AI Index Report reproduced as well - but didn’t show with GPT-4o. The released version of GPT-4.1 exhibits the same bias.

For coding tasks - one of the advertised strengths - its behaviour has changed from GPT-4o: it sometimes produces pseudo diff files. These lack line numbers, so they won’t apply with GNU patch - an interesting trait nonetheless. And it’s fast, compared to GPT-4.5.
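
As an aside, here is a minimal sketch of how one might guard against such pseudo diffs before shelling out to GNU patch - the hunk-header regular expression and the helper names are my own illustration, not anything GPT-4.1 or patch prescribes:

```python
import re
import subprocess

# "@@ -start,count +start,count @@" hunk headers carry the line numbers
# GNU patch needs; the pseudo diffs mentioned above omit them.
HUNK_HEADER = re.compile(r"^@@ -\d+(?:,\d+)? \+\d+(?:,\d+)? @@", re.MULTILINE)

def looks_like_unified_diff(diff_text: str) -> bool:
    """Return True if at least one line-numbered hunk header is present."""
    return bool(HUNK_HEADER.search(diff_text))

def apply_diff(diff_text: str, workdir: str = ".") -> None:
    """Reject pseudo diffs early, otherwise pipe the text into GNU patch."""
    if not looks_like_unified_diff(diff_text):
        raise ValueError("pseudo diff without line-numbered hunk headers")
    # -p1 strips the leading path component (a/..., b/...) of git-style diffs
    subprocess.run(["patch", "-p1"], input=diff_text, text=True,
                   cwd=workdir, check=True)
```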

GPT-4.5 is set to be removed from the API on 2025-07-14, per the deprecation page. Ethan Mollick comments: “You can increasingly only access the high power, high cost models (GPT-4.5, Deep Research) through ChatGPT while the API supports cheap fast models (the new GPT-4.1).” I don’t agree, however: the recent release of the o1-pro model on the API is a counter-example. Further, there is no confirmation that ChatGPT will retain GPT-4.5, and Azure might discontinue it as well (per the Azure discontinuation page, GPT-4-32K had a later discontinuation date on Azure than on the OpenAI API).

[Update 2025-04-15]

  • Ethan Mollick countered on X, alleging that neither Operator nor Deep Research is available on the API. Not true, however: the building blocks of Operator are available, as the model “computer-use-preview” and the tool “computer_use_preview” in the Responses API (my response) - see the first sketch after this list
  • Simon Willison remarks that GPT-4.1 is trained “up to May 31, 2024”. He also notes that describing an image with GPT-4.1-nano cost him just 0.0232 US cents.
  • Per the Prompting Guide, ideal prompting is more involved than simple natural-language instructions. In particular, the advice to “ideally place your instructions at both the beginning and end of the provided context” for long contexts is back; a small sketch of that pattern follows below as well. The prompting advice given in my article about process visualization still seems solid, though.
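
On the Operator building blocks: a minimal sketch of a Responses API call with the computer use tool. The model and tool names are the ones mentioned above; the display size, environment and prompt are placeholder values of mine:

```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="computer-use-preview",
    tools=[{
        "type": "computer_use_preview",
        "display_width": 1024,       # placeholder viewport size
        "display_height": 768,
        "environment": "browser",    # assumption: a browser-based session
    }],
    input=[{"role": "user", "content": "Open example.org and summarise the page."}],
    truncation="auto",               # auto truncation is required with the computer use tool
)

print(response.output)
```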
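
And for the long-context advice, a tiny, hypothetical helper that repeats the instructions before and after the context - the function name and the <context> tags are my own choices, not something the Prompting Guide mandates:

```python
def build_long_context_prompt(instructions: str, documents: list[str]) -> str:
    """Place the instructions both before and after a long context block."""
    context = "\n\n".join(documents)
    return (
        f"{instructions}\n\n"
        f"<context>\n{context}\n</context>\n\n"
        f"{instructions}"
    )

# The same task statement brackets the (potentially very long) documents.
prompt = build_long_context_prompt(
    "Summarise the key decisions recorded in the documents below.",
    ["Document 1 ...", "Document 2 ..."],
)
```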