GPT‑4.1, GPT‑4.1 Mini, and GPT‑4.1 Nano: New OpenAI Models with Context Windows of Up to 1M Tokens

IBL News | New York

OpenAI launched three new models in its API this month: GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano. They feature larger context windows of up to 1 million tokens and improved coding and instruction-following performance over GPT‑4o and GPT‑4o mini.

“To this end, the GPT‑4.1 model family offers exceptional performance at a lower cost,” said the company.

“For tasks that demand low latency, GPT‑4.1 nano is our fastest and cheapest model available; it delivers exceptional performance at a small size with its 1 million token context window.”

Available only via the API, GPT‑4.1 arrives as rival Google has released Gemini 2.5 Pro, which also has a 1-million-token context window, and Anthropic has released Claude 3.7 Sonnet. Chinese AI startup DeepSeek has also launched an upgraded V3.
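Because the models are API-only, developers reach them through OpenAI's standard chat completions endpoint. The sketch below uses the official OpenAI Python SDK and assumes the models are exposed under identifiers such as "gpt-4.1"; it is an illustration of the general API pattern, not code from OpenAI's announcement.

    # Minimal sketch: calling GPT-4.1 through the OpenAI Python SDK.
    # Assumes the OPENAI_API_KEY environment variable is set and that the
    # model is exposed under the identifier "gpt-4.1" (an assumption here).
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4.1",  # or "gpt-4.1-mini" / "gpt-4.1-nano" for the cheaper, faster tiers
        messages=[
            {"role": "system", "content": "You are a helpful coding assistant."},
            {"role": "user", "content": "Summarize the build steps in this README."},
        ],
    )

    print(response.choices[0].message.content)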

OpenAI’s ambition is to create an “agentic software engineer,” as CFO Sarah Friar put it during a tech summit in London last month.

GPT-4.1 costs $2 per million input tokens and $8 per million output tokens. GPT-4.1 mini costs $0.40 per million input tokens and $1.60 per million output tokens, and GPT-4.1 nano costs $0.10 per million input tokens and $0.40 per million output tokens.
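As a back-of-the-envelope illustration of those rates, the snippet below estimates the cost of a single long-context request on each tier; the 100,000-token prompt and 2,000-token completion are hypothetical figures, not from OpenAI.

    # Rough per-request cost at the published per-million-token rates.
    # The token counts are hypothetical, chosen only to illustrate the arithmetic.
    PRICES = {  # model: (input $/1M tokens, output $/1M tokens)
        "gpt-4.1": (2.00, 8.00),
        "gpt-4.1-mini": (0.40, 1.60),
        "gpt-4.1-nano": (0.10, 0.40),
    }

    input_tokens, output_tokens = 100_000, 2_000

    for model, (in_rate, out_rate) in PRICES.items():
        cost = input_tokens / 1_000_000 * in_rate + output_tokens / 1_000_000 * out_rate
        print(f"{model}: ${cost:.4f}")

Under those assumptions, the request would cost roughly $0.22 on GPT-4.1, $0.04 on GPT-4.1 mini, and $0.01 on GPT-4.1 nano.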