OpenAI has announced the launch of GPT-5.4 Mini and GPT-5.4 Nano with faster performance. Here's what they have to offer.
OpenAI describes the two as its most capable small models yet.
"They bring many of the strengths of GPT-5.4 to faster, more efficient models designed for high-volume workloads," the company said in its announcement blog post.
According to OpenAI, GPT-5.4 mini significantly improves over GPT-5 mini across coding, reasoning, multimodal understanding, and tool use, while running more than 2x faster. It also approaches the performance of the larger GPT-5.4 model on several evaluations, including SWE-Bench Pro and OSWorld-Verified.
GPT-5.4 nano is the smallest, cheapest version of GPT-5.4 for tasks where speed and cost matter most. It is also a significant upgrade over GPT-5 nano. "We recommend it for classification, data extraction, ranking, and coding subagents that handle simpler supporting tasks," said OpenAI.
These models are designed for use cases where speed matters just as much as capability. This includes things like coding assistants that need to feel quick and responsive, smaller agents handling background tasks, systems that work with screenshots, and apps that process images in real time.
In these scenarios, the most effective model isn't always the biggest one. It's the one that can respond quickly, use tools reliably, and still handle complex tasks without slowing things down.
In benchmarks, GPT-5.4 mini consistently outperforms GPT-5 mini at similar latencies and approaches GPT-5.4-level pass rates while running much faster, delivering one of the strongest performance-per-latency tradeoffs for coding workflows.
GPT-5.4 mini is also strong on multimodal tasks, particularly computer use. The model can quickly interpret screenshots of dense user interfaces, making it well suited to computer-use tasks. On OSWorld-Verified, GPT-5.4 mini approaches GPT-5.4 while substantially outperforming GPT-5 mini.
GPT-5.4 mini is now available across the API, Codex and ChatGPT.
In the API, the model supports text and image inputs, along with features like tool use, function calling, web search, file search and computer-use capabilities. It offers a 400K-token context window and is priced at $0.75 per million input tokens and $4.50 per million output tokens.
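As a quick illustration of the pricing above, the following Python sketch estimates the dollar cost of a single GPT-5.4 mini request from its token counts. The helper name and the token figures in the example are illustrative, not from OpenAI:

```python
# Estimate GPT-5.4 mini API cost from token counts, using the
# published rates of $0.75 per million input tokens and
# $4.50 per million output tokens.
MINI_INPUT_RATE = 0.75 / 1_000_000   # dollars per input token
MINI_OUTPUT_RATE = 4.50 / 1_000_000  # dollars per output token

def estimate_mini_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in dollars."""
    return input_tokens * MINI_INPUT_RATE + output_tokens * MINI_OUTPUT_RATE

# Example: a request with 10,000 input and 2,000 output tokens
print(f"${estimate_mini_cost(10_000, 2_000):.4f}")  # → $0.0165
```

Output tokens dominate the bill at these rates, so prompt-heavy, response-light workloads (classification, extraction) stay especially cheap.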
In Codex, GPT-5.4 mini is available across the app, CLI, IDE extension and web version. It uses only about 30 percent of the GPT-5.4 quota, making it a more cost-effective option for simpler coding tasks. Codex can also assign lighter tasks to GPT-5.4 mini, allowing the main model to focus on more complex work.
In ChatGPT, GPT-5.4 mini is accessible to Free and Go users through the "Thinking" option in the menu. For other users, it works as a fallback when GPT-5.4 Thinking reaches its usage limits.
GPT-5.4 nano is currently available only through the API. It is priced lower at $0.20 per million input tokens and $1.25 per million output tokens.
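Going by the listed rates alone, a back-of-the-envelope comparison shows how the two models' costs diverge on identical traffic. The token volumes below are made up for illustration, and the dictionary keys are just labels, not official API model identifiers:

```python
# Compare GPT-5.4 mini vs. GPT-5.4 nano cost for the same workload,
# using the per-million-token prices quoted in the article.
PRICES = {
    "gpt-5.4-mini": {"input": 0.75, "output": 4.50},  # $ per 1M tokens
    "gpt-5.4-nano": {"input": 0.20, "output": 1.25},
}

def workload_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a workload at the quoted per-million-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Illustrative daily workload: 50M input tokens, 10M output tokens
mini = workload_cost("gpt-5.4-mini", 50_000_000, 10_000_000)
nano = workload_cost("gpt-5.4-nano", 50_000_000, 10_000_000)
print(f"mini: ${mini:.2f}, nano: ${nano:.2f}")  # mini: $82.50, nano: $22.50
```

For this mix, nano comes out roughly 3.7x cheaper, which is why OpenAI positions it for high-volume supporting tasks like classification and ranking.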
