
OpenAI Launches ChatGPT Images 2.0 With Web Search And Multiple Images From One Prompt
Key Takeaways
- Generates multiple images from a single prompt.
- Web search and reasoning guide image creation.
- Available on ChatGPT and Codex.
Images 2.0 goes live
OpenAI has rolled out ChatGPT Images 2.0, an upgraded image generator built into its chatbot, and the new system is designed to produce more than one image from a single prompt while adding “thinking capabilities” that can search the web.
TechCrunch reports that OpenAI declined to answer a question in a press briefing about what kind of model is powering ChatGPT Images 2.0, but said the new model has “thinking capabilities,” which give it the ability to search the web, make multiple images from one prompt, and double-check its creations.

The Verge similarly says OpenAI’s update allows Images 2.0 to search the web when a thinking model is selected, and to “reason through the structure of the image before generating.”
SiliconANGLE adds that OpenAI launched ChatGPT Images 2.0 alongside a new technical training service called Codex Labs, and says the image generator can produce images with a maximum width of 2,000 pixels in multiple aspect ratios.
Multiple outlets tie the release to a specific availability window: TechCrunch says “All ChatGPT and Codex users will be able to access Images 2.0 starting Tuesday,” while Engadget says “Images 2.0 is available starting today for all ChatGPT users.”
In addition to the core image model, OpenAI is also making the gpt-image-2 API available, with pricing dependent on the quality and resolution of outputs, according to TechCrunch.
Text, pixels, and formats
A central focus of the Images 2.0 rollout is improved text rendering and more reliable placement of small visual elements, with OpenAI describing the model as capable of preserving fine-grained details that “often break image models.”
TechCrunch quotes OpenAI saying Images 2.0 can render “small text, iconography, UI elements, dense compositions, and subtle stylistic constraints,” and says it can do so “all at up to 2K resolution.”

SiliconANGLE likewise emphasizes that the tool is better at generating images that contain Japanese, Korean, Chinese, Hindi and Bengali text, and says it is also improved for “small text, interface elements, icons and other visual assets that historically posed a challenge.”
PetaPixel frames the update as expanding Images 2.0 from a creative tool into a “visual workflow platform,” and quotes OpenAI: “Images are a language, not decoration. A good image does what a good sentence does — it selects, arranges, and reveals.”
The Decoder adds that the model handles text “in general, and especially in non-Latin scripts, significantly better,” and states that aspect ratio support ranges from 3:1 to 1:3 while resolution goes up to 2K through the API.
Engadget describes the release as a “step change” for image generation models, particularly for following instructions in detail, rendering dense text, and placing and relating objects in a scene.
Thinking mode and web search
OpenAI’s “thinking capabilities” are positioned as the mechanism that changes how Images 2.0 behaves, moving from single-shot generation toward multi-step workflows that can incorporate web information and verify outputs.
TechCrunch says OpenAI explained that the new model’s “thinking capabilities” allow it to search the web, make multiple images from one prompt, and double-check its creations, and it adds that this enables Images 2.0 to create marketing assets in various sizes as well as multi-paneled comic strips.
The Verge reports that when a thinking model is selected, the chatbot’s image generator can pull information from the web, create visual explainers based on files you upload, and “reason through the structure of the image before generating.”
The Decoder describes the same idea as “reasoning and web search,” saying the model can now create up to eight consistent images from a single prompt and that it “thinks” before it generates, spending more or less time reasoning depending on the selected mode.
PetaPixel quotes OpenAI saying Images 2.0 is “our first image model with thinking capabilities,” and says that with thinking or pro models it can analyze tasks more deeply, incorporate real-time information, and generate multiple outputs in a single request.
SiliconANGLE adds that users with paid plans can expand the tool’s knowledge base by activating ChatGPT’s “thinking” and “pro” reasoning modes, and says the two settings enable Images 2.0 to round out information with data from the public web.
API pricing and cost
Beyond the chatbot experience, OpenAI is also exposing the new image model through an API under the name gpt-image-2, and The Decoder provides detailed token-based pricing figures that vary by input and output tokens.
It says OpenAI charges on a token basis: $8 per million image input tokens and $30 per million image output tokens, and it adds that text tokens cost $5 (input) and $10 (output) per million.
The Decoder also reports that cached inputs are cheaper, and it gives example per-image costs for a 1024 x 1024 image at low, medium, and high quality, stating “$0.006,” “$0.053,” and “$0.211.”
It further notes that larger resolutions like 1024 x 1536 “actually come in slightly cheaper,” listing “$0.005,” “$0.041,” and “$0.165,” and it compares those to predecessors, stating that a 1024 x 1536 image at high quality costs $0.165, compared with $0.20 and $0.25 for the previous GPT Image models.
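The Decoder's per-image figures follow directly from the per-token rates. As a minimal sketch, the arithmetic looks like this; the rates are the ones reported above, but the per-image token counts used in the example are illustrative assumptions, since the article does not publish how many tokens a given image consumes:

```python
# Per-request cost calculator based on The Decoder's reported gpt-image-2
# token pricing: $8 / 1M image input tokens, $30 / 1M image output tokens,
# $5 / 1M text input tokens, $10 / 1M text output tokens.

RATES = {  # USD per million tokens
    "image_input": 8.00,
    "image_output": 30.00,
    "text_input": 5.00,
    "text_output": 10.00,
}

def request_cost(tokens: dict[str, int]) -> float:
    """Sum cost across token categories for one API request."""
    return sum(RATES[kind] * count / 1_000_000 for kind, count in tokens.items())

# Illustrative only: if a generated image consumed ~7,000 output tokens
# (an assumed figure), it alone would cost 7000 * 30 / 1e6 = $0.21,
# which is in the same range as the reported high-quality per-image prices.
print(f"${request_cost({'image_output': 7_000}):.4f}")
```

Under this model, the quoted per-image prices simply correspond to how many output tokens each quality and resolution setting consumes, which is consistent with TechCrunch's note that pricing depends on the quality and resolution of outputs.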
WIRED describes the model’s global availability for ChatGPT and Codex users, with a more powerful version available for paying subscribers, while TechCrunch says the company will make the gpt-image-2 API available with pricing dependent on the quality and resolution of outputs.
The Decoder also warns that “API outputs above 2K are still in beta and may produce inconsistent results,” which ties back to the repeated emphasis on up to 2K resolution in other coverage.
Codex Labs and rollout
OpenAI’s Images 2.0 launch is paired with a push to help organizations adopt its developer tools, including a new Codex Labs initiative described as a training service for deploying the Codex programming assistant.
SiliconANGLE says OpenAI debuted a new technical training service called Codex Labs, designed to help organizations adopt OpenAI’s Codex programming assistant, and it says the offering provides access to workshops and other training to make it easier for a company’s developers to adopt the tool.

The 9to5Mac coverage similarly says OpenAI is scaling up Codex for enterprise with a new Codex Labs initiative and quotes the program’s goal as helping enterprises put Codex to work on real problems through “hands-on workshops and working sessions.”
TechCrunch also notes that OpenAI will make Images 2.0 available to all ChatGPT and Codex users starting Tuesday, with paid users able to generate more advanced outputs, and it says the model’s knowledge cutoff is December 2025.
The Verge adds that the thinking capabilities are available to ChatGPT Plus, Pro, Business, and Enterprise subscribers, and Engadget says Plus and Pro subscribers get access to more advanced outputs.
Taken together, the rollout ties the new image model’s capabilities—web-enabled thinking, improved text rendering, and up to 2K resolution—to a broader enterprise adoption effort through Codex Labs.