OpenAI has made GPT-5.5 Instant the new default model in ChatGPT, saying it makes fewer mistakes on sensitive topics while staying fast, and adds clearer visibility into memory sources.
OpenAI says GPT-5.5 Instant will replace GPT-5.3 Instant as the standard model most people use in ChatGPT. A “model” is the version of the AI behind the chat, like swapping the engine in a car while keeping the same dashboard.
The company says the new model is designed to reduce “hallucinations,” which are confident-sounding but wrong answers, especially in areas like law, medicine, and finance. OpenAI also says it keeps “low latency,” meaning it should respond quickly.
OpenAI pointed to benchmark results to show the model is stronger at some tasks. It scored 81.2 on the AIME 2025 math test, compared with 65.4 for the older model. It also scored higher on MMMU-Pro, a benchmark that checks how well an AI handles mixed information such as text and images.
The update also focuses on “context management,” meaning the system can better use what it already knows about your past chats and files. OpenAI says GPT-5.5 Instant can use a search tool to refer back to past conversations, files, and Gmail to give more personalized answers. This is rolling out first to Plus and Pro users on the web, with mobile planned later, and broader access expected in the coming weeks.
ChatGPT will also show "memory sources" across models, so you can see what past information a response drew on and delete or correct it. OpenAI says shared chats will not reveal these memory sources to other people.
Many people use ChatGPT for advice, writing help, and quick research. If the system makes fewer mistakes while staying fast, it can be more reliable for everyday decisions, especially when the topic is sensitive.
Source: TechCrunch AI