OpenAI says ChatGPT’s new default model, GPT-5.5 Instant, makes up fewer false claims and adds new features for personalization and transparency.
In short: OpenAI is making GPT-5.5 Instant the new default model in ChatGPT, and it says the model makes up wrong facts less often.
OpenAI said it is rolling out a new default model for ChatGPT called GPT-5.5 Instant. A model is the “engine” behind ChatGPT, like swapping the motor in a car while keeping the same dashboard.
OpenAI says GPT-5.5 Instant has “significant improvements in factuality,” meaning it should be less likely to confidently say things that are not true. These made-up answers are often called “hallucinations,” which is a common problem with chatbots.
According to OpenAI’s internal tests, GPT-5.5 Instant produced 52.5% fewer hallucinated claims than GPT-5.3 Instant on “high-stakes” questions in areas like medicine, law, and finance. OpenAI also says it reduced inaccurate claims by 37.3% in especially hard conversations that users had previously flagged for factual errors.
OpenAI says the model is also better at everyday tasks, including analyzing uploaded images and knowing when to look things up on the web. It will also give “tighter” responses and use fewer unnecessary emojis.
The company is also expanding personalization, drawing on context from past chats and connected services such as Gmail to tailor replies. Alongside this, a new "memory sources" feature will show what information was used to produce a personalized answer, and users can delete or correct it.
Many people use ChatGPT for advice and explanations, and wrong answers can cause real problems, especially in health, money, or legal situations. If OpenAI’s numbers hold up in real use, fewer hallucinations could make ChatGPT more dependable, but users will still need to double-check important claims.
Source: The Verge AI