OpenAI is rolling out significant updates to ChatGPT in direct response to user complaints about its latest GPT-5 model. The changes, announced by CEO Sam Altman on X, address key frustrations around limited flexibility, removed features, and the model’s interaction style.
The updates come after users criticized OpenAI for removing popular features when GPT-5 launched, particularly the disappearance of GPT-4o from the model selection menu and restrictive usage limits on the new reasoning-focused capabilities. The turnaround ranks among the fastest responses to user feedback in OpenAI’s recent history.
ChatGPT now offers three distinct speed modes for GPT-5: Auto, Fast, and Thinking. Each mode optimizes the AI’s processing approach for different types of tasks.
Auto mode serves as the default option, automatically balancing response speed with analytical depth based on the complexity of your request. When you ask a simple question like “What’s the weather like?”, Auto delivers a quick response. For more complex requests requiring analysis, it takes additional time to provide thorough answers.
Fast mode prioritizes speed above all else, making it ideal for quick brainstorming sessions, simple writing tasks, or when you need rapid-fire responses during time-sensitive work. Think of it as ChatGPT’s equivalent of a quick Google search—immediate but potentially less comprehensive.
Thinking mode represents the opposite approach, dedicating extra processing time to complex reasoning tasks. This mode excels at mathematical problems, detailed analysis, strategic planning, and multi-step logical challenges. While responses take longer, the quality of reasoning typically improves significantly for sophisticated requests.
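These speed modes are controls in the ChatGPT interface, but the underlying trade-off between latency and reasoning depth is also exposed to developers. As a rough analogue, here is a minimal sketch using the OpenAI Python SDK’s Responses API, assuming GPT-5 is available under the model name gpt-5 and accepts a reasoning-effort setting as other reasoning models do; the prompt and effort level are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Request deeper reasoning on a complex task, roughly analogous to
# choosing "Thinking" in ChatGPT's speed-mode picker. Swap "high"
# for "low" when speed matters more than depth.
response = client.responses.create(
    model="gpt-5",                 # assumed API name for GPT-5
    reasoning={"effort": "high"},  # assumed to be supported, as with o-series models
    input="Outline a three-year market-entry strategy for a mid-size SaaS firm.",
)
print(response.output_text)
```

The ChatGPT modes do not map one-to-one onto API parameters; the sketch only illustrates the same speed-versus-depth dial.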
OpenAI has raised the weekly message limit for GPT-5 Thinking mode to 3,000 messages, a substantial increase over the previous cap. This change addresses complaints from heavy users who frequently hit usage limits during intensive work sessions.
When users exceed the 3,000-message threshold, they gain access to GPT-5 Thinking mini, a streamlined version that maintains reasoning capabilities while managing computational costs. This tiered approach ensures continued access rather than complete cutoffs.
The Thinking model now supports a context limit of 196,000 tokens—roughly equivalent to 150,000 words or about 300 pages of text. In practical terms, this means ChatGPT can now process entire business reports, lengthy research papers, or extended conversation histories without losing track of earlier context. Previously, users had to break large documents into smaller chunks for analysis.
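For a rough sense of whether a given document fits in that window, token counts can be estimated locally. Below is a minimal sketch using the tiktoken library, assuming the o200k_base encoding used by recent OpenAI models (GPT-5’s exact tokenizer may differ, so treat the numbers as estimates); the file name is illustrative:

```python
import tiktoken

CONTEXT_LIMIT = 196_000  # tokens available to GPT-5 Thinking, per the announcement

def fits_in_context(text: str, reserve_for_reply: int = 4_000) -> bool:
    """Estimate whether `text` fits in the context window with room left for a reply."""
    enc = tiktoken.get_encoding("o200k_base")  # assumed encoding; may differ for GPT-5
    n_tokens = len(enc.encode(text))
    print(f"~{n_tokens:,} tokens (about {int(n_tokens * 0.75):,} words)")
    return n_tokens + reserve_for_reply <= CONTEXT_LIMIT

# Illustrative usage with a hypothetical report on disk.
with open("annual_report.txt", encoding="utf-8") as f:
    print(fits_in_context(f.read()))
```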
Following significant user backlash, GPT-4o has returned to the model picker for all paid ChatGPT subscribers. OpenAI had removed this popular model when GPT-5 launched, forcing users to rely solely on the newer system despite many preferring GPT-4o’s response style and speed.
A new “Show additional models” toggle in ChatGPT’s web settings now provides access to the full range of available models, including o3, GPT-4.1, and GPT-5 Thinking mini. This approach keeps the interface clean for casual users while giving power users access to specialized models.
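The toggle applies to ChatGPT’s web interface; developers reaching these models through the API can check which ones their account exposes with the SDK’s model-listing endpoint. A minimal sketch (the IDs returned depend on your account and may not match the ChatGPT picker):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Print every model ID the API account can call; entries such as "o3"
# or "gpt-5" appear only if they are enabled for the account.
for model in client.models.list():
    print(model.id)
```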
GPT-4.5 remains exclusive to Pro subscribers due to its substantial computational requirements and associated GPU costs. Graphics processing units (GPUs) provide the compute that runs AI models, and more sophisticated models need significantly more of this expensive hardware to operate effectively.
Altman revealed plans to update GPT-5’s conversational personality to feel “warmer” without becoming as polarizing as GPT-4o’s tone. Many users found GPT-4o either too casual or overly familiar, while others appreciated its more conversational approach.
Looking ahead, OpenAI plans to introduce per-user customization options, allowing individuals to tailor the AI’s communication style to their preferences. This could include adjusting formality levels, response length, or even personality traits to match different professional contexts.
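Those controls have not shipped yet, so the sketch below is only a rough stand-in: API users can already approximate a preferred style by stating it in a system message. The wording, model name, and limits here are illustrative, not official personality settings:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Approximate a per-user style preference by declaring it up front.
# The instruction text is illustrative, not an official customization option.
response = client.chat.completions.create(
    model="gpt-5",  # assumed API name for GPT-5
    messages=[
        {"role": "system",
         "content": "Use a warm but professional tone and keep answers under 150 words."},
        {"role": "user",
         "content": "Draft a short status update for my project team."},
    ],
)
print(response.choices[0].message.content)
```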
These updates primarily benefit ChatGPT’s paid subscribers, who gain significantly more flexibility in how they interact with the AI. Business users can now choose appropriate speed modes for different tasks—Fast for quick email drafts, Thinking for strategic analysis, and Auto for general productivity work.
The expanded context limits particularly benefit professionals working with lengthy documents, researchers analyzing large datasets, or teams maintaining extended project discussions with ChatGPT. Previously, these users had to repeatedly re-establish context as conversations exceeded length limits.
For organizations evaluating AI tools, these changes signal OpenAI’s commitment to responsive product development and user feedback integration. The rapid implementation of user-requested features demonstrates the company’s focus on maintaining competitive positioning in the increasingly crowded AI assistant market.
The updates reflect a broader trend in AI development where user experience considerations increasingly drive technical decisions. Rather than forcing users to adapt to new model capabilities, OpenAI is adapting its technology to meet diverse user preferences and workflows.