OpenAI is rolling out significant updates to ChatGPT in direct response to user complaints about its latest GPT-5 model. The changes, announced by CEO Sam Altman on X, address key frustrations around limited flexibility, removed features, and the model’s interaction style.
The updates come after users criticized OpenAI for removing popular features when GPT-5 launched, particularly the disappearance of GPT-4o from the model selection menu and restrictive usage limits on the new reasoning-focused capabilities. The changes rank among the fastest responses to user feedback in the company's recent history.
New speed modes give users control over performance
ChatGPT now offers three distinct speed modes for GPT-5: Auto, Fast, and Thinking. Each mode optimizes the AI’s processing approach for different types of tasks.
Auto mode serves as the default option, automatically balancing response speed with analytical depth based on the complexity of your request. When you ask a simple question like “What’s the weather like?”, Auto delivers a quick response. For more complex requests requiring analysis, it takes additional time to provide thorough answers.
Fast mode prioritizes speed above all else, making it ideal for quick brainstorming sessions, simple writing tasks, or when you need rapid-fire responses during time-sensitive work. Think of it as ChatGPT’s equivalent of a quick Google search—immediate but potentially less comprehensive.
Thinking mode represents the opposite approach, dedicating extra processing time to complex reasoning tasks. This mode excels at mathematical problems, detailed analysis, strategic planning, and multi-step logical challenges. While responses take longer, the quality of reasoning typically improves significantly for sophisticated requests.
Expanded limits address usage frustrations
OpenAI has substantially raised the weekly message cap for GPT-5 Thinking mode to 3,000 messages. This change addresses complaints from heavy users who frequently hit usage limits during intensive work sessions.
When users exceed the 3,000-message threshold, they gain access to GPT-5 Thinking mini, a streamlined version that maintains reasoning capabilities while managing computational costs. This tiered approach ensures continued access rather than complete cutoffs.
The Thinking model now supports a context limit of 196,000 tokens—roughly equivalent to 150,000 words or about 300 pages of text. In practical terms, this means ChatGPT can now process entire business reports, lengthy research papers, or extended conversation histories without losing track of earlier context. Previously, users had to break large documents into smaller chunks for analysis.
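The 196,000-token figure maps to words and pages through rough rules of thumb rather than exact conversion. A short sketch of the arithmetic, assuming roughly 0.75 English words per token and about 500 words per printed page (both are approximations; actual tokenization varies by text):

```python
# Back-of-envelope conversion from tokens to words and pages.
# Both ratios are assumptions: ~0.75 English words per token and
# ~500 words per page; real tokenizer output depends on the text.
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 500

def tokens_to_words(tokens: int) -> int:
    """Estimate the English word count for a given token count."""
    return round(tokens * WORDS_PER_TOKEN)

def tokens_to_pages(tokens: int) -> int:
    """Estimate the page count for a given token count."""
    return round(tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE)

print(tokens_to_words(196_000))  # 147000 -- roughly the "150,000 words" cited
print(tokens_to_pages(196_000))  # 294 -- roughly the "300 pages" cited
```

The estimates land close to the figures quoted above, which is all such heuristics are meant to do; for precise counts you would run the actual tokenizer on the document.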
Model selection returns with improvements
Following significant user backlash, GPT-4o has returned to the model picker for all paid ChatGPT subscribers. OpenAI had removed this popular model when GPT-5 launched, forcing users to rely solely on the newer system despite many preferring GPT-4o’s response style and speed.
A new “Show additional models” toggle in ChatGPT’s web settings now provides access to the full range of available models, including o3, 4.1, and GPT-5 Thinking mini. This approach keeps the interface clean for casual users while giving power users access to specialized models.
GPT-4.5 remains exclusive to Pro subscribers due to its substantial computational requirements. The graphics processing units (GPUs) that run AI models are expensive, and more sophisticated models demand significantly more of that hardware to operate effectively.
Personality adjustments on the horizon
Altman revealed plans to update GPT-5's conversational personality to feel "warmer" without becoming as polarizing as GPT-4o's tone, which some users found too casual or overly familiar while others appreciated its more conversational approach.
Looking ahead, OpenAI plans to introduce per-user customization options, allowing individuals to tailor the AI’s communication style to their preferences. This could include adjusting formality levels, response length, or even personality traits to match different professional contexts.
What this means for users
These updates primarily benefit ChatGPT’s paid subscribers, who gain significantly more flexibility in how they interact with the AI. Business users can now choose appropriate speed modes for different tasks—Fast for quick email drafts, Thinking for strategic analysis, and Auto for general productivity work.
The expanded context limits particularly benefit professionals working with lengthy documents, researchers analyzing large datasets, or teams maintaining extended project discussions with ChatGPT. Previously, these users had to repeatedly re-establish context as conversations exceeded length limits.
For organizations evaluating AI tools, these changes signal OpenAI’s commitment to responsive product development and user feedback integration. The rapid implementation of user-requested features demonstrates the company’s focus on maintaining competitive positioning in the increasingly crowded AI assistant market.
The updates reflect a broader trend in AI development where user experience considerations increasingly drive technical decisions. Rather than forcing users to adapt to new model capabilities, OpenAI is adapting its technology to meet diverse user preferences and workflows.