OpenAI appears to be testing GPT-5, based on references found in leaked configuration files and internal biosecurity tools, with engineer Tibor Blaho sharing a partial screenshot hinting at “GPT-5 Reasoning Alpha” dated July 13, 2025. The anticipated model promises to unify memory, reasoning, vision, and task completion in a single system, potentially transforming how users interact with AI by handling complex multi-step requests from a single prompt.
What you should know: Multiple sources point to GPT-5 being in active testing phases, though no official release date has been announced.
- A leaked config file referenced “GPT-5 Reasoning Alpha,” while independent researchers discovered mentions of GPT-5 in OpenAI’s internal BioSec Benchmark repository.
- OpenAI’s Xikun Zhang explicitly confirmed that GPT-5 “is coming” during discussions about the new ChatGPT Agent feature.
- The model is expected to launch within the next few months, likely rolling out first to higher-tier ChatGPT subscribers.
The big picture: GPT-5 represents a fundamental shift toward unified AI capabilities rather than switching between separate features.
- Users could theoretically ask it to interpret an image, send an email, schedule a meeting, and produce a spoken summary, all from a single prompt.
- A parent could coordinate school schedules, meal plans, and birthday party logistics simultaneously, or someone could plan a trip, book hotels, update their calendar, and email family details in one request.
- The model reportedly features a million-token context window and enhanced long-term memory capabilities.
Why this matters: GPT-5 is designed to address persistent issues with hallucination and misreading of nuance that have limited trust in current AI models.
- The unified approach could eliminate the need to switch between different AI tools for various tasks.
- Enhanced memory features build on OpenAI’s quiet rollout of long-term memory in ChatGPT, potentially making the AI more personalized and contextually aware.
- Feedback from ChatGPT Agent users may be incorporated into GPT-5’s final training.
Safety concerns: Testing in biosecurity contexts has raised questions about potential risks and safeguards.
- If GPT-5 can reason about biology well enough for complex research, concerns exist about whether it could provide dangerous information if prompted inappropriately.
- OpenAI has promised built-in safeguards, though history shows users often find ways around such guardrails.
- The company is likely waiting to ensure the model won’t face embarrassing failures upon launch, following lessons from previous releases.
Rumors of GPT-5 are multiplying as the expected release date approaches.