OpenAI has launched ChatGPT agent mode, a new capability that allows ChatGPT to perform complex, multi-step tasks autonomously rather than just providing text-based responses. Available to Pro, Plus, and Team users, the feature enables the AI to navigate websites, compile research, create presentations, and complete real-world assignments while maintaining user control and approval throughout the process.
What you should know: ChatGPT agent mode transforms the AI from a conversational assistant into an active task executor that can work across multiple platforms and applications.
- Users can ask ChatGPT to review calendars and summarize upcoming meetings, plan recipes and purchase ingredients, or analyze competitors and compile slide decks.
- The agent can navigate websites, securely log in with user permission, run code, and deliver outputs in editable formats like spreadsheets or slides.
- Users retain control throughout, with ChatGPT requesting explicit approval before submitting forms or handling sensitive information.
How it works: The new capability combines three core OpenAI technologies to enable seamless task completion.
- Operator handles web navigation and site interaction.
- Deep research synthesizes information across multiple sources.
- ChatGPT’s core model provides natural language understanding and reasoning.
In plain English: Think of it like having a digital assistant that can actually use your computer—clicking through websites, gathering information from different sources, and putting it all together into useful documents, while still asking for your permission before doing anything important.
Key details: Users can access the feature through any ChatGPT conversation by selecting "agent mode" from the tools dropdown.
- Once activated, the system can carry out multi-step workflows that typically require switching between apps, browser tabs, or tools.
- The feature is designed to handle complex assignments from start to finish using a virtual computer environment.
Safety measures: OpenAI has implemented comprehensive safeguards to prevent misuse and maintain user control.
- The agent will not take high-risk actions, such as sending emails, making purchases, or offering legal or financial advice, without user approval.
- It has been trained to recognize and reject malicious or ambiguous instructions and alerts users to uncertainty or potentially sensitive actions.
- OpenAI has deployed always-on classifiers, refusal training for dual-use scenarios, and enforcement pipelines to prevent misuse, particularly involving biological or chemical threats.
What they’re saying: OpenAI emphasized its cautious approach to potential risks, even without direct evidence of harm.
- “We don’t have direct evidence the model could help a novice create severe biological or chemical harm,” OpenAI noted, “but we are exercising caution.”
The big picture: This release represents a fundamental shift from conversational AI assistance to hands-on task execution, with OpenAI positioning it as an early step in expanding agentic AI capabilities.
- The company plans to regularly add new features and improvements over time to make ChatGPT more versatile for a broader set of users.
- The development signals the evolution of AI from answering questions to actively completing work, while maintaining human oversight and control.