12 Days of OpenAI: The complete guide to daily AI breakthroughs and launches
OpenAI unwraps a series of groundbreaking AI announcements in their special year-end showcase starting at 10am PT daily.
In a festive twist on traditional tech announcements, OpenAI has launched its “12 Days of OpenAI” event, turning the end of 2024 into an AI innovation showcase. Each day at 10am PT, the company behind ChatGPT and DALL·E unveils new developments reshaping the artificial intelligence landscape. As the event unfolds, we’ll document each day’s announcements here, giving you comprehensive coverage of everything OpenAI ships.
Day 1:
- OpenAI launched ChatGPT Pro, a $200/month plan offering unlimited access to its most advanced models and enhanced productivity tools. This subscription is designed for researchers and professionals looking to push the boundaries of AI tools.
- OpenAI launched the full version of its advanced reasoning model, o1, now capable of processing both images and text. The updated o1 is faster, completing tasks in under half the time of o1-preview, and makes major mistakes 34% less often.
- OpenAI then released the o1 System Card, detailing safety measures like external red teaming and risk evaluations for o1 and o1-mini. The scorecard rates key risks, allowing deployment only for models scoring “medium” or below post-mitigation.
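The gating rule the System Card describes (deploy only if every tracked risk category scores “medium” or below after mitigations) can be sketched as a small helper. The category names and scores below are hypothetical, not OpenAI’s internal schema; only the rule itself comes from the scorecard:

```python
# Illustrative sketch of the "medium or below" deployment gate described above.
# Category names and scores are made up for the example; the rule is the point.

RISK_ORDER = ["low", "medium", "high", "critical"]

def can_deploy(post_mitigation_scores: dict[str, str]) -> bool:
    """Return True only if no risk category exceeds 'medium' after mitigations."""
    threshold = RISK_ORDER.index("medium")
    return all(RISK_ORDER.index(s) <= threshold
               for s in post_mitigation_scores.values())

scores = {"cybersecurity": "low", "persuasion": "medium", "model_autonomy": "low"}
print(can_deploy(scores))  # every category at or below "medium" passes the gate
```

A single “high” post-mitigation score in any category would flip the result to False and block deployment under this rule.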
Day 2:
- OpenAI launched the Reinforcement Fine-Tuning Research Program, offering alpha access to developers, researchers, and enterprises to customize models for complex, domain-specific tasks in fields like law, healthcare, and engineering. Participants will shape the technology by providing feedback and datasets ahead of its public release in early 2025.
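What distinguishes reinforcement fine-tuning from supervised fine-tuning is that each model answer is scored by a grader, and that scalar score, rather than a verbatim target, drives the update. The program’s API was not public at announcement time, so the sketch below is purely conceptual: a toy grader that maps an answer and a reference to a reward in [0, 1], the kind of signal an RFT job would optimize.

```python
# Conceptual sketch of the grading signal behind reinforcement fine-tuning.
# Nothing here is OpenAI's actual API; it only illustrates that a grader maps
# (answer, reference) to a scalar reward the training loop can reinforce.

def grade(answer: str, reference: str) -> float:
    """Toy grader: fraction of reference tokens that appear in the answer."""
    ref_tokens = reference.lower().split()
    if not ref_tokens:
        return 0.0
    answer_tokens = answer.lower().split()
    hits = sum(tok in answer_tokens for tok in ref_tokens)
    return hits / len(ref_tokens)

# An RFT loop would sample several answers per prompt and reinforce high scorers.
samples = ["the gene is BRCA1", "unknown gene"]
rewards = [grade(s, "BRCA1") for s in samples]
print(rewards)
```

Real graders for domains like law or medicine would be far richer (rubrics, exact-match IDs, partial credit), but the shape of the feedback loop is the same.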
Day 3:
- OpenAI has launched Sora, their groundbreaking video generation model that can create realistic videos from text descriptions and is now available to ChatGPT Plus and Pro users. The new version, called Sora Turbo, is significantly faster than the version previewed in February and allows users to generate videos up to 1080p resolution and 20 seconds long in various aspect ratios. The platform includes built-in safeguards like C2PA metadata and visible watermarks by default, while offering features like a storyboard tool for precise frame control and the ability to blend existing assets with AI-generated content.
Day 4:
- OpenAI has launched Canvas, a collaborative writing and coding tool now available to all ChatGPT users regardless of subscription plan. The tool provides a side-by-side interface where users can edit documents alongside ChatGPT, with features like inline comments, formatting options, and the ability to track changes. Canvas also includes Python code execution with immediate feedback and visualization support, letting users run and debug code directly within the interface via a WebAssembly-based Python emulator. Additionally, OpenAI has integrated Canvas into custom GPTs, enabling developers to create specialized applications that automatically open Canvas when appropriate, as demonstrated with a Santa letter-writing GPT example.
Day 5:
- OpenAI has launched ChatGPT integration across Apple devices, allowing users to access ChatGPT through Siri, Writing Tools, and Camera Control on iPhone, iPad, and macOS. The integration lets users invoke ChatGPT directly from the operating system, with features including document analysis, visual intelligence for camera-captured images, and seamless conversation continuation between devices. Users can enable the feature through Apple Intelligence settings and use ChatGPT either anonymously or with an account, making the assistant more accessible and frictionless across Apple’s ecosystem.
Day 6:
- OpenAI has launched video and screen-sharing capabilities in Advanced Voice mode for ChatGPT, allowing users to have real-time visual conversations and share their screens during interactions. They’ve also introduced a Santa persona in ChatGPT that speaks with a jolly voice and shares North Pole stories throughout December. Video and screen sharing are rolling out to Team users and most Plus and Pro subscribers (European Plus/Pro users will get access later), while the Santa persona is available globally wherever ChatGPT voice mode is supported.
Day 7:
- OpenAI has launched “Projects” in ChatGPT, a new feature that lets users organize conversations, upload files, set custom instructions, and tailor ChatGPT interactions to each project. The feature, demonstrated through examples like organizing a Secret Santa gift exchange and maintaining home documentation, integrates with existing ChatGPT capabilities like web search, conversation search, and Canvas. Projects is rolling out to Plus, Pro, and Team users immediately, with plans to extend to free users soon and Enterprise/Edu users in early 2025.
Day 8:
- OpenAI has launched ChatGPT search for all logged-in free users globally, allowing them to access real-time information and search the web directly within conversations across all platforms. The update includes faster performance, better mobile optimization, a new maps experience, and the ability to set ChatGPT as a default search engine in browsers. Additionally, OpenAI has integrated search with Advanced Voice mode, enabling users to pull up-to-date web information into voice conversations with ChatGPT, with this feature rolling out over the following days.
Day 9:
- OpenAI has brought o1 (previously available only in preview) to its API, with new features including function calling, structured outputs, developer messages, reasoning-effort control, and vision capabilities. The company also announced WebRTC support for its Realtime API, making it easier to build voice applications, along with a 60% cost reduction for GPT-4o audio tokens and GPT-4o mini audio support at roughly a tenth of the price. Additionally, OpenAI introduced preference fine-tuning using direct preference optimization, released new SDKs for Go and Java, and simplified its API key acquisition process.
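The o1 API features named above can be illustrated by the request body such a call would carry. The sketch below only constructs the payload; the `city_answer` schema is a minimal hypothetical example, and actually sending the request requires the OpenAI SDK plus an API key, both omitted here:

```python
# Sketch of a Chat Completions request body exercising the announced o1 API
# features: a developer message, reasoning-effort control, and a structured-
# output JSON Schema. This only builds the payload; no network call is made.
import json

payload = {
    "model": "o1",
    "reasoning_effort": "medium",  # accepted values: low | medium | high
    "messages": [
        {"role": "developer", "content": "Answer with a single city name."},
        {"role": "user", "content": "Capital of France?"},
    ],
    "response_format": {           # structured outputs via JSON Schema
        "type": "json_schema",
        "json_schema": {
            "name": "city_answer",             # hypothetical schema name
            "schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
                "additionalProperties": False,
            },
            "strict": True,
        },
    },
}

print(json.dumps(payload, indent=2)[:80])  # preview the serialized request
```

With the official Python SDK, a payload shaped like this maps onto `client.chat.completions.create(**payload)`; strict structured outputs constrain the model to emit JSON matching the schema.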
Day 10:
- OpenAI has launched voice calling (via 1-800-CHATGPT) and WhatsApp messaging capabilities for ChatGPT, making the AI assistant accessible through traditional phone calls in the US and WhatsApp messaging globally. The phone service works on any phone type – including smartphones, flip phones, and even rotary phones – while the WhatsApp integration currently supports text-only conversations, with features like image chat planned for the future. The new services are part of OpenAI’s mission to make artificial general intelligence beneficial and accessible to humanity, with users getting 15 minutes of free calling per month on the phone service, while the WhatsApp service can be accessed immediately by scanning a QR code.
Day 11:
- OpenAI has launched significant updates to its ChatGPT desktop applications, enabling direct interaction with various desktop apps including terminal emulators, IDEs, and writing applications like Notion and Apple Notes. The updates, announced during Day 11 of their December series, include features like advanced voice mode, web search capabilities, and support for OpenAI’s latest models. These features are immediately available for macOS users with Windows support coming soon, marking a step toward OpenAI’s vision of making ChatGPT more “agentic” and actively helpful in users’ daily work.
Day 12:
- OpenAI has announced two new frontier models, o3 and o3-mini, during Day 12 of its December event series. While not immediately available for public use, OpenAI is opening access for public safety testing starting today through January 10th. o3 demonstrates exceptional performance across technical benchmarks, achieving 71.7% accuracy on software engineering tasks (more than 20 points better than o1) and setting a new state-of-the-art score of 87.5% on the ARC-AGI benchmark, surpassing the human performance threshold. o3-mini, designed for cost-efficient reasoning, matches or exceeds o1’s performance at a fraction of the cost. OpenAI plans to launch o3-mini around the end of January, with o3 following shortly after, pending safety-testing results.