I’ve been using OpenAI’s models since the playground days, back when you had to know what you were doing just to get them running. This was before ChatGPT became a household name, when most people had never heard of a “large language model.” Those early experiments felt like glimpsing the future.
So when OpenAI suddenly removed eight models from user accounts last week, including GPT-4o, it hit different than it would for someone who just started using ChatGPT last month. This wasn’t just a product change. It felt like losing an old friend.
The thing about AI right now is that it’s moving so fast that even the companies building it seem surprised by what happens. Sam Altman admitted the GPT-5 rollout was “a little more bumpy than we hoped for” — which is CEO-speak for “we messed up pretty badly.”
Here’s what actually happened: OpenAI’s new automatic “router” that assigns user prompts to one of four GPT-5 variants was “out of commission for a chunk of the day,” causing GPT-5 to appear “way dumber” than intended. Meanwhile, users were hitting new usage limits that made paid subscriptions feel like free trials.
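To make that failure mode concrete, here’s a toy sketch of what a prompt router does. To be clear, OpenAI hasn’t published how its real router works; the variant names and heuristics below are invented purely for illustration:

```python
# Toy illustration of the routing pattern described above -- NOT OpenAI's
# actual router. Variant names and heuristics are invented for this sketch.

def route_prompt(prompt: str) -> str:
    """Pick a model variant based on crude features of the prompt."""
    text = prompt.lower()
    if any(kw in text for kw in ("prove", "step by step", "debug")):
        return "gpt5-thinking"      # slower, more deliberate variant
    if len(prompt) > 2000:
        return "gpt5-long-context"  # variant with a larger context window
    return "gpt5-fast"              # cheap default for everyday chat

# The launch-day failure: when routing breaks, everything can land on the
# cheap default, and the product suddenly looks "way dumber" than intended.
```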
The pattern here is telling. Companies in this space are moving so fast that they’re essentially doing live product testing on millions of users. And that’s both the good news and the bad news.
Here’s where it gets weird. Right as regular users were posting screenshots of GPT-5’s bizarre errors and launching Change.org petitions, a carefully orchestrated PR campaign was running in parallel.
Just weeks before the chaotic rollout, OpenAI had invited five select developers—including Theo Browne—to their headquarters for a “preview event.” They were given early access to the new models and filmed by a professional camera crew while experimenting with GPT-5. All participants had to sign NDAs and video release waivers.
The resulting promotional video went live on YouTube around the same time users were revolting. So while your average ChatGPT user was discovering their workflows had been broken overnight, a curated group of influencers was showcasing GPT-5’s capabilities in a controlled, professional setting.
Now, I generally enjoy Theo’s content—his takes on tech are usually solid and his coding videos are genuinely helpful. But watching his enthusiastic praise of GPT-5 while Twitter was flooded with user complaints felt… cringeworthy. The stark difference between his polished preview experience and the reality that regular users were experiencing made the whole thing feel like watching a commercial while your house is on fire.
The gap between the polished preview experience and the reality of the public rollout highlights something deeper about how these launches actually work.
As someone who’s watched this space evolve from the beginning, I find the leadership dynamics… interesting. Altman acknowledged that “suddenly deprecating old models that users depended on in their workflows was a mistake.” But the fact that this mistake happened at all says something about how these companies are operating.
The good side: they do listen and adjust quickly. Within 24 hours, OpenAI brought back GPT-4o for Plus subscribers and doubled GPT-5 limits. That’s actually impressive responsiveness for a company this size.
The concerning side: users reported basic errors like maps labeling Oklahoma as “Gelahbrin” and math problems solved incorrectly, despite Altman’s claims about “PhD-level intelligence.” When your latest model can’t correctly label Oklahoma on a map, it casts doubt on the internal testing processes.
The real issue isn’t that GPT-5 had problems. It’s that LLM technology is fundamentally different from any other software we’ve used before.
When Microsoft updates Excel, you might be annoyed about moved menus or changed shortcuts. When OpenAI changes your AI writing partner, it feels personal. Users—myself included—build a conversational reliance with these models that goes way beyond traditional software interactions. You learn their quirks, their strengths, how to phrase prompts to get the best results. It becomes a creative partnership.
This is new territory. We’ve never had tools that felt collaborative in this way. When you build your workflow around a tool with a particular “personality” and way of reasoning, sudden changes break more than just code; they break partnerships users have spent months developing.
This goes beyond just developers and creators. Millions of people have started incorporating these AI conversations into their daily thinking—brainstorming life decisions, working through problems, even seeking emotional support. When you suddenly change the personality they’ve grown comfortable talking to, you’re not just breaking a workflow, you’re disrupting a relationship they might not even realize they’ve formed.
This kind of messiness in the core product makes you wonder what else might be unstable behind the scenes. If the autoswitcher can break on launch day, what about data handling? Training processes? Safety measures? For those of us building businesses around these tools, that uncertainty is expensive.
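You can hedge at least some of that risk in code. Two standard defensive moves for anyone building on these APIs are pinning a dated model snapshot instead of a floating alias, and falling back to a second model when the first call fails. Here’s a minimal sketch using the OpenAI Python SDK (the model IDs are examples; availability varies by account and date):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Pin dated snapshots rather than floating aliases like "gpt-5", so a
# vendor-side swap can't silently change your app's behavior overnight.
# These IDs are examples; check what your account currently offers.
PRIMARY = "gpt-4o-2024-08-06"
FALLBACK = "gpt-4o-mini"

def ask(prompt: str) -> str:
    """Try the pinned primary model; fall back if the call fails."""
    last_error = None
    for model in (PRIMARY, FALLBACK):
        try:
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content
        except Exception as exc:  # in production, catch narrower errors and log them
            last_error = exc
    raise RuntimeError(f"All configured models failed: {last_error}")
```

None of this restores a deprecated model, of course. It just means a vendor-side change degrades your product gracefully instead of breaking it outright.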
But here’s what OpenAI and other LLM vendors need to understand: the world has changed. They can’t treat model updates like traditional software releases anymore. Users form genuine attachments to these AI personalities and workflows. Every major model change needs to be handled with the care of a major life transition, not a routine software patch.
The conversational nature of these tools creates a responsibility that software companies have never had to deal with before.
Despite the chaos, I’m still optimistic about what we can build with AI. But watching this rollout taught me something important: these companies are figuring it out as they go, just like the rest of us.
Altman believes OpenAI has “a good shot at getting this right” when it comes to balancing AI personality and functions. Maybe he’s right. The fact that they reversed course so quickly suggests they understand what’s at stake.
But the GPT-5 launch revealed a fundamental challenge that goes beyond OpenAI. LLM vendors are dealing with something unprecedented: software that users talk to, confide in, and build genuine working relationships with. This creates obligations that traditional software companies never had to consider.
The future of AI isn’t just about making smarter models. It’s about building them in a way that respects the unique bonds users form with these tools. That means better testing, more gradual rollouts, and perhaps most importantly, acknowledging that changing someone’s AI collaborator is more like replacing a colleague than upgrading software.
For now, I’m keeping backups of my workflows and watching the Sam tweets a little more carefully. Because in AI, your favorite tool can become a stranger overnight — and the industry is still learning that’s not okay.
We’ll get there. But the bumpy road ahead isn’t just about technical challenges. It’s about learning to handle the most human thing about artificial intelligence: the relationships we build with it.