OpenAI’s automated customer support bot has been caught hallucinating features that don’t exist in the ChatGPT app, including falsely claiming users can report bugs directly within the application. The incident highlights significant gaps in AI-powered customer service and raises questions about how reliably companies can use their own AI tools for user support.
What you should know: A ZDNET investigation revealed that OpenAI’s support bot consistently provided incorrect information about ChatGPT’s functionality when asked how to report a bug.
- The bot repeatedly suggested users could “report this bug directly from within the ChatGPT app (usually under Account or Support > Report a Problem/Send Feedback),” despite no such feature existing.
- When confronted with the error, the bot acknowledged: “Thank you for pointing that out — you’re correct. The ChatGPT iPad app currently only allows reporting individual messages or chats.”
- OpenAI confirmed via email that they “don’t currently offer a dedicated consumer-facing bug reporting portal for the ChatGPT app.”
The original problem: The investigation began when ZDNET Senior Contributing Writer Tiernan Ray discovered a reproducible bug in the iPad Pro version of ChatGPT that caused the app to freeze when resizing windows.
- Resizing the app to full-screen or near-full-screen mode rendered it completely unresponsive, with interface elements becoming stretched and touch functions failing.
- The bug was reliably recreated across multiple iPad Pro devices and persists in the latest app version, 1.2025.280.
Email support failed too: OpenAI’s automated email support system repeated the same hallucinated advice about non-existent bug reporting features.
- Automated support emails from OpenAI provided identical incorrect instructions about reporting bugs within the app.
- The automated system again conceded the error when challenged: “Thank you for pointing that out — you’re correct.”
Why this matters: The incident exposes fundamental flaws in AI-powered customer service implementations, particularly when companies deploy their own AI tools without adequate training or oversight.
- OpenAI’s support bot, presumably trained on the company’s development documents, lacked accurate knowledge about basic app features.
- The lack of bug reporting functionality is “surprising, as it’s a standard offering with most software,” according to the investigation.
What they’re saying: An OpenAI spokesperson confirmed the limitations while defending the company’s approach:
- “We don’t currently offer a dedicated consumer-facing bug reporting portal for the ChatGPT app. People can find troubleshooting steps and ways to share feedback through our official Help Center at help.openai.com.”
- “We’re working to make the automated support replies more accurate and effective.”
Recent improvements: Follow-up testing showed the support bot has become more accurate about the app’s actual capabilities.
- The bot now states upfront that there’s no way to report bugs in the app and asks for problem descriptions that can be forwarded to developers.
- This suggests OpenAI has updated the bot’s training data to reflect accurate information about ChatGPT’s features.
The bigger picture: The incident raises concerns about the widespread adoption of AI for customer service roles, particularly when companies haven’t adequately trained their systems on accurate product information.