Quick info lookups, practicalities comprise majority of ChatGPT usage

Three heavyweight studies have landed, pulling back the curtain on what artificial intelligence usage actually looks like in practice. Reports from OpenAI, Anthropic, and Ipsos, a global market research firm, provide something rare in the AI hype cycle: concrete evidence about who uses these systems, what they do with them, and how the public really feels about this technology.

OpenAI released usage data from more than one million ChatGPT conversations spanning mid-2024 to mid-2025. Anthropic published analysis of Claude AI usage statistics in its Economic Index, including enterprise API traffic—the behind-the-scenes data streams that power business applications. Meanwhile, Ipsos surveyed over 23,000 adults across 30 countries for its AI Monitor 2025.

The Ipsos study proves particularly valuable because it confronts the gap between what people say and what they actually do. Economists call this the difference between stated and revealed preferences—the phenomenon where consumers declare one intention in surveys but behave differently in practice. The same gap runs through AI adoption, where the usage logs from OpenAI and Anthropic often contradict what people report in surveys.

The mundane reality of AI applications

OpenAI’s dataset of over one million ChatGPT conversations reveals something sobering: people aren’t using AI to plan moon colonies or unlock superintelligence. They’re asking for writing help, practical guidance, and quick information lookups. These three categories alone account for nearly 80% of ChatGPT traffic. Computer programming represents just 4% of usage, while therapy-like reflection barely reaches 2%.

Even in professional settings, writing assistance dominates—but not the sophisticated content creation featured in marketing campaigns. Two-thirds of writing-related queries involve people asking the system to polish something they’ve already written, like editing emails or refining reports.

Anthropic’s Claude paints a similar picture with different emphases. Coding leads at 36% of usage, but education and science are rising rapidly to 12.4% and 7.2% respectively. Claude users also tend to delegate complete tasks more often, providing directives like “create this presentation” rather than engaging in step-by-step collaboration.

Across both platforms, exotic use cases exist in the long tail of applications, but adoption clusters around obvious sweet spots—tasks where AI models perform reliably and barriers to entry remain low. The science fiction scenarios remain mostly confined to marketing materials.

The work versus personal usage divide

Here’s where the data becomes complex and seemingly contradictory. OpenAI reports that ChatGPT’s workplace usage dropped from 40% to 28% over the past year, while personal experimentation jumped to nearly three-quarters of all interactions. Ipsos confirms this broad perception across many countries, where AI feels more like a personal assistant than an enterprise backbone.

However, Anthropic tells a different story entirely. Its enterprise API data—the technical infrastructure that businesses use to integrate AI into their operations—suggests U.S. workplace adoption is surging. The company reports that 40% of American employees now use AI at work, up dramatically from just 20% in 2023. These API logs reveal concentrated, automation-heavy deployments: debugging web applications, building business software, and even designing AI systems themselves.

The apparent contradiction resolves when you understand the different types of AI usage. Chat interfaces like ChatGPT serve casual users and side projects—the visible tip of the iceberg. APIs represent where serious business implementation happens, often invisibly integrated into existing workflows and applications. As Anthropic’s report warns, “whether today’s narrow, automation-heavy adoption evolves toward broader deployment will likely determine AI’s future economic impacts.”
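To make that distinction concrete, the snippet below is a minimal, hypothetical sketch of the kind of programmatic call that sits behind an enterprise integration—written against Anthropic's Python SDK, with a placeholder model name and prompt rather than anything drawn from the reports.

```python
# Hypothetical sketch: an API call of the sort that powers enterprise
# integrations, as opposed to typing into a chat window. Assumes the official
# `anthropic` Python SDK and an ANTHROPIC_API_KEY environment variable; the
# model name and prompt are placeholders, not taken from the reports.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model identifier
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Summarize this support ticket and propose a fix."}
    ],
)

print(response.content[0].text)
```

Calls like this run inside back-office scripts, ticketing systems, and product backends, which is why this layer of adoption stays largely invisible to the people answering consumer surveys.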

This suggests the adoption curve isn’t simply about decline versus growth, but rather about which type of usage becomes dominant in the long term.

The trust paradox in AI adoption

Ipsos’s global survey reveals striking ambivalence toward AI governance and safety. More than half (54%) of respondents trust governments to regulate AI responsibly, while only 48% trust companies to protect their personal data. The margin is narrow but telling, especially given that private companies currently drive most AI development and deployment.

This paradox became visible at the Paris AI Summit, where OpenAI CEO Sam Altman embodied the conflicted relationship between safety concerns and market demands. “Safety is integral to what we do,” Altman told attendees. “We’ve got to make these systems really safe for people, or people just won’t use them.”

Yet moments later, he acknowledged a different reality: “That’s not actually the main thing that we’ve been hearing about—the main concern has been ‘can we make this cheaper, can you have more of it, can we get it better and more advanced?'”

Safety gets mentioned but not emphasized, while themes of scale, cost, and capability dominate actual customer conversations. The paradox deepens when you consider that people express distrust of AI companies in surveys, yet usage data shows they continue rewarding these same companies with daily reliance and engagement.

Hidden barriers to enterprise AI adoption

Why hasn’t corporate AI adoption reached the mainstream penetration that early predictions suggested? Anthropic’s analysis is refreshingly direct: realizing productivity gains depends less on cutting-edge AI capabilities than on the unglamorous details of implementation.

Profitable AI adoption often requires expensive restructuring of business processes, extensive worker retraining, and significant upfront investments. Consider a law firm wanting to use AI for contract review. The technology exists, but successful deployment means redesigning workflows, training lawyers on new tools, establishing quality control processes, and potentially restructuring how the firm prices its services. These organizational changes can cost more than the AI technology itself.

Another critical bottleneck involves context—the background information AI systems need to deliver useful results in complex business environments. For AI to excel in high-stakes settings, it requires rich, well-structured data tailored to specific tasks. Many companies can’t yet provide this context effectively, as it often demands costly data modernization projects and organizational restructuring that makes deployment slower and more expensive than promotional materials suggest.
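As a rough illustration of that context bottleneck—hypothetical, and not drawn from any of the three reports—the sketch below shows how a contract-review request might bundle firm-specific material such as playbook rules into the prompt; assembling that material is usually the hard part, not the model call itself.

```python
# Hypothetical sketch of supplying business context to a model for contract
# review. The helper, rules, and clause text are invented for illustration;
# the firm-specific context is what makes the output useful.
import anthropic

client = anthropic.Anthropic()

def review_clause(clause_text: str, playbook_rules: list[str]) -> str:
    """Review one contract clause against the firm's own playbook rules."""
    rules = "\n".join(f"- {rule}" for rule in playbook_rules)
    prompt = (
        "You are reviewing a contract clause for a law firm.\n"
        f"Firm playbook rules:\n{rules}\n\n"
        f"Clause under review:\n{clause_text}\n\n"
        "Flag any conflicts with the playbook and suggest redlines."
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model identifier
        max_tokens=800,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# Pulling `playbook_rules` out of scattered documents and legacy systems is the
# expensive data-modernization work described above—not these few lines of code.
```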

Individual adoption faces different obstacles. Ipsos data shows usage remains concentrated among young, male, well-educated users, creating disparities in who benefits first and who gets left behind. The irony is that the most common personal use cases—seeking guidance and information—are precisely the applications most vulnerable to AI hallucinations and misinformation.

Between habit formation and hesitation

Taken together, these three studies sketch a clear picture of AI’s current reality. The technology is predominantly used for ordinary tasks: retrieving information, editing communications, and debugging code. OpenAI’s logs suggest ChatGPT’s workplace usage is declining, while Ipsos finds AI perceived primarily as a personal helper. Yet Anthropic’s enterprise data shows 40% of U.S. employees already incorporating AI into their work routines.

What appears contradictory may simply represent different layers of adoption: visible personal experimentation on the surface, with invisible enterprise integration occurring underneath through API connections and automated workflows.

However, a glaring paradox emerges from this data: AI adoption continues surging while trust in its builders remains limited. Perhaps the real risk isn’t whether people will abandon AI technology, but whether they’ll normalize dependence on systems they claim to distrust.

The future of artificial intelligence may not be determined by what people say in surveys or what executives promise in keynote presentations. Instead, it will likely be shaped by the accumulated weight of millions of small decisions—the mundane, practical choices people make every day when they decide what to type in the prompt box.

Here’s How People Use AI, According To OpenAI, Anthropic And Ipsos Data
