Fastly’s new report reveals that AI crawlers and fetchers are overwhelming websites, with crawlers alone accounting for roughly 80 percent of all AI bot traffic and individual fetcher bots hitting sites with more than 39,000 requests per minute. The surge is driven primarily by Meta (52% of crawler traffic) and OpenAI (98% of fetcher traffic), creating unsustainable server loads that threaten website performance and the business models of content creators.
What you should know: AI bots are fundamentally reshaping internet traffic patterns, with crawlers scraping content for training data and fetchers retrieving pages in real time to answer user queries, each creating new operational challenges for site operators.
- Fastly, a cloud services company, analyzed data from over 130,000 applications and APIs, inspecting more than 6.5 trillion requests monthly, providing comprehensive visibility into AI bot behavior.
- Meta dominates AI crawler traffic at 52 percent, followed by Google (23%) and OpenAI (20%), with these three companies controlling 95 percent of all AI crawler activity.
- OpenAI overwhelmingly leads AI fetcher traffic at nearly 98 percent, a share that reflects either ChatGPT’s early consumer dominance or a need to optimize how its infrastructure fetches content.
The traffic breakdown: AI companies split their activity very differently between crawling for training data and fetching information in real time.
- Anthropic, an AI company, accounts for just 3.76 percent of crawler traffic, while the Common Crawl Project represents only 0.21 percent, despite a mission of crawling once and sharing the data so others don’t have to duplicate the effort.
- Perplexity AI, recently accused of ignoring robots.txt directives, accounts for 1.12 percent of crawler traffic and 1.53 percent of fetcher traffic, though its share is growing.
- AI fetchers, while representing only 20 percent of total AI bot requests, can generate massive traffic spikes, with one bot recorded making more than 39,000 requests per minute.
Why this matters: The unsustainable growth threatens website infrastructure and content creator economics while undermining the very sources AI companies depend on for data.
- “Some AI bots, if not carefully engineered, can inadvertently impose an unsustainable load on webservers, leading to performance degradation, service disruption, and increased operational costs,” Fastly’s report warned.
- Small site operators serving dynamic content are most severely affected, facing operational challenges that could force them offline or behind paywalls.
Industry pushback emerges: Website operators are increasingly deploying active countermeasures as polite opt-out mechanisms like robots.txt are frequently ignored.
- Defensive tools like the proof-of-work system Anubis and the tarpit system Nepenthes are gaining adoption to make scraping computationally expensive for AI companies (a conceptual sketch of the proof-of-work idea follows this list).
- Cloudflare, a web infrastructure company, is testing a pay-per-crawl approach to create financial barriers for bot operators, while webmasters implement increasingly sophisticated blocking techniques.
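The proof-of-work approach these tools take is conceptually simple: the server hands out a random challenge, the requester must grind through hashes to answer it, and the server verifies the answer with a single cheap hash, so the cost falls on whoever is making requests at scale. The Python sketch below illustrates only that general idea; it is not Anubis’s actual protocol, and the function names and difficulty setting are assumptions made for illustration.

```python
# Conceptual proof-of-work sketch (not Anubis's real protocol): the server
# issues a challenge, the client must find a nonce whose SHA-256 hash falls
# below a difficulty target, and the server verifies with one cheap hash.
import hashlib
import os

DIFFICULTY_BITS = 20  # ~1 million hash attempts on average per request


def issue_challenge() -> str:
    """Server side: hand the requester a random challenge string."""
    return os.urandom(16).hex()


def solve_challenge(challenge: str) -> int:
    """Client side: grind nonces until the hash meets the difficulty target."""
    target = 1 << (256 - DIFFICULTY_BITS)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1


def verify(challenge: str, nonce: int) -> bool:
    """Server side: a single hash confirms the requester did the work."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY_BITS))


if __name__ == "__main__":
    challenge = issue_challenge()
    nonce = solve_challenge(challenge)   # expensive for the scraper
    assert verify(challenge, nonce)      # cheap for the site
    print(f"challenge {challenge} solved with nonce {nonce}")
```

A human visitor pays this cost once per session in the browser, while a bot issuing thousands of requests per minute pays it on every request; that asymmetry is what such tools rely on to make large-scale scraping uneconomical.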
What they’re saying: Experts emphasize the need for industry standards and responsible crawling practices while warning against premature regulation.
- “At a minimum, any reputable AI company today should be honoring robots.txt. Further and even more critically, they should publish their IP address ranges and their bots should use unique names,” Fastly’s Arun Kumar told The Register (a minimal robots.txt check is sketched after this list).
- Anubis developer Xe Iaso, CEO of Techaro, offered a stark perspective: “I can only see one thing causing this to stop: the AI bubble popping. There is simply too much hype to give people worse versions of documents, emails, and websites otherwise.”
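As a baseline illustration of the robots.txt handling Kumar describes, the sketch below shows a minimal “polite” fetcher built on Python’s standard urllib.robotparser; the bot name and URLs are hypothetical placeholders rather than any vendor’s real crawler, and a production fetcher would add caching, rate limiting, and error handling.

```python
# Minimal illustration of a fetcher that honors robots.txt before requesting
# a page. The user agent and URLs are hypothetical placeholders.
import time
from typing import Optional
from urllib import request, robotparser
from urllib.parse import urljoin, urlparse

BOT_USER_AGENT = "ExampleAIBot/1.0"  # a uniquely named, identifiable bot


def polite_fetch(page_url: str) -> Optional[bytes]:
    """Fetch page_url only if the site's robots.txt allows this user agent."""
    root = "{0.scheme}://{0.netloc}".format(urlparse(page_url))
    rp = robotparser.RobotFileParser()
    rp.set_url(urljoin(root, "/robots.txt"))
    rp.read()  # a missing robots.txt is treated as "allow all"

    if not rp.can_fetch(BOT_USER_AGENT, page_url):
        return None  # the site has opted out; respect it

    delay = rp.crawl_delay(BOT_USER_AGENT)
    if delay:
        time.sleep(delay)  # honor a declared Crawl-delay, if any

    req = request.Request(page_url, headers={"User-Agent": BOT_USER_AGENT})
    with request.urlopen(req, timeout=10) as resp:
        return resp.read()


if __name__ == "__main__":
    body = polite_fetch("https://example.com/some/article")
    print("fetched" if body else "disallowed by robots.txt")
```

Publishing stable IP ranges and unique bot names, the other practices Kumar calls for, would let site operators verify that traffic claiming a given user agent actually comes from that company.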
The regulatory question: While technical solutions proliferate, some experts argue only government intervention can address the fundamental problem.
- “This is a regulatory issue. The thing that needs to happen is that governments need to step in and give these AI companies that are destroying the digital common good existentially threatening fines,” Iaso said.
- Kumar advocates for industry-led solutions first: “Mandating technical standards in regulatory frameworks often does not produce a good outcome and shouldn’t be our first resort.”
Looking ahead: Fastly expects fetcher traffic to accelerate as AI tools become more widely adopted and agentic systems that mediate between users and websites proliferate, potentially exacerbating current infrastructure strain.