xAI Powers Up Massive Memphis Supercomputer, Sparking Local Excitement and Environmental Concerns

Elon Musk’s xAI startup has powered up a massive supercomputing facility in Memphis, Tennessee, to train its next AI model, Grok, with over 100,000 Nvidia H100 GPUs, sparking both excitement and concern among locals.

Key details about xAI’s Memphis supercomputing cluster:

  • The facility, dubbed the “Gigafactory of Compute,” is now operational and began training models at 4:20am local time on Monday, according to Musk.
  • It leverages 100,000 liquid-cooled Nvidia H100 GPUs on a single RDMA fabric, which Musk touts as “the most powerful AI training cluster in the world.”
  • The H100 GPUs are designed specifically for training large AI models and require immense amounts of energy and computing power.
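The power demands of a cluster this size can be sketched with back-of-envelope arithmetic. The ~700 W figure below is the published TDP of an H100 SXM module; the PUE (power usage effectiveness) factor for cooling and facility overhead is an assumption, not a figure from xAI:

```python
# Rough estimate of the Memphis cluster's power draw.
# Assumes the published ~700 W TDP of an H100 SXM module and a
# hypothetical PUE of 1.3 for liquid-cooled infrastructure.
NUM_GPUS = 100_000
H100_TDP_WATTS = 700   # published TDP of an H100 SXM module
PUE = 1.3              # assumed power usage effectiveness (cooling, overhead)

gpu_power_mw = NUM_GPUS * H100_TDP_WATTS / 1e6
facility_power_mw = gpu_power_mw * PUE

print(f"GPU power alone: {gpu_power_mw:.0f} MW")             # 70 MW
print(f"With cooling/overhead: {facility_power_mw:.0f} MW")  # 91 MW
```

Even this conservative sketch lands in the same range as the local estimates discussed below, before accounting for CPUs, networking, and storage.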

Local reactions and concerns:

  • While the Greater Memphis Chamber economic development group has praised xAI’s decision to open the facility in the area, some locals have expressed worries about its potential environmental impact.
  • The Memphis Community Against Pollution group estimates the cluster may consume over one million gallons of water per day for cooling and draw up to 150 megawatts of electricity, enough to power 100,000 homes.
  • Memphis City Council member Pearl Walker noted that people are “afraid of what’s possibly going to happen with the water and they are afraid about the energy supply.”
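The 150 MW figure and the 100,000-home comparison can be cross-checked with simple arithmetic. The implied ~1.5 kW average household draw below is derived from the article's own numbers, and the comparison to typical US residential usage is an assumption for context:

```python
# Sanity check of the figures above: does 150 MW plausibly
# correspond to the electricity use of 100,000 homes?
CLUSTER_MW = 150
HOMES = 100_000

kw_per_home = CLUSTER_MW * 1000 / HOMES          # implied average draw
annual_kwh_per_home = kw_per_home * 24 * 365     # implied annual usage

print(f"Implied average draw per home: {kw_per_home:.1f} kW")
print(f"Implied annual usage per home: {annual_kwh_per_home:,.0f} kWh")
```

The implied ~13,000 kWh per home per year is somewhat above typical US residential consumption (roughly 10,000–11,000 kWh per year), so the 100,000-home comparison is in the right ballpark, if slightly generous.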

Putting xAI’s cluster in context:

  • While certainly massive, xAI’s Memphis facility may not necessarily be the largest computing cluster in the world, as tech giants like Microsoft, Google, and Meta also leverage huge data centers to train their AI models.
  • Meta CEO Mark Zuckerberg has vowed to acquire 350,000 Nvidia H100 GPUs this year alone, for comparison.
  • The race to build bigger and more powerful AI supercomputing clusters highlights the immense computing resources required to develop advanced AI systems and the challenges around their energy consumption.

Musk has previously stated that xAI plans to release Grok 2 in August, though it’s unclear if that model will utilize this new supercomputing cluster. However, Grok 3, slated for release by the end of 2024, will train on the 100,000 H100 GPUs in Memphis. As xAI ramps up its computing capabilities, questions remain about how the startup will address the environmental concerns its facilities raise and how it will stack up against tech giants in the race to achieve more powerful AI systems.
