Elon Musk's xAI Powers Up 100K Nvidia GPUs to Train Grok

Elon Musk’s xAI startup has powered up a massive supercomputing facility in Memphis, Tennessee to train its next AI model, Grok, with over 100,000 Nvidia H100 GPUs, sparking both excitement and concerns among locals.

Key details about xAI’s Memphis supercomputing cluster:

  • The facility, dubbed the “Gigafactory of Compute,” is now operational and began training models at 4:20 a.m. local time on Monday, according to Musk.
  • It leverages 100,000 liquid-cooled Nvidia H100 GPUs on a single RDMA fabric, which Musk touts as “the most powerful AI training cluster in the world.”
  • The H100 GPUs are designed specifically for training large AI models and consume immense amounts of energy.

Local reactions and concerns:

  • While the Greater Memphis Chamber economic development group has praised xAI’s decision to open the facility in the area, some locals have expressed worries about its potential environmental impact.
  • The Memphis Community Against Pollution group estimates the cluster may consume over one million gallons of water per day for cooling and draw up to 150 megawatts of electricity, enough to power 100,000 homes.
  • Memphis City Council member Pearl Walker noted that people are “afraid of what’s possibly going to happen with the water and they are afraid about the energy supply.”
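
The 150-megawatt figure above can be sanity-checked with simple arithmetic. The average household load used below is an assumption for illustration, not a number from the article; typical US estimates fall in the 1.2-1.5 kW range of continuous draw:

```python
# Back-of-envelope check of the "150 MW ≈ 100,000 homes" comparison.
# ASSUMPTION (not from the article): an average US household draws
# roughly 1.5 kW on a continuous basis (~13,000 kWh per year).

CLUSTER_DRAW_MW = 150      # estimate cited by Memphis Community Against Pollution
AVG_HOME_DRAW_KW = 1.5     # assumed average continuous household load

homes_powered = (CLUSTER_DRAW_MW * 1_000) / AVG_HOME_DRAW_KW
print(f"{homes_powered:,.0f} homes")  # → 100,000 homes
```

Under that assumption the numbers line up exactly; with a lower household figure (say 1.2 kW), the same 150 MW would cover roughly 125,000 homes, so the comparison is in the right ballpark either way.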

Putting xAI’s cluster in context:

  • While certainly massive, xAI’s Memphis facility may not be the largest computing cluster in the world, as tech giants like Microsoft, Google, and Meta also operate huge data centers to train their AI models.
  • For comparison, Meta CEO Mark Zuckerberg has vowed to acquire 350,000 Nvidia H100 GPUs this year alone.
  • The race to build bigger and more powerful AI supercomputing clusters highlights the immense computing resources required to develop advanced AI systems and the challenges around their energy consumption.

Musk has previously stated that xAI plans to release Grok 2 in August, though it’s unclear if that model will utilize this new supercomputing cluster. However, Grok 3, slated for release by the end of 2024, will train on the 100,000 H100 GPUs in Memphis. As xAI ramps up its computing capabilities, questions remain about how the startup will address the environmental concerns its facilities raise and how it will stack up against tech giants in the race to achieve more powerful AI systems.
