Google DeepMind’s JEST AI training method promises significant speed and efficiency gains over traditional techniques, potentially addressing concerns about AI’s growing power demands.

Key Takeaways:

DeepMind’s JEST (joint example selection) training method breaks from traditional AI training by focusing on entire batches of data instead of individual data points:

  • A smaller AI model first grades the quality of data batches, scoring them against examples drawn from high-quality, curated sources.
  • The small model then selects the batches best suited for training a larger model, yielding up to 13 times faster training with 10 times less computation (a simplified sketch of this selection step follows this list).
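
To make the batch-selection idea concrete, here is a minimal, illustrative Python sketch of learnability-based data selection under our own assumptions. The `learner_loss` and `reference_loss` functions are hypothetical placeholders, and the top-k filter is a simplification: the actual JEST method scores and selects examples jointly within a batch rather than independently.

```python
import numpy as np

rng = np.random.default_rng(0)

def learner_loss(batch):
    # Placeholder: per-example loss under the large model being trained.
    # In practice this would be a forward pass through the learner network.
    return rng.uniform(0.5, 3.0, size=len(batch))

def reference_loss(batch):
    # Placeholder: per-example loss under a small reference model pretrained
    # on a human-curated, high-quality dataset.
    return rng.uniform(0.2, 2.5, size=len(batch))

def select_training_batch(candidate_pool, keep_fraction=0.1):
    """Score a large candidate pool and keep the most 'learnable' examples.

    Learnability follows the general idea behind JEST: examples that are
    still hard for the learner but easy for the high-quality reference
    model are the most useful to train on next.
    """
    scores = learner_loss(candidate_pool) - reference_loss(candidate_pool)
    k = max(1, int(len(candidate_pool) * keep_fraction))
    top_idx = np.argsort(scores)[-k:]  # indices of the highest-scoring examples
    return [candidate_pool[i] for i in top_idx]

# Usage: draw a large candidate "super batch", filter it down, and train the
# large model only on the selected sub-batch (the training step is omitted here).
candidate_pool = [f"example_{i}" for i in range(1000)]
training_batch = select_training_batch(candidate_pool, keep_fraction=0.1)
print(f"Selected {len(training_batch)} of {len(candidate_pool)} candidates")
```

The real method selects the sub-batch jointly because examples interact within a training batch; the independent top-k filter above ignores those interactions for brevity.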

Addressing AI’s Power Demands: The JEST research comes at a crucial time as discussions about AI’s extreme power demands intensify:

  • AI workloads drew an estimated 4.3 GW of power in 2023, roughly comparable to the annual power consumption of Cyprus.
  • A single ChatGPT request uses roughly 10 times as much energy as a Google search.
  • Arm’s CEO estimates AI may consume a quarter of the U.S. power grid by 2030.

Reliance on High-Quality Data: The success of the JEST method heavily depends on the quality of its initial training data:

  • The system relies on a human-curated dataset of the highest possible quality for its bootstrapping technique.
  • This makes the method more challenging for hobbyists or amateur AI developers to replicate, as expert-level research skills are likely required to curate the initial data.

Industry Adoption and Implications: Whether and how major AI players will adopt JEST remains uncertain:

  • Large language models like GPT-4 can cost hundreds of millions of dollars to train, so firms are likely seeking ways to reduce costs.
  • However, the competitive pressure to scale AI output may lead companies to use JEST to maintain maximum power draw for faster training rather than prioritizing energy savings.

Broader Implications:

While the JEST method promises significant efficiency gains, it remains to be seen whether the AI industry will prioritize cost savings and sustainability or use the technique to further accelerate the already rapid pace of AI development. As the costs of training cutting-edge AI models soar into the billions, the choices made by key players like Google could have profound implications for the future trajectory of artificial intelligence and its societal and environmental impacts.

Google claims new AI training tech is 13 times faster and 10 times more power efficient — DeepMind's new JEST optimizes training data for massive gains
