
Researchers claim a breakthrough in AI efficiency: by eliminating matrix multiplication, a fundamental operation in today's neural networks, their approach could significantly reduce the power consumption and cost of running large language models.

Key Takeaways:

  • Researchers from UC Santa Cruz, UC Davis, LuxiTech, and Soochow University have developed a method to run AI language models without using matrix multiplication (MatMul), which is currently accelerated by power-hungry GPU chips.
  • Their custom 2.7 billion parameter model achieved performance comparable to that of conventional large language models while consuming far less power when run on an FPGA chip.
  • This development challenges the prevailing paradigm that matrix multiplication is indispensable for building high-performing language models and could make them more accessible, efficient, and sustainable.

Implications for the AI industry: The findings could reshape both the environmental impact and the operational costs of AI systems:

  • GPUs, particularly those from Nvidia, currently dominate the AI hardware market due to their ability to quickly perform matrix multiplication in parallel, but this new approach may disrupt that status quo.
  • By reducing the power consumption of running large language models, this technique could help mitigate concerns about the growing energy footprint of the AI industry as it scales up.
  • Making large language models more efficient could also enable their deployment on resource-constrained devices like smartphones, expanding their potential applications.

Building upon previous work: The researchers cite BitNet, a “1-bit” transformer technique, as an important precursor to their work:

  • BitNet demonstrated the viability of using binary and ternary weights in language models, successfully scaling up to 3 billion parameters while maintaining competitive performance.
  • However, BitNet still relied on matrix multiplications in its self-attention mechanism, which motivated the researchers to develop a completely “MatMul-free” architecture (a minimal sketch of the ternary-weight idea follows this list).
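
To see why ternary weights sidestep multiplication, consider a dense layer whose weights are restricted to -1, 0, and +1: every product in the matrix multiplication collapses to an addition, a subtraction, or a skip. The sketch below is not the authors' code; the function name `ternary_dense` and the toy shapes are illustrative assumptions, and NumPy is used only to check the equivalence.

```python
import numpy as np

def ternary_dense(x, w_ternary):
    """Compute x @ w_ternary using only additions and subtractions.

    x         : (batch, d_in) activations
    w_ternary : (d_in, d_out) weights, every entry in {-1, 0, +1}
    """
    batch, _ = x.shape
    d_out = w_ternary.shape[1]
    out = np.zeros((batch, d_out), dtype=x.dtype)
    for j in range(d_out):
        plus = x[:, w_ternary[:, j] == 1].sum(axis=1)    # add inputs where weight = +1
        minus = x[:, w_ternary[:, j] == -1].sum(axis=1)  # subtract inputs where weight = -1
        out[:, j] = plus - minus                         # weight = 0 contributes nothing
    return out

# Sanity check: the signed-sum version matches an ordinary matrix multiplication.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8)).astype(np.float32)
w = rng.integers(-1, 2, size=(8, 4)).astype(np.float32)
assert np.allclose(ternary_dense(x, w), x @ w, atol=1e-5)
```

On dedicated hardware such as the FPGA the researchers used, this substitution is where the power savings would come from: signed accumulation is far cheaper in silicon than floating-point multiply-accumulate.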

Broader Implications:

While the paper has not yet been peer-reviewed, if the claims hold up, this development could mark a significant shift in how AI systems are designed and operated. By fundamentally redesigning the core computational operations of neural networks, the researchers are challenging long-held assumptions about the necessity of matrix multiplication for high-performance AI.

This work opens up new possibilities for more efficient, sustainable, and accessible AI systems. However, key questions remain about the scalability and generalizability of this approach across different types of AI models and real-world applications. Further research and validation will be needed to fully understand the potential impact of this new paradigm.

Source: Researchers upend AI status quo by eliminating matrix multiplication in LLMs
