AI Breakthrough: Language Models without Matrix Multiplication, Slashing Power Consumption

Researchers claim a breakthrough in AI efficiency by eliminating matrix multiplication, a fundamental operation in today's neural networks, a change that could significantly reduce the power consumption and cost of running large language models.

Key Takeaways:

  • Researchers from UC Santa Cruz, UC Davis, LuxiTech, and Soochow University have developed a method to run AI language models without matrix multiplication (MatMul), the operation that power-hungry GPUs are built to accelerate (a sketch of the core idea follows this list).
  • Their custom 2.7-billion-parameter model performed comparably to conventional large language models while consuming far less power when run on an FPGA chip.
  • This development challenges the prevailing paradigm that matrix multiplication is indispensable for building high-performing language models and could make them more accessible, efficient, and sustainable.
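
To make that concrete, here is a minimal sketch, in NumPy rather than the researchers' actual code, of why removing multiplication is possible at all: when every weight is restricted to -1, 0, or +1, a dense layer's multiply-accumulate collapses into pure addition and subtraction. The function name and toy shapes below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def ternary_linear(x, w_ternary):
    """Linear layer whose weights are all -1, 0, or +1.

    The usual multiply-accumulate of a dense layer collapses into
    accumulate-only arithmetic: add the inputs where the weight is +1,
    subtract them where it is -1, and skip them where it is 0.
    """
    out = np.zeros((x.shape[0], w_ternary.shape[1]))
    for j in range(w_ternary.shape[1]):              # one output unit at a time
        plus = x[:, w_ternary[:, j] == 1].sum(axis=1)    # additions only
        minus = x[:, w_ternary[:, j] == -1].sum(axis=1)  # subtractions only
        out[:, j] = plus - minus
    return out

# Toy usage: the result matches an ordinary matrix multiplication,
# but no multiplications were performed along the way.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4))                  # batch of 2, 4 features
w = rng.integers(-1, 2, size=(4, 3))             # ternary 4-in, 3-out layer
print(np.allclose(ternary_linear(x, w), x @ w))  # -> True
```

Adders are far cheaper than multipliers in silicon, which is why hardware like the FPGA mentioned above can exploit this structure for large power savings.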

Implications for the AI industry: The findings could have significant ramifications for the environmental impact and operational costs of AI systems:

  • GPUs, particularly those from Nvidia, currently dominate the AI hardware market due to their ability to quickly perform matrix multiplication in parallel, but this new approach may disrupt that status quo.
  • By reducing the power consumption of running large language models, this technique could help mitigate concerns about the growing energy footprint of the AI industry as it scales up.
  • Making large language models more efficient could also enable their deployment on resource-constrained devices like smartphones, expanding their potential applications.

Building upon previous work: The researchers cite BitNet, a “1-bit” transformer technique, as an important precursor to their work:

  • BitNet demonstrated the viability of binary and ternary weights in language models, successfully scaling to 3 billion parameters while maintaining competitive performance (a sketch of ternary quantization follows this list).
  • However, BitNet still relied on matrix multiplications in its self-attention mechanism, which motivated the researchers to develop a completely “MatMul-free” architecture.
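
For context on the quantization step itself, the sketch below loosely follows the "absmean" recipe described in the follow-up BitNet b1.58 work: each full-precision weight matrix is mapped to values in {-1, 0, +1} plus a single scaling factor. The helper name and toy example are our own illustration, not code from either paper.

```python
import numpy as np

def absmean_ternarize(w, eps=1e-8):
    """Quantize full-precision weights to {-1, 0, +1} plus one scale.

    Loosely follows the "absmean" scheme from BitNet b1.58: divide by
    the mean absolute weight, round to the nearest integer, and clip
    to the ternary range. The scale is kept so outputs of the
    multiplication-free layer can be rescaled afterwards.
    """
    scale = np.abs(w).mean() + eps               # per-tensor scaling factor
    w_q = np.clip(np.round(w / scale), -1, 1)    # values in {-1, 0, +1}
    return w_q.astype(np.int8), scale

# Toy usage: ternarize a random 4x3 weight matrix.
w_fp = np.random.default_rng(1).standard_normal((4, 3))
w_q, s = absmean_ternarize(w_fp)
print(w_q)   # every entry is -1, 0, or +1
print(s)     # one scale shared by the whole tensor
```

Once the weights live in this ternary set, the addition-only layer sketched earlier applies directly: inference needs just accumulations and one rescale per tensor.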

Broader Implications:

While the paper has not yet been peer-reviewed, if the claims hold up, this development could mark a significant shift in how AI systems are designed and operated. By fundamentally redesigning the core computational operations of neural networks, the researchers are challenging long-held assumptions about the necessity of matrix multiplication for high-performance AI.

This work opens up new possibilities for more efficient, sustainable, and accessible AI systems. However, key questions remain about the scalability and generalizability of this approach across different types of AI models and real-world applications. Further research and validation will be needed to fully understand the potential impact of this new paradigm.

