AI Breakthrough: Language Models without Matrix Multiplication, Slashing Power Consumption

Researchers claim a breakthrough in AI efficiency: by eliminating matrix multiplication, a fundamental operation in current neural networks, their approach could significantly reduce the power consumption and cost of running large language models.

Key Takeaways:

  • Researchers from UC Santa Cruz, UC Davis, LuxiTech, and Soochow University have developed a method to run AI language models without using matrix multiplication (MatMul), an operation currently accelerated by power-hungry GPU chips (see the sketch after this list).
  • Their custom 2.7-billion-parameter model achieved performance similar to that of conventional large language models while consuming far less power when run on an FPGA chip.
  • This development challenges the prevailing paradigm that matrix multiplication is indispensable for building high-performing language models and could make them more accessible, efficient, and sustainable.
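
The core trick, as described in coverage of the paper, is that restricting weights to the ternary values {-1, 0, +1} turns every multiply-accumulate into plain additions and subtractions. The sketch below is a minimal NumPy illustration of that idea only, not the authors' implementation; the function name and shapes are hypothetical.

```python
# Minimal sketch (not the authors' code): with ternary weights in {-1, 0, +1},
# each output element of W @ x is just a sum of signed additions, so no
# general-purpose multiplication hardware is needed.
import numpy as np

def ternary_matvec(W_ternary: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Compute the equivalent of W @ x using only additions and subtractions.

    W_ternary: hypothetical weight matrix with entries in {-1, 0, +1}.
    """
    out = np.zeros(W_ternary.shape[0], dtype=x.dtype)
    for i in range(W_ternary.shape[0]):
        row = W_ternary[i]
        # Add inputs where the weight is +1, subtract where it is -1.
        out[i] = x[row == 1].sum() - x[row == -1].sum()
    return out

# Usage: the result matches an ordinary matmul, but the arithmetic is all adds.
W = np.array([[1, 0, -1], [-1, 1, 0]])
x = np.array([0.5, -2.0, 3.0])
assert np.allclose(ternary_matvec(W, x), W @ x)
```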

Implications for the AI industry: The findings could reshape both the environmental impact and the operating costs of AI systems:

  • GPUs, particularly those from Nvidia, currently dominate the AI hardware market due to their ability to quickly perform matrix multiplication in parallel, but this new approach may disrupt that status quo.
  • By reducing the power consumption of running large language models, this technique could help mitigate concerns about the growing energy footprint of the AI industry as it scales up.
  • Making large language models more efficient could also enable their deployment on resource-constrained devices like smartphones, expanding their potential applications.

Building upon previous work: The researchers cite BitNet, a “1-bit” transformer technique, as an important precursor to their work:

  • BitNet demonstrated the viability of using binary and ternary weights in language models, successfully scaling up to 3 billion parameters while maintaining competitive performance (the quantization idea is sketched below).
  • However, BitNet still relied on matrix multiplications in its self-attention mechanism, which motivated the researchers to develop a completely “MatMul-free” architecture.
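
For context, BitNet's 1.58-bit follow-up work quantizes full-precision weights to ternary values with an "absmean" rule: scale the matrix by its mean absolute value, then round each entry to the nearest of {-1, 0, +1}. Below is a minimal sketch of that quantizer under those assumptions; the helper name and epsilon are hypothetical and not taken from either paper.

```python
# A minimal sketch of absmean-style ternary quantization as described in the
# BitNet 1.58-bit follow-up work. Everything beyond the scale-then-round rule
# is an assumption for illustration, not the authors' exact code.
import numpy as np

def ternarize(W: np.ndarray, eps: float = 1e-8) -> tuple[np.ndarray, float]:
    """Quantize full-precision weights to {-1, 0, +1} plus a scalar scale."""
    scale = np.abs(W).mean() + eps                 # absmean scaling factor
    W_t = np.clip(np.round(W / scale), -1, 1)      # round to nearest ternary
    return W_t.astype(np.int8), scale

# Usage: the ternary matrix stands in for W; multiplying results by `scale`
# approximates the original layer's output while the matmul itself reduces
# to additions and subtractions.
W = np.random.randn(4, 4)
W_t, scale = ternarize(W)
approx = scale * (W_t @ np.ones(4))
```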

Broader Implications:

The paper has not yet been peer-reviewed, but if its claims hold up, this development could mark a significant shift in how AI systems are designed and operated. By fundamentally redesigning the core computational operations of neural networks, the researchers are challenging long-held assumptions about the necessity of matrix multiplication for high-performance AI.

This work opens up new possibilities for more efficient, sustainable, and accessible AI systems. However, key questions remain about the scalability and generalizability of this approach across different types of AI models and real-world applications. Further research and validation will be needed to fully understand the potential impact of this new paradigm.

