Chinese researchers at Peking University have developed a breakthrough analogue computer system that can solve matrix equations—crucial for AI training—up to 1000 times faster than current digital chips while using 100 times less energy. The dual-chip system combines rapid, low-precision calculations with iterative refinement to achieve accuracy matching traditional digital computers, potentially addressing the massive energy consumption plaguing AI data centers.
How it works: The system uses two specialized analogue chips working in tandem to solve matrix equations with unprecedented speed and accuracy.
- The first chip rapidly produces solutions with approximately 1% error, while a second chip runs iterative refinement algorithms to analyze and correct those errors.
- After three refinement cycles, accuracy improves to 0.0000001%—matching the precision of standard digital calculations.
- Unlike digital solvers, whose runtime grows steeply with matrix dimension (roughly cubically for standard direct methods), the analogue chips maintain a near-constant solving time regardless of matrix size.
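The refinement loop described above can be sketched numerically. This is a minimal illustration, not the researchers' implementation: the `analogue_solve` function below is a stand-in that simulates the fast, low-precision chip by perturbing an exact digital solve with ~1% noise, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def analogue_solve(A, b, noise=0.01):
    """Stand-in for the low-precision analogue chip: returns the
    solution to Ax = b perturbed by roughly 1% relative error."""
    x = np.linalg.solve(A, b)
    return x * (1 + noise * rng.standard_normal(x.shape))

def refine(A, b, cycles=3):
    """Iterative refinement: each cycle solves for the remaining
    error via the residual, then corrects the current estimate."""
    x = analogue_solve(A, b)
    for _ in range(cycles):
        r = b - A @ x                 # residual of current estimate
        x = x + analogue_solve(A, r)  # low-precision correction
    return x

# A well-conditioned 16x16 system, matching the chip size reported.
A = rng.standard_normal((16, 16)) + 16 * np.eye(16)
b = rng.standard_normal(16)
x = refine(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # tiny relative residual
```

Each cycle shrinks the error by roughly the noise factor, which is why a few cheap low-precision passes can recover near-digital precision.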
Current capabilities: The researchers have successfully built chips capable of handling 16 by 16 matrices (256 variables), suitable for smaller computational problems.
- A theoretical 32 by 32 matrix chip would outperform Nvidia’s H100 GPU, one of today’s premium AI training processors.
- However, tackling modern large AI models would require scaling to matrices on the order of a million by a million.
The big picture: Matrix calculations form the backbone of AI model training, making this development potentially transformative for the industry’s sustainability challenges.
- “The modern world is built on digital computers… but not everything can necessarily be computed efficiently or fast,” notes James Millen, a researcher at King’s College London.
- Analogue computers excel at specific tasks by design, trading the universal flexibility of digital systems for dramatic performance gains in targeted applications.
Why this matters: The AI boom has created an energy crisis in data centers, with training large models consuming enormous computational resources.
- Current digital systems face rapidly growing compute and energy costs as AI models become larger and more complex.
- This breakthrough could provide a pathway to sustainable AI development by dramatically reducing both time and energy requirements for training.
Reality check: The researchers acknowledge significant limitations that temper immediate expectations.
- “Our chip can only do matrix computations,” explains lead researcher Zhong Sun, noting that real-world applications may require capabilities beyond the circuits’ narrow specialization.
- The most realistic near-term outcome involves hybrid systems where GPUs incorporate specialized analogue circuits for specific computational tasks.
What they’re saying: Sun emphasizes the technology’s targeted nature: “If matrix computation occupies most of the computing task, it represents a very significant acceleration for the problem, but if not, it will be a limited speed-up.”
Looking ahead: Commercial implementation remains years away, with hybrid chip architectures representing the most probable path forward rather than wholesale replacement of existing digital systems.