The future of AI supercomputing

Unlocking the Future of AI Supercomputing with Light

As artificial intelligence (AI) continues to advance, experts agree that the next big leap in the field will depend on building supercomputers of unprecedented scale. One startup, Lightmatter, has proposed a novel solution to this challenge: connecting GPUs, the chips crucial for AI training, with light instead of electrical signals.

Case in point: Lightmatter’s technology, called Passage, uses optical (photonic) interconnects built in silicon, letting its hardware interface directly with the transistors on a silicon chip such as a GPU. This could allow data to move between chips at much higher speeds than is possible today, potentially enabling distributed AI supercomputers with over a million GPUs running in parallel.
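To see why link speed matters at that scale, consider the bandwidth cost of synchronizing gradients across data-parallel GPUs. The Python sketch below uses the standard ring all-reduce communication-volume formula; the model size, gradient precision, and link speeds are illustrative assumptions, not Lightmatter or OpenAI figures.

```python
# Back-of-envelope: how per-link bandwidth bounds gradient synchronization
# in data-parallel training. All numbers are illustrative assumptions,
# not Lightmatter specifications.

def ring_allreduce_seconds(num_gpus: int, grad_bytes: float, link_gbps: float) -> float:
    """Bandwidth-only time for one ring all-reduce of the gradients.

    Each GPU sends and receives 2 * (N - 1) / N * grad_bytes over its link;
    latency terms, which grow with N, are ignored here.
    """
    volume = 2 * (num_gpus - 1) / num_gpus * grad_bytes
    return volume / (link_gbps * 1e9 / 8)  # convert Gb/s to bytes/s

GRAD_BYTES = 2 * 175e9  # assumed: a 175B-parameter model with fp16 gradients

for gbps in (400, 3_200):
    t = ring_allreduce_seconds(1_000_000, GRAD_BYTES, gbps)
    print(f"1,000,000 GPUs @ {gbps:>5} Gb/s per link: ~{t:.1f} s per sync")
```

Because the per-GPU communication volume is nearly constant in the number of GPUs, the synchronization time scales inversely with link bandwidth: an 8x faster optical link cuts the communication wall by roughly 8x, which is the lever Passage is pulling on.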

Go deeper: OpenAI CEO Sam Altman, who has sought up to $7 trillion in funding to develop vast quantities of chips for AI, attended the Sequoia event where Lightmatter pitched its technology. The company claims Passage should allow more than a million GPUs to run in parallel on the same AI training run, a significant leap from the roughly 20,000 GPUs rumored to have been used to train OpenAI’s GPT-4.

Why it matters: Upgrading the hardware behind AI advances like ChatGPT could be crucial to future progress in the field, including the elusive goal of artificial general intelligence (AGI). By reducing the bottleneck of converting between electrical and optical signals, Lightmatter’s approach aims to simplify the engineering challenges of maintaining massive AI training runs across thousands of interconnected systems.
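One way to make that bottleneck concrete is the energy spent per bit on the interconnect itself. The sketch below compares assumed energy-per-bit figures for pluggable optics versus co-packaged photonics; these are rough, hypothetical values for illustration, not measured or vendor data.

```python
# Rough power math for chip-to-chip interconnects at supercomputer scale.
# The pJ/bit figures below are assumptions for illustration only.

def interconnect_watts(num_gpus: int, gbps_per_gpu: float, pj_per_bit: float) -> float:
    """Total power spent just moving bits between GPUs."""
    bits_per_second = num_gpus * gbps_per_gpu * 1e9
    return bits_per_second * pj_per_bit * 1e-12  # pJ/bit -> joules/bit

for label, pj in [("pluggable optics, assumed ~15 pJ/bit", 15.0),
                  ("co-packaged photonics, assumed ~3 pJ/bit", 3.0)]:
    megawatts = interconnect_watts(1_000_000, 400, pj) / 1e6
    print(f"{label}: ~{megawatts:.0f} MW just for data movement")
```

At a million GPUs, shaving even a few picojoules per bit off the electro-optical conversion translates into megawatts of saved power, which is part of why silicon photonics is attractive at this scale.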

The big picture: The chip industry is exploring various ways to increase computing power, with Nvidia’s latest “superchip” design bucking the decades-long trend toward smaller chips by growing larger instead. This suggests that innovations in key components, such as the high-speed interconnects Lightmatter proposes, could become increasingly important for building the next generation of AI supercomputers.

The bottom line: As AI continues to advance, the race is on to develop the hardware capable of powering the most ambitious algorithms. Lightmatter’s approach to using light to connect GPUs could be a significant step towards unlocking the future of AI supercomputing, with potentially far-reaching implications for the field’s progress.

To Build a Better AI Supercomputer, Let There Be Light | WIRED
