The future of AI supercomputing

Unlocking the Future of AI Supercomputing with Light

As artificial intelligence (AI) continues to advance, many in the field expect the next big leap to depend on building supercomputers of unprecedented scale. One startup, Lightmatter, has proposed a novel way to meet that challenge: connecting GPUs, the chips at the heart of AI training, with light instead of electrical signals.

Case in point: Lightmatter’s technology, called Passage, uses optical (photonic) interconnects built in silicon, letting its hardware interface directly with the transistors on a silicon chip such as a GPU. That could move data between chips far faster than today’s electrical links allow, potentially enabling distributed AI supercomputers with more than a million GPUs running in parallel.
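To see why per-link speed is the limiting factor at that scale, consider a rough back-of-envelope sketch of one gradient exchange in distributed training. The numbers below are illustrative assumptions chosen only to make the arithmetic concrete; they are not figures from Lightmatter, Nvidia, or OpenAI.

# Back-of-envelope sketch (illustrative numbers only): how long one ring
# all-reduce of a model's gradients takes at different per-link speeds.
def allreduce_seconds(params_billions, bytes_per_param, num_gpus, link_gbps):
    # Ring all-reduce: each GPU moves about 2*(N-1)/N of the gradient volume
    # over its own link, almost independent of how many GPUs participate.
    gradient_bytes = params_billions * 1e9 * bytes_per_param
    traffic_per_gpu = 2 * (num_gpus - 1) / num_gpus * gradient_bytes
    link_bytes_per_s = link_gbps * 1e9 / 8
    return traffic_per_gpu / link_bytes_per_s

# Hypothetical 1-trillion-parameter model, 2-byte gradients, one million GPUs.
for gbps in (400, 800, 6400):  # slower electrical vs. faster optical links (assumed values)
    t = allreduce_seconds(1000, 2, 1_000_000, gbps)
    print(f"{gbps:>5} Gb/s per link -> ~{t:.0f} s per full gradient exchange")

Because the traffic each GPU must move in a ring all-reduce barely changes as more GPUs join, the bandwidth of each individual link, which is exactly what optical interconnects like Passage aim to raise, sets the floor on how often a million GPUs can synchronize.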

Go deeper: OpenAI CEO Sam Altman, who has reportedly sought up to $7 trillion in funding to develop vast quantities of chips for AI, was in attendance at the Sequoia event where Lightmatter pitched its technology. The company claims Passage should allow more than a million GPUs to run in parallel on the same AI training run, roughly a 50-fold jump from the 20,000 GPUs rumored to have powered OpenAI’s GPT-4.

Why it matters: Upgrading the hardware behind AI advances like ChatGPT could be crucial to future progress in the field, including the elusive goal of artificial general intelligence (AGI). By reducing the need to convert between electrical and optical signals, Lightmatter’s approach aims to simplify the engineering challenge of sustaining massive AI training runs across thousands of interconnected systems.

The big picture: The chip industry is exploring various ways to increase computing power, with Nvidia’s latest “superchip” design bucking the trend of shrinking chip size. This suggests that innovations in key components like the high-speed interconnects proposed by Lightmatter could become increasingly important for building the next generation of AI supercomputers.

The bottom line: As AI continues to advance, the race is on to develop the hardware capable of powering the most ambitious algorithms. Lightmatter’s approach to using light to connect GPUs could be a significant step towards unlocking the future of AI supercomputing, with potentially far-reaching implications for the field’s progress.

Source: To Build a Better AI Supercomputer, Let There Be Light | WIRED
