The future of AI supercomputing

Unlocking the Future of AI Supercomputing with Light

As artificial intelligence (AI) continues to advance, the next big leap in the field is widely expected to depend on building supercomputers of unprecedented scale. One startup, Lightmatter, has proposed a novel approach to this challenge: connecting GPUs, the chips at the heart of AI training, with light instead of electrical signals.

Case in point: Lightmatter’s technology, called Passage, uses optical (photonic) interconnects built in silicon that interface directly with the transistors on a chip such as a GPU. This could move data between chips at much higher speeds than today’s electrical connections allow, potentially enabling distributed AI supercomputers with over a million GPUs running in parallel.
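
For a rough sense of why link speed matters here, the minimal sketch below compares how long it would take to move a fixed volume of gradient data between two chips at different interconnect bandwidths. All figures are illustrative assumptions for the sake of the example, not Lightmatter or GPU-vendor specifications.

```python
# Back-of-envelope: time to move one copy of a model's gradients between
# chips at different link speeds. All figures below are illustrative
# assumptions, not published Lightmatter or GPU-vendor specifications.

GRADIENT_BYTES = 350e9 * 2  # assume a ~350B-parameter model stored in 16-bit

# Hypothetical per-link bandwidths in bytes per second.
links = {
    "conventional electrical link (~100 GB/s, assumed)": 100e9,
    "faster photonic link (~800 GB/s, assumed)": 800e9,
}

for name, bandwidth in links.items():
    seconds = GRADIENT_BYTES / bandwidth
    print(f"{name}: {seconds:.2f} s to move {GRADIENT_BYTES / 1e9:.0f} GB")
```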

Go deeper: OpenAI CEO Sam Altman, who has reportedly sought up to $7 trillion in funding to build vast quantities of AI chips, attended the Sequoia event where Lightmatter pitched its technology. The company claims Passage should allow more than a million GPUs to run in parallel on a single AI training run, a significant leap from the roughly 20,000 GPUs rumored to have been used to train OpenAI’s GPT-4.

Why it matters: Upgrading the hardware behind AI advances like ChatGPT could be crucial to future progress in the field, including the elusive goal of artificial general intelligence (AGI). By cutting down on conversions between electrical and optical signals, a key bandwidth bottleneck, Lightmatter’s approach aims to simplify the engineering challenge of coordinating massive AI training runs across thousands of interconnected systems.
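
To illustrate why the interconnect, rather than the GPUs themselves, can become the limiting factor at this scale, here is a toy model of a synchronous data-parallel training step: each step splits into compute time plus gradient-exchange time, and slower links shrink the fraction spent on useful compute. The ring all-reduce cost formula is a standard approximation, and every number is an assumption rather than a figure from the article.

```python
# Toy model of one synchronous data-parallel training step:
#   step_time = compute_time + communication_time
# A ring all-reduce sends roughly 2 * (N - 1) / N times the gradient volume
# over each GPU's link, so the communication term is governed mainly by
# per-link bandwidth. All numbers are illustrative assumptions.

def compute_fraction(num_gpus, grad_bytes, link_bw, compute_time):
    """Fraction of a step spent on useful compute rather than gradient exchange."""
    comm_bytes = 2 * (num_gpus - 1) / num_gpus * grad_bytes
    comm_time = comm_bytes / link_bw
    return compute_time / (compute_time + comm_time)

GRAD_BYTES = 700e9    # assumed gradient volume exchanged per step
COMPUTE_TIME = 1.0    # assumed seconds of pure compute per step per GPU

for link_bw in (100e9, 800e9):  # assumed per-link bandwidths in bytes/s
    frac = compute_fraction(1_000_000, GRAD_BYTES, link_bw, COMPUTE_TIME)
    print(f"link at {link_bw / 1e9:.0f} GB/s -> {frac:.0%} of each step is compute")
```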

The big picture: The chip industry is exploring various ways to increase computing power, with Nvidia’s latest “superchip” design bucking the trend of shrinking chip size. This suggests that innovations in key components like the high-speed interconnects proposed by Lightmatter could become increasingly important for building the next generation of AI supercomputers.

The bottom line: As AI continues to advance, the race is on to develop the hardware capable of powering the most ambitious algorithms. Lightmatter’s approach to using light to connect GPUs could be a significant step towards unlocking the future of AI supercomputing, with potentially far-reaching implications for the field’s progress.

To Build a Better AI Supercomputer, Let There Be Light | WIRED
