Unlocking the Future of AI Supercomputing with Light

As artificial intelligence (AI) continues to advance, many in the field expect the next big leap to depend on building supercomputers of unprecedented scale. One startup, Lightmatter, has proposed a novel solution to this challenge: connecting GPUs, the chips at the heart of AI training, with light instead of electrical signals.

Case in point: Lightmatter’s technology, called Passage, uses photonic (optical) interconnects built in silicon that interface directly with the transistors on a chip such as a GPU. This could move data between chips far faster than today’s electrical links allow, potentially enabling distributed AI supercomputers with over a million GPUs running in parallel.
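
To make the scale concrete, here is a minimal back-of-envelope sketch of why chip-to-chip bandwidth becomes the limiting factor at this size. It estimates the per-GPU traffic of a ring all-reduce, the collective operation at the heart of data-parallel training; every number in it (cluster size, model size, link speeds) is an illustrative assumption, not a Lightmatter or vendor specification.

```python
# Back-of-envelope: per-GPU gradient traffic for one ring all-reduce step.
# Every figure below is an illustrative assumption, not a vendor spec.

NUM_GPUS = 1_000_000          # hypothetical cluster size from the article's claim
PARAMS = 1.8e12               # assumed model size, in parameters
BYTES_PER_PARAM = 2           # fp16 gradients

gradient_bytes = PARAMS * BYTES_PER_PARAM

# A ring all-reduce moves about 2 * (N - 1) / N of the gradient payload
# through each GPU's links, which is essentially 2x the payload for large N.
traffic_per_gpu = 2 * (NUM_GPUS - 1) / NUM_GPUS * gradient_bytes

for label, gbps in [("assumed electrical link", 400), ("assumed optical link", 3200)]:
    seconds = traffic_per_gpu * 8 / (gbps * 1e9)   # bytes -> bits -> seconds
    print(f"{label} at {gbps} Gb/s: {seconds:,.1f} s of communication per step")
```

The takeaway is only directional: at a fixed model size, the all-reduce payload per GPU barely shrinks as the cluster grows, so faster links translate almost directly into shorter training steps.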

Go deeper: OpenAI CEO Sam Altman, who has reportedly sought up to $7 trillion in funding to develop vast quantities of AI chips, was in attendance at the Sequoia event where Lightmatter pitched its technology. The company claims Passage will let more than a million GPUs work in parallel on a single training run, a significant leap from the roughly 20,000 GPUs rumored to have trained OpenAI’s GPT-4.

Why it matters: Upgrading the hardware behind AI advances like ChatGPT could be crucial to future progress in the field, including the elusive goal of artificial general intelligence (AGI). By reducing the bottleneck of converting between electrical and optical signals, Lightmatter’s approach aims to simplify the engineering challenges of maintaining massive AI training runs across thousands of interconnected systems.
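
As a rough illustration of that conversion bottleneck, the sketch below compares path latency with and without a fixed electrical-to-optical conversion cost at each switch hop; the hop count and nanosecond figures are invented purely for illustration.

```python
# Illustrative only: fixed electrical<->optical conversion overhead
# compounding across switch hops. All numbers are invented assumptions.

HOPS = 5                 # assumed switch hops between two distant GPUs
CONVERSION_NS = 100      # assumed per-hop electrical<->optical conversion cost
TRANSIT_NS = 50          # assumed per-hop serialization/propagation cost

with_conversions = HOPS * (TRANSIT_NS + CONVERSION_NS)
all_optical = HOPS * TRANSIT_NS   # conversions drop out if light stays in silicon

print(f"path with per-hop conversions: {with_conversions} ns")
print(f"all-optical path:              {all_optical} ns")
```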

The big picture: The chip industry is exploring various ways to increase computing power, with Nvidia’s latest “superchip” design growing larger rather than smaller, bucking the decades-long trend of shrinking silicon. This suggests that innovations in key components, like the high-speed interconnects Lightmatter proposes, could become increasingly important for building the next generation of AI supercomputers.

The bottom line: The race is on to develop hardware capable of powering the most ambitious AI models. Lightmatter’s approach of using light to connect GPUs could be a significant step toward unlocking the future of AI supercomputing, with far-reaching implications for the field’s progress.

Source: To Build a Better AI Supercomputer, Let There Be Light | WIRED
