How DeepMind is pushing the frontiers of AI audio generation

Advancing speech generation technology: Google DeepMind researchers have made significant strides in developing more natural and dynamic audio generation models, paving the way for enhanced digital experiences and AI-powered tools.

  • The team has created models capable of generating high-quality, natural speech from a range of inputs, including text, pace controls, and specific voices (a hypothetical request structure is sketched after this list).
  • This technology is already being implemented in several Google products and experiments, such as Gemini Live, Project Astra, Journey Voices, and YouTube’s auto dubbing feature.
  • Recent advancements have enabled the generation of long-form, multi-speaker dialogue, making complex content more accessible.
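
To make the text-plus-controls-to-speech relationship concrete, here is a minimal, hypothetical sketch of how such a request might be structured. The class and function names are illustrative assumptions only and do not correspond to any published Google API.

```python
from dataclasses import dataclass


@dataclass
class SpeechRequest:
    """Hypothetical container for the inputs described above."""
    text: str           # the script to be spoken
    voice: str          # identifier for a specific speaker voice
    pace: float = 1.0   # relative speaking rate (1.0 = default)


def generate_speech(request: SpeechRequest) -> bytes:
    """Placeholder for a model call; a real system would return audio samples."""
    # An actual implementation would tokenize the text, condition on the
    # requested voice and pace, and decode acoustic tokens to a waveform.
    raise NotImplementedError("illustrative stub only")


# Example request: a single line of text, a named voice, slightly faster pace.
example = SpeechRequest(text="Welcome to the show.", voice="narrator_a", pace=1.1)
```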

Key innovations in audio generation:

  • SoundStream: A neural audio codec that efficiently compresses and decompresses audio inputs without compromising quality.
  • AudioLM: Treats audio generation as a language modeling task, allowing flexible handling of many kinds of sound without architectural adjustments (a toy sketch of this codec-plus-language-model pipeline follows this list).
  • SoundStorm: Demonstrated the ability to generate 30-second segments of natural dialogue between multiple speakers.
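
The pipeline these projects describe, compressing audio to discrete tokens, modeling the token sequence the way a language model handles text, and then decoding tokens back to a waveform, can be sketched with stub functions. This is a conceptual illustration under stated assumptions, not the SoundStream or AudioLM implementation; the toy "model" and all names here are placeholders.

```python
import random

CODEBOOK_SIZE = 1024  # assumed size of the codec's token vocabulary


def encode_to_tokens(waveform: list[float]) -> list[int]:
    """Stand-in for a neural codec (the role SoundStream plays):
    map raw audio samples to a sequence of discrete tokens."""
    return [hash(round(sample, 3)) % CODEBOOK_SIZE for sample in waveform]


def next_token(context: list[int]) -> int:
    """Stand-in for the language model (the role AudioLM plays):
    predict the next acoustic token given the tokens so far."""
    return random.randrange(CODEBOOK_SIZE)  # toy choice; a real model is learned


def decode_to_waveform(tokens: list[int]) -> list[float]:
    """Stand-in for the codec decoder: turn tokens back into audio samples."""
    return [token / CODEBOOK_SIZE for token in tokens]


# Autoregressive generation: start from a prompt's tokens, extend one token
# at a time, then decode the full sequence back to audio.
prompt_tokens = encode_to_tokens([0.01, -0.02, 0.03])
generated = list(prompt_tokens)
for _ in range(20):
    generated.append(next_token(generated))
audio = decode_to_waveform(generated)
```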

Scaling multi-speaker models: The latest speech generation technology can produce two minutes of dialogue with improved naturalness, speaker consistency, and acoustic quality.

  • A more efficient speech codec compresses audio into a sequence of tokens at rates as low as 600 bits per second.
  • A specialized Transformer architecture efficiently handles hierarchies of information, matching the structure of acoustic tokens.
  • The model generates over 5,000 tokens to produce a two-minute dialogue in a single autoregressive inference pass (the arithmetic relating these figures is sketched below).
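
Taking the figures above at face value, a quick back-of-the-envelope calculation shows how the 600 bits-per-second codec rate, the two-minute dialogue length, and the roughly 5,000 generated tokens relate. The per-token numbers below are derived only from those stated values, not from published model details.

```python
# Figures stated in the article.
bitrate_bps = 600          # codec rate: bits per second
duration_s = 2 * 60        # two minutes of dialogue
num_tokens = 5000          # roughly the tokens generated in one pass

total_bits = bitrate_bps * duration_s          # 72,000 bits for the whole clip
bits_per_token = total_bits / num_tokens       # ~14.4 bits of information per token
tokens_per_second = num_tokens / duration_s    # ~41.7 tokens per second of audio

print(f"total bits:        {total_bits}")
print(f"bits per token:    {bits_per_token:.1f}")
print(f"tokens per second: {tokens_per_second:.1f}")
```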

Training and fine-tuning process:

  • The model was pretrained on hundreds of thousands of hours of speech data.
  • Fine-tuning was performed on a smaller dataset of high-quality dialogue with precise speaker annotations and realistic disfluencies (an illustrative annotation format is sketched after this list).
  • This approach enabled the model to reliably switch between speakers and output studio-quality audio with natural pauses, tone, and timing.
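
As a rough illustration of what "dialogue with precise speaker annotations and realistic disfluencies" might look like once flattened into training text, the snippet below joins annotated turns into a single tagged transcript. The tag format and example lines are assumptions for illustration; the actual dataset format has not been published.

```python
# Each turn: (speaker label, transcript text including disfluencies and pauses).
annotated_dialogue = [
    ("Speaker 1", "So, umm, have you seen the latest results?"),
    ("Speaker 2", "I have... well, [pause] most of them, anyway."),
    ("Speaker 1", "Right, right. The dialogue samples sound surprisingly natural."),
]


def to_training_text(turns: list[tuple[str, str]]) -> str:
    """Flatten annotated turns into one tagged transcript string,
    so a model can learn where one speaker stops and the next begins."""
    return "\n".join(f"<{speaker}> {text}" for speaker, text in turns)


print(to_training_text(annotated_dialogue))
```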

Responsible AI implementation: In line with Google’s AI Principles, the team is incorporating SynthID technology to watermark non-transient AI-generated audio content, helping to safeguard against potential misuse.

Future directions and applications:

  • Researchers are focused on improving the model’s fluency and acoustic quality, and on adding finer-grained controls for features like prosody.
  • The team is exploring how to combine these advances with other modalities, such as video.
  • Potential applications include enhancing learning experiences and making content more universally accessible.

Broader implications: As speech generation technology continues to evolve, it has the potential to revolutionize how people interact with digital assistants and AI tools, making them more natural and intuitive. This advancement could lead to more engaging and accessible digital experiences across various sectors, from education to entertainment. However, the development of such powerful audio generation capabilities also raises important questions about authenticity and the potential for misuse, highlighting the need for continued focus on responsible AI development and deployment.

Source: Pushing the frontiers of audio generation (Google DeepMind blog)
