Microsoft’s VALL-E 2 Achieves Human-Level Speech Synthesis, Sparking Ethical Debate

Microsoft’s VALL-E 2 reaches human parity in text-to-speech synthesis, raising ethical concerns about potential misuse.

Key breakthrough: VALL-E 2, Microsoft’s latest text-to-speech (TTS) generator, has achieved “human parity” for the first time, producing speech indistinguishable from a human voice:

  • The model needs only a few seconds of reference audio to clone a voice, and its output matched or exceeded real human recordings on standard benchmark speech datasets.
  • VALL-E 2 consistently generates high-quality, natural-sounding speech, even for phrases that have traditionally tripped up TTS systems, thanks to two new decoding techniques: “Repetition Aware Sampling” and “Grouped Code Modeling” (see the sketch below).

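To make the “Repetition Aware Sampling” claim concrete, here is a minimal, hypothetical sketch of what such a decoding step could look like, assuming the mechanism is roughly “draw a token with nucleus sampling, then fall back to the full distribution when that token has been looping in the recent decoding history.” The `top_p`, `window`, and `threshold` values below are illustrative placeholders, not Microsoft’s published hyperparameters.

```python
import numpy as np

def nucleus_sample(probs: np.ndarray, top_p: float, rng: np.random.Generator) -> int:
    """Sample a token id from the smallest set of tokens whose cumulative probability exceeds top_p."""
    order = np.argsort(probs)[::-1]               # token ids, most probable first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, top_p) + 1
    kept = order[:cutoff]
    kept_probs = probs[kept] / probs[kept].sum()  # renormalize over the kept set
    return int(rng.choice(kept, p=kept_probs))

def repetition_aware_sample(probs: np.ndarray,
                            history: list[int],
                            top_p: float = 0.9,      # assumed value
                            window: int = 10,        # assumed history window
                            threshold: float = 0.1,  # assumed repetition-ratio cutoff
                            rng: np.random.Generator | None = None) -> int:
    """Sketch: prefer nucleus sampling, but re-draw from the full distribution
    when the sampled token repeats too often in the recent decoding history."""
    rng = rng or np.random.default_rng()
    token = nucleus_sample(probs, top_p, rng)
    recent = history[-window:]
    repetition_ratio = recent.count(token) / max(len(recent), 1)
    if repetition_ratio > threshold:
        # The token appears to be looping: sample from the full distribution to break out.
        token = int(rng.choice(len(probs), p=probs))
    return token
```

The appeal of a fallback like this is that it can break the degenerate repetition loops autoregressive TTS models tend to fall into on difficult phrases, without giving up the quality of restricted sampling on ordinary text.
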
Potential applications and risks: While Microsoft sees beneficial uses for VALL-E 2, such as assisting individuals with speech impairments, the company is keeping the model research-only for now due to risks of misuse:

  • The researchers acknowledge VALL-E 2 could potentially be used maliciously for voice spoofing, impersonation, or generating misleading content.
  • The researchers consider a public release at this stage irresponsible and dangerous, given how convincing the generated speech is.
  • OpenAI has placed similar restrictions on some of its voice tech due to the realism of AI-generated content.

Analyzing deeper: VALL-E 2 represents a major leap forward in speech synthesis technology, but also highlights the complex ethical challenges as AI models become increasingly sophisticated:

  • It remains to be seen whether pressure from the intensifying AI race will lead to premature public releases of powerful voice and language models before safeguards are in place.
  • Drawing the line between beneficial and harmful applications of AI will only get more difficult as the technology advances.
  • Robust verification methods, like OpenAI’s deepfake detector, may be critical to combat the spread of misleading synthetic media as these AI models improve.
