AI-Powered Headphones Let You Focus on One Voice in a Noisy Crowd

Driving The News

University of Washington researchers have developed an AI system called “Target Speech Hearing” (TSH) that allows headphone users to selectively listen to a specific person in a noisy environment. By looking at the desired speaker for 3-5 seconds, the system learns their vocal patterns and cancels out all other sounds, playing back only the enrolled speaker’s voice in real time.

Why It Matters

This technology has the potential to significantly improve the listening experience for headphone users in noisy environments, such as crowded spaces or busy workplaces. It could also have applications in hearing aids, allowing users to focus on specific speakers without the distraction of background noise.

How It Works

  1. The user, wearing off-the-shelf headphones with microphones, taps a button while looking at the desired speaker for 3-5 seconds.
  2. The speaker’s voice reaches the microphones on both sides of the headset simultaneously (with a 16-degree margin of error).
  3. The headphones send the signal to an on-board embedded computer, where machine learning software learns the speaker’s vocal patterns.
  4. The system latches onto the enrolled speaker’s voice and continues to play it back to the listener, even as the listener and speaker move around.
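The enrollment-then-extraction loop above can be sketched in a few lines. This is a hypothetical illustration, not the researchers' actual code: the toy `embed` function and the cosine-similarity threshold stand in for the learned speaker-embedding network and neural separation model that TSH actually uses.

```python
# Hypothetical sketch of the TSH pipeline: enroll a target speaker from a
# short look window, then keep only audio frames matching that speaker.
# embed(), similarity(), and the 0.9 threshold are illustrative placeholders.
import math

def embed(frame):
    """Toy speaker embedding: normalize a feature frame to unit length."""
    norm = math.sqrt(sum(x * x for x in frame)) or 1.0
    return [x / norm for x in frame]

def similarity(a, b):
    """Cosine similarity between two unit-length embeddings."""
    return sum(x * y for x, y in zip(a, b))

def enroll(frames):
    """Steps 1-3: average embeddings over the 3-5 second look window."""
    embeddings = [embed(f) for f in frames]
    mean = [sum(col) / len(embeddings) for col in zip(*embeddings)]
    return embed(mean)  # re-normalize the averaged profile

def extract(profile, mixture_frames, threshold=0.9):
    """Step 4: pass through only frames that match the enrolled speaker."""
    return [f for f in mixture_frames
            if similarity(embed(f), profile) >= threshold]

# Usage: enroll on the target's frames, then filter a noisy mixture.
target_frames = [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1]]
noise_frames = [[0.0, 1.0, 0.9], [0.1, 0.8, 1.0]]
profile = enroll(target_frames)
kept = extract(profile, target_frames + noise_frames)
```

In the real system this matching runs on an on-board embedded computer over streaming audio, so the separation model must be small and causal enough to keep latency imperceptible.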

The Big Picture

This research builds on the team’s previous work on “semantic hearing,” which allowed users to select specific sound classes (e.g., birds or voices) to hear while canceling other sounds. The TSH system currently has limitations, such as enrolling only one speaker at a time and requiring a clear line of sight to the speaker during enrollment. However, the team is working on expanding the system to earbuds and hearing aids in the future.

Most Interesting Idea

The most intriguing aspect of this research is the potential to change how we perceive and interact with our auditory environment. By giving users the ability to selectively focus on specific speakers or sounds, this technology could transform the way we communicate and engage with others in noisy settings. It also highlights the growing role of AI in modifying and enhancing our sensory experiences based on individual preferences.

Recent News

MIT research evaluates driver behavior to advance autonomous driving tech

Researchers find that driver trust and behavior patterns are more critical to autonomous vehicle adoption than technical capabilities, with acceptance levels showing their first uptick in years.

Inside Microsoft’s plan to ensure every business has an AI Agent

Microsoft's shift toward AI assistants marks its largest interface change since the introduction of Windows, as the company integrates automated helpers across its entire software ecosystem.

Chinese AI model LLaVA-o1 rivals OpenAI’s o1 in new study

New open-source AI model from China matches Silicon Valley's best at visual reasoning tasks while making its code freely available to researchers.