
Synthesia has unveiled Express-2, its latest AI avatar technology that creates hyperrealistic digital clones with more natural movements, expressive voices, and improved accent preservation. The advancement marks a significant leap toward AI avatars that are nearly indistinguishable from real humans, with the company planning to introduce interactive capabilities that will allow these digital clones to engage in real-time conversations.

What you should know: The new Express-2 model represents a dramatic improvement over previous AI avatar technology, addressing many of the uncanny valley issues that plagued earlier versions.

  • Previous avatars suffered from jerky movements, accent slippage, and mismatched facial expressions that didn’t align with vocal emotions.
  • Express-2 uses a multi-billion parameter rendering model—significantly more powerful than its predecessor’s few hundred million parameters.
  • The system now automatically learns emotional associations without needing to see specific expressions during training, thanks to training on “much more diverse data and much larger data sets.”

How it works: Express-2 employs four distinct AI models working in concert to create lifelike avatars.

  • A voice cloning model preserves the speaker’s accent, intonation, and expressiveness—avoiding the generic American-sounding voices produced by other systems.
  • A gesture generation model creates hand movements and body language to accompany speech.
  • An evaluation model assesses multiple versions of generated motion and selects the one best aligned with the input audio.
  • A rendering model, the multi-billion parameter system noted above, combines the selected voice and motion into the final video.

In plain English: Think of it like a movie production team where each specialist handles one job—one person captures your voice perfectly, another choreographs your hand gestures, a director picks the best takes, and an editor stitches it all together into a seamless performance.
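The pipeline described above can be sketched in code. This is a minimal, illustrative mock-up of the four-stage flow (voice, gesture candidates, evaluator-driven best-of-N selection, final assembly); every function name and scoring rule here is a placeholder assumption, not Synthesia's actual API or models.

```python
import random

def clone_voice(script: str) -> list[float]:
    # Stand-in for the voice cloning model: produces a deterministic
    # fake "audio feature track" for a given script.
    rng = random.Random(len(script))
    return [rng.random() for _ in range(10)]

def generate_gestures(audio: list[float], seed: int) -> list[float]:
    # Stand-in for the gesture model: one candidate motion track per seed,
    # loosely following the audio track.
    rng = random.Random(seed)
    return [a + rng.uniform(-0.2, 0.2) for a in audio]

def alignment_score(motion: list[float], audio: list[float]) -> float:
    # Stand-in for the evaluation model: higher score means the motion
    # tracks the audio more closely (negative squared error).
    return -sum((m - a) ** 2 for m, a in zip(motion, audio))

def render_avatar(script: str, n_candidates: int = 5) -> list[float]:
    # Best-of-N selection: generate several motion candidates, keep the one
    # the evaluator ranks highest, and hand it to the (mocked) renderer.
    audio = clone_voice(script)
    candidates = [generate_gestures(audio, seed) for seed in range(n_candidates)]
    return max(candidates, key=lambda m: alignment_score(m, audio))
```

The design point the sketch captures is that the evaluator never generates anything itself; it only ranks outputs from the gesture model, which is why the article describes the models as working "in concert" rather than as a single network.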

The creation process: Recording an avatar now takes just one hour, compared to the lengthy calibration sessions required previously.

  • Users read a script while gesturing naturally, with the system capturing all necessary footage in a single session.
  • The process has been “significantly streamlined” from earlier versions that required reading scripts in different emotional states and mouthing specific sounds.
  • Within weeks, users receive their completed avatar capable of delivering any script with their voice and mannerisms.

Why this matters: The technology is rapidly approaching a threshold where distinguishing AI-generated content from reality becomes increasingly difficult.

  • Anna Eiserbeck, a postdoctoral psychology researcher at Humboldt University of Berlin, said she wasn’t sure she would have been able to identify the reporter’s avatar as a deepfake at first glance.
  • The improvements suggest we’re approaching a future where AI avatars could be used for both legitimate business purposes and potentially deceptive applications.
  • Corporate clients can now create “slicker presenters” for financial results, internal communications, and training videos.

What’s coming next: Synthesia is developing interactive avatars that can engage in real-time conversations, essentially combining ChatGPT-like capabilities with realistic human faces.

  • Future avatars will “understand” conversations and respond in real time, allowing users to ask questions or request clarification.
  • Educational content could be customized to individual knowledge levels—for example, adjusting a biology video depending on whether the viewer has a university degree or a high-school-level understanding.
  • The company recently partnered with Google to integrate the Veo 3 video model, enabling avatars to appear in detailed, changeable environments.

Competitive landscape: Multiple companies are racing to perfect AI avatar technology for various applications.

  • Competitors include Yuzu Labs, Creatify, Arcdads, and Vidyard, all targeting business video creation.
  • In China, AI-generated clones of livestreamers have become popular because they can “sell products 24/7 without getting tired or needing to be paid.”
  • Synthesia remains “laser focused” on corporate applications but isn’t ruling out expansion into entertainment or education sectors.

Potential concerns: Experts warn that hyperrealistic interactive avatars could create new forms of AI addiction and relationship dependency.

  • “If you make the system too realistic, people might start forming certain kinds of relationships with these characters,” says Pat Pataranutaporn from MIT Media Lab.
  • Björn Schuller from Imperial College London predicts future avatars will be “perfectly optimized to adjust their projected levels of emotion and charisma” to keep audiences engaged.
  • The technology raises questions about authenticity and the potential for misuse, as avatars become sophisticated enough to fool most viewers.

What they’re saying: Industry experts acknowledge both the impressive technical achievements and the deeper implications.

  • “There’s a lot to consider to get right; you have to have the right micro gesture, the right intonation, the sound of voice and the right word,” explains Schuller. “I don’t want an AI [avatar] to frown at the wrong moment—that could send an entirely different message.”
  • Youssef Alami Mejjati, Synthesia’s head of research, emphasizes the educational potential: “We really want to make the best learning experience, and that means through video that’s entertaining but also personalized and interactive.”
