OpenAI’s Sora is here, but the AI video revolution is still a long way off

The launch of OpenAI’s Sora video generation model represents a significant development in AI creativity tools, though early testing reveals both impressive capabilities and notable limitations.

Initial rollout and accessibility: OpenAI’s much-anticipated Sora video generator launched with a tiered subscription model and faced immediate access constraints due to overwhelming demand.

  • Account creation was suspended within hours of the launch due to high user interest
  • The “Plus” tier costs $20 monthly, offering 480p or 720p video generation up to 10 seconds
  • The “Pro” tier, priced at $200 monthly, provides access to 1080p quality and 20-second videos

Technical performance and capabilities: The platform demonstrates both promising features and significant limitations in its current iteration.

  • Video generation is relatively quick, typically completing within 30 seconds even for 10-second clips
  • Simple prompts produce better results than complex scene descriptions
  • The system excels at rendering lighting, shadows, and mirror effects
  • Patterns on fur and textiles maintain consistency during movement
  • High detail levels persist even at lower resolutions

Current limitations and challenges: Despite its innovations, Sora exhibits several persistent technical issues that impact its practical utility.

  • Human motion appears unnatural and distorted
  • Complex prompts often result in visual anomalies like extra limbs
  • The Storyboard feature, designed for longer video creation, frequently produces poor results
  • Generated content shows obvious AI artifacts that limit commercial usability

Content moderation and safety measures: OpenAI has implemented various safeguards to prevent misuse and copyright infringement.

  • Political figures like Donald Trump and Kamala Harris are blocked
  • Celebrity names generate generic characters instead of lookalikes
  • Copyrighted characters and brand icons are effectively filtered
  • Violence-related content receives inconsistent moderation
  • Reference image uploads require rights verification and Pro-tier subscription for human subjects

Competitive landscape: Early comparisons suggest Sora outperforms some existing solutions while sharing limitations with others.

  • Produces more realistic results than Runway AI when using identical prompts
  • Matches Adobe Firefly Video Model’s quality but lacks commercial safety guarantees
  • Positions itself competitively in terms of photorealism and visual consistency

Market implications and current limitations: While innovative, Sora’s current iteration suggests a measured timeline for widespread adoption.

  • The high subscription cost for advanced features rivals that of traditional video production tools
  • Early applications may be limited to short-form content and simple scenes
  • Significant development is still needed before the platform can reliably produce professional-quality content for commercial use
  • The technology’s accessibility at lower price points may contribute to an increase in AI-generated content targeting specific audiences, such as children’s videos on YouTube
