Runway's new Reference feature is set to transform how creatives approach AI video generation. This powerful addition, recently showcased to Gen 48 competition participants, allows users to maintain consistency in characters, environments, and styling across multiple generations—addressing one of the most persistent challenges in AI video production workflows.
The most significant advancement here is Runway's approach to referencing existing assets. While other AI tools have implemented similar features, Runway's version stands apart with its two-stage workflow: the platform first generates still images that incorporate your reference materials, then lets you choose which version to animate. This reduces the frustration of repeatedly regenerating full videos when character consistency fails.
This matters tremendously for production teams who need predictable outputs. The current AI video ecosystem is plagued by inconsistency issues—a character's appearance changes between shots, clothing transforms unexpectedly, or environments shift. These problems make AI tools unusable for serious projects requiring continuity. By addressing this fundamental limitation, Runway is positioning itself as the professional's choice rather than just another AI toy.
What's notably missing from the discussion is how these tools will integrate with enterprise workflows. Large creative departments and agencies don't just need good tools—they need systems that function within their existing pipelines. Here's where the opportunity lies:
For businesses serious about incorporating AI video, a clear asset management strategy is essential. Libraries of approved reference images (executives, products, brand environments) will become as important as traditional brand guides. Companies should begin cataloging these assets now, even before their AI video strategies are fully formed.
Additionally, the BBC's Agatha Christie project demonstrates how