Stability AI just launched Stable Diffusion 3.5 in a big move for open-source AI art

A new era for text-to-image AI: Stability AI has launched Stable Diffusion 3.5, a significant update to its open-source text-to-image generative AI technology, aiming to reclaim leadership in the competitive field.

  • The release introduces three new model variants: Stable Diffusion 3.5 Large (8 billion parameters), Large Turbo (a faster version), and Medium (2.6 billion parameters for edge computing).
  • All models are available under the Stability AI Community License, allowing free non-commercial use and commercial use for entities with annual revenue under $1 million.
  • Enterprise licenses are available for larger deployments, with models accessible via Stability AI’s API and Hugging Face (a minimal loading sketch follows this list).
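
For readers who want to try the new weights, here is a minimal sketch of pulling the Large model from Hugging Face, assuming the diffusers library’s StableDiffusion3Pipeline (the class used for Stable Diffusion 3) also serves the 3.5 checkpoints; the repo ID, precision, and sampling settings are illustrative assumptions rather than values confirmed in the article, and the weights are gated behind acceptance of the Community License.

# Minimal sketch: generating an image with Stable Diffusion 3.5 Large via the
# Hugging Face diffusers library. Repo ID, dtype, and sampling settings are
# assumptions for illustration; consult the model card for exact guidance.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",  # assumed Hugging Face repo ID
    torch_dtype=torch.bfloat16,                # reduced precision so the 8B model fits on one GPU
).to("cuda")

image = pipe(
    prompt="a watercolor painting of a lighthouse at dawn",
    num_inference_steps=28,   # illustrative step count for the full Large model
    guidance_scale=3.5,       # illustrative classifier-free guidance value
).images[0]
image.save("lighthouse.png")

The Large Turbo variant would load the same way under its own repo ID and is intended to generate in far fewer sampling steps, while the Medium model trades parameters for the ability to run on consumer and edge hardware.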

Addressing previous shortcomings: Stability AI’s CTO, Hanno Basse, acknowledged that the June release of Stable Diffusion 3 Medium fell short of expectations, prompting a thorough analysis and improvements for the 3.5 update.

  • The company identified suboptimal model and dataset choices in the previous version, particularly for the smaller-sized Medium model.
  • Stability AI innovated on architecture and training protocols to achieve a better balance between model size and output quality.

Technical innovations: Stable Diffusion 3.5 incorporates several novel techniques to enhance performance and quality.

  • Query-Key Normalization has been integrated into the transformer blocks, making fine-tuning and further development by end users easier (a minimal sketch of the technique follows this list).
  • The Multimodal Diffusion Transformer (MMDiT-X) architecture has been enhanced, particularly for the medium model, improving image quality and multi-resolution generation capabilities.
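
To make the Query-Key Normalization point concrete, here is a minimal sketch of the technique in a generic attention block, assuming the common formulation that applies RMSNorm to queries and keys before the attention product; it illustrates the general idea, not Stability AI’s exact MMDiT-X implementation.

# Minimal sketch of Query-Key Normalization inside a multi-head attention block.
# Assumes PyTorch 2.4+ for nn.RMSNorm; this is a generic illustration of the
# technique, not Stability AI's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QKNormAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Normalizing Q and K per head keeps attention logits in a stable range,
        # which is the property that makes downstream fine-tuning less brittle.
        self.q_norm = nn.RMSNorm(self.head_dim)
        self.k_norm = nn.RMSNorm(self.head_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (batch, heads, tokens, head_dim).
        q, k, v = (t.view(b, n, self.num_heads, self.head_dim).transpose(1, 2) for t in (q, k, v))
        q, k = self.q_norm(q), self.k_norm(k)   # the QK-Norm step
        out = F.scaled_dot_product_attention(q, k, v)
        out = out.transpose(1, 2).reshape(b, n, d)
        return self.proj(out)

Bounding the attention logits this way is widely used to prevent the activation blow-ups that can destabilize training and fine-tuning, which is consistent with the article’s framing of the change as a benefit for end users who customize the models.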

Improved prompt adherence: A key feature of Stable Diffusion 3.5 Large is its superior prompt adherence compared to market competitors.

  • Better dataset curation, captioning, and innovative training protocols contribute to the model’s improved ability to accurately interpret and render user prompts.

Future developments: Stability AI plans to release ControlNet capabilities for Stable Diffusion 3.5, building on technology introduced with the SDXL 1.0 release in July 2023.

  • ControlNets will offer more control for professional use cases, such as upscaling images while preserving overall colors or generating images that follow specific depth patterns (a hedged usage sketch follows this list).
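
Since these ControlNets are announced rather than shipped, the sketch below only shows how a depth-guided generation might be wired up if the release follows the SD3 ControlNet pattern already present in diffusers; the ControlNet checkpoint name is hypothetical.

# Hedged sketch: how a future depth ControlNet for Stable Diffusion 3.5 might be
# used, following the SD3 ControlNet pattern in diffusers. The ControlNet repo ID
# below is hypothetical; Stability AI had announced, not released, these weights.
import torch
from diffusers import SD3ControlNetModel, StableDiffusion3ControlNetPipeline
from diffusers.utils import load_image

controlnet = SD3ControlNetModel.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large-controlnet-depth",  # hypothetical repo ID
    torch_dtype=torch.bfloat16,
)
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    controlnet=controlnet,
    torch_dtype=torch.bfloat16,
).to("cuda")

depth_map = load_image("room_depth.png")  # precomputed depth image that guides the layout
image = pipe(
    prompt="a sunlit reading nook with wooden shelves",
    control_image=depth_map,              # generation follows this depth pattern
    controlnet_conditioning_scale=0.8,    # how strongly the depth map constrains the output
).images[0]
image.save("reading_nook.png")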

Competitive landscape: The update comes as Stability AI faces increasing competition in the text-to-image generative AI space.

Analyzing deeper: While Stable Diffusion 3.5 represents a significant advancement in open-source text-to-image AI, its long-term impact on the competitive landscape remains to be seen. The focus on customization and prompt adherence addresses key user demands, but the rapid pace of innovation in this field means that maintaining a leadership position will require continuous improvement and adaptation to emerging technologies and user needs.

