How Meta’s Segment Anything model is advancing the future of digital fashion

In a recent post on Meta's blog, digital artist Josephine Miller demonstrated how Meta's Segment Anything 2 model enables real-time virtual fashion transformations.

The innovation: Using Meta’s Segment Anything 2 (SAM 2) model and other AI tools, London-based XR creative designer Josephine Miller creates videos where clothing appears to change colors and patterns instantly.

  • Miller showcased the technology in an Instagram post featuring a gold evening gown that transforms through various designs and colors
  • The project aims to demonstrate how digital fashion can reduce reliance on fast fashion while promoting sustainability
  • The process combines ComfyUI, an open-source node-based interface for running Stable Diffusion models, with Meta's SAM 2 for precise object segmentation in video (a minimal sketch of the video API follows this list)
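
To make the segmentation step concrete, here is a minimal sketch of SAM 2's video predictor API as documented in the facebookresearch/sam2 repository. This is not Miller's actual pipeline: the checkpoint and config paths, the clip directory, and the click coordinates are illustrative assumptions.

```python
# Minimal SAM 2 video segmentation sketch (per the facebookresearch/sam2 README).
# Assumed: a local SAM 2.1 checkpoint/config and a folder of JPEG video frames.
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

checkpoint = "./checkpoints/sam2.1_hiera_large.pt"  # assumed local path
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"    # assumed config name
predictor = build_sam2_video_predictor(model_cfg, checkpoint)  # CUDA by default

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    state = predictor.init_state(video_path="./gown_clip")  # directory of JPEG frames

    # A single positive click on the garment in frame 0 is enough to prompt it
    _, obj_ids, masks = predictor.add_new_points_or_box(
        inference_state=state,
        frame_idx=0,
        obj_id=1,
        points=np.array([[480, 600]], dtype=np.float32),  # (x, y) pixel on the gown
        labels=np.array([1], dtype=np.int32),             # 1 = foreground click
    )

    # Propagate the prompt so the garment mask tracks through every frame;
    # each per-frame mask can then gate a diffusion-based restyling pass
    for frame_idx, obj_ids, masks in predictor.propagate_in_video(state):
        garment_mask = (masks[0] > 0.0).cpu().numpy()  # binary mask for the gown
```

Per-frame masks like `garment_mask` are what a ComfyUI graph would consume to confine the Stable Diffusion edit to the clothing while leaving skin, hair, and background untouched.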

Technical implementation: Miller’s workflow requires significant computing power and expertise in multiple AI technologies to achieve seamless virtual clothing transformations.

  • She built a custom computer with an RTX 4090 GPU to handle the processing demands
  • The workflow integrates custom ComfyUI node graphs with SAM 2's segmentation capabilities
  • SAM 2 builds on its predecessor by extending segmentation from still images to video; the single-image path is sketched below
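
For comparison with the video workflow above, the single-image path is a one-shot predict call. Again a hedged sketch: the paths and the box prompt are placeholder assumptions, and the zero-filled array stands in for a real photo.

```python
# Minimal SAM 2 single-image segmentation sketch (SAM2ImagePredictor).
import numpy as np
import torch
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

checkpoint = "./checkpoints/sam2.1_hiera_large.pt"  # assumed local path
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"    # assumed config name
predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

image = np.zeros((1024, 1024, 3), dtype=np.uint8)   # stand-in for an RGB photo (H, W, C)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    predictor.set_image(image)
    # A rough bounding box around the garment is a valid prompt, just like clicks
    masks, scores, _ = predictor.predict(box=np.array([200, 150, 820, 1000]))
```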

Evolution of the technology: Meta’s SAM technology has significantly improved the efficiency and accuracy of object segmentation in digital content.

  • Prior to SAM’s 2023 release, object segmentation was time-consuming and often imprecise
  • The technology now supports both interactive (prompt-driven) and fully automatic object segmentation; a sketch of the automatic mode follows this list
  • Miller reports that SAM has dramatically improved her output quality and workflow efficiency
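
The automatic mode mentioned above needs no prompts at all: SAM2AutomaticMaskGenerator proposes masks for everything it finds in a frame. A minimal sketch, under the same assumed checkpoint and config paths:

```python
# Minimal automatic (prompt-free) segmentation sketch with SAM 2.
import numpy as np
from sam2.build_sam import build_sam2
from sam2.automatic_mask_generator import SAM2AutomaticMaskGenerator

checkpoint = "./checkpoints/sam2.1_hiera_large.pt"  # assumed local path
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"    # assumed config name
mask_generator = SAM2AutomaticMaskGenerator(build_sam2(model_cfg, checkpoint))

image = np.zeros((1024, 1024, 3), dtype=np.uint8)   # stand-in for an RGB photo
masks = mask_generator.generate(image)              # one dict per detected region
for m in masks:
    print(m["area"], m["bbox"])  # each dict also holds a binary "segmentation" mask
```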

Creator’s journey: Miller’s expertise in AI-powered creative tools developed through self-directed learning during the COVID-19 pandemic.

  • She began experimenting with AI during lockdown in 2020
  • Her learning process started with text-to-image generation before advancing to video models
  • After two months of experimentation, she developed her current workflow
  • Her expertise has led to collaborations with global brands in augmented, virtual, and mixed reality

Future implications: While current hardware requirements limit widespread adoption, the technology shows promise for democratizing creative digital fashion applications.

  • The process currently requires high-end hardware like an RTX 4090 GPU
  • Miller envisions broader adoption of open-source models like SAM for creative applications
  • The technology could help reshape how people interact with fashion in digital spaces while promoting sustainable consumption practices

Looking ahead: As hardware capabilities improve and AI tools become more accessible, technologies like SAM 2 may bridge the gap between traditional fashion consumption and digital expression, potentially catalyzing a shift toward more sustainable fashion practices in both virtual and physical spaces.

Image credit: Meta
Source: "How digital artist Josephine Miller uses Meta Segment Anything to help design the future of fashion" (Meta blog)
