PyTorch releases ‘torchao’ to boost AI model performance

PyTorch introduces torchao: The project has officially launched torchao, a native library designed to make models faster and smaller through low-bit data types, quantization, and sparsity, offering significant improvements for both inference and training workflows.

Key features and performance gains:

  • torchao provides a toolkit of optimization techniques written primarily in PyTorch code, making it accessible and easy to implement.
  • The library has been benchmarked on popular generative AI models, including Llama 3 and diffusion models, with minimal accuracy loss.

Impressive results for Llama 3:

  • 97% speedup for Llama 3 8B inference using autoquant with int4 weight-only quantization and HQQ
  • 73% peak VRAM reduction for Llama 3.1 8B inference at 128K context length with a quantized KV cache
  • 50% speedup for Llama 3 70B pretraining using float8 training on H100
  • 30% peak VRAM reduction for Llama 3 8B using 4-bit quantized optimizers
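To see why a quantized KV cache matters at 128K context, here is a back-of-envelope size calculation. The architecture numbers (32 layers, 8 KV heads under grouped-query attention, head dimension 128) are commonly cited figures for Llama-3-class 8B models, and the 4-bit cache width is an illustrative assumption; the announcement does not specify either.

```python
# Back-of-envelope KV-cache size for a Llama-3-class 8B model.
# NOTE: the architecture numbers and the 4-bit cache width are
# illustrative assumptions, not figures from the torchao announcement.
def kv_cache_bytes(context_len, n_layers=32, n_kv_heads=8,
                   head_dim=128, bytes_per_elem=2):
    # Factor of 2 accounts for storing both keys and values.
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem

fp16_cache = kv_cache_bytes(128 * 1024)   # fp16: 2 bytes per element
int4_cache = fp16_cache // 4              # 4-bit: a quarter the bytes
print(f"fp16 KV cache @ 128K ctx: {fp16_cache / 2**30:.0f} GiB")
print(f"4-bit KV cache @ 128K ctx: {int4_cache / 2**30:.0f} GiB")
```

Under these assumptions the fp16 cache alone is 16 GiB at 128K tokens, which makes clear why shrinking it dominates peak-VRAM savings at long context.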

Diffusion model inference improvements:

  • 53% speedup for flux1.dev inference on H100 using float8 dynamic quantization with float8 row-wise scaling
  • 50% reduction in model VRAM for CogVideoX using int8 dynamic quantization
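The "row-wise scaling" mentioned above assigns each row of a tensor its own scale factor, so a large-magnitude outlier row cannot crush the precision of small-magnitude rows the way a single per-tensor scale can. The sketch below illustrates that effect in plain Python, with int8 standing in for float8 for simplicity; it is not torchao's float8 implementation.

```python
# Why row-wise scaling helps: one scale per row adapts to each row's
# magnitude, while a single per-tensor scale lets an outlier row crush
# the precision of small-magnitude rows. Int8 stands in for float8 here;
# this is an illustration, not torchao's float8 implementation.
def quant_dequant(mat, per_row=True):
    if per_row:
        scales = [(max(abs(v) for v in row) / 127.0) or 1.0 for row in mat]
    else:
        s = max(abs(v) for row in mat for v in row) / 127.0
        scales = [s] * len(mat)
    q = [[round(v / s) for v in row] for row, s in zip(mat, scales)]
    return [[v * s for v in row] for row, s in zip(q, scales)]

mat = [[0.01, -0.02, 0.015],   # small-magnitude row
       [8.0, -6.5, 7.2]]       # large "outlier" row

def total_err(deq):
    return sum(abs(a - b) for row, drow in zip(mat, deq)
               for a, b in zip(row, drow))

print("per-row error:   ", total_err(quant_dequant(mat, per_row=True)))
print("per-tensor error:", total_err(quant_dequant(mat, per_row=False)))
```

With a shared scale, the small row quantizes to all zeros; with per-row scales, each row keeps its full int8 resolution.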

Inference optimization techniques:

  • torchao offers flexible quantization options for both memory-bound and compute-bound models.
  • The library provides a simple API for applying quantization to PyTorch models containing nn.Linear layers.
  • An “autoquant” feature automatically selects the best quantization method for each layer in a model.
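The core idea these quantization options share can be shown without the library. The following is a minimal, library-free sketch of symmetric int8 weight-only quantization; real implementations such as torchao's operate per-channel on tensors, so treat this as a toy illustration of the concept rather than the library's API.

```python
# Minimal sketch of symmetric int8 weight-only quantization -- the idea
# behind weight-only modes in libraries like torchao, in plain Python.
# Real implementations operate per-channel on tensors; this is a toy.
def quantize_int8(weights):
    """Map floats to the int8 range [-127, 127] with one scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize_int8(q, scale):
    return [v * scale for v in q]

w = [0.42, -1.3, 0.07, 0.99]          # toy fp32 weights
q, s = quantize_int8(w)
w_hat = dequantize_int8(q, s)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print("quantized:", q)
print(f"max abs error: {max_err:.5f} (bounded by scale/2 = {s / 2:.5f})")
```

The round-trip error is bounded by half a quantization step, which is why weight-only int8 typically costs little accuracy while quartering weight storage versus fp32.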

Training optimizations:

  • Easy-to-use workflows for reducing precision in training compute and distributed communications.
  • One-liner function to convert training computations to float8 precision.
  • Prototype support for 8-bit and 4-bit optimizers as drop-in replacements for AdamW.
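Low-bit optimizers save memory because Adam-style optimizers keep two full-precision moment buffers per parameter; storing those buffers in 8 (or 4) bits cuts optimizer state roughly 4x (or 8x). The block-wise scaling sketch below illustrates the general idea in plain Python; it is a simplified stand-in, not torchao's optimizer implementation.

```python
# Sketch of block-wise 8-bit quantization of an optimizer moment buffer.
# One scale per block keeps error proportional to local magnitude.
# Simplified illustration -- not torchao's low-bit optimizer code.
def quantize_block(x, block=4):
    """Quantize a flat list to int8 values with one scale per block."""
    out, scales = [], []
    for i in range(0, len(x), block):
        chunk = x[i:i + block]
        scale = (max(abs(v) for v in chunk) / 127.0) or 1.0  # guard all-zero
        scales.append(scale)
        out.extend(round(v / scale) for v in chunk)
    return out, scales

def dequantize_block(q, scales, block=4):
    return [q[i] * scales[i // block] for i in range(len(q))]

m = [0.001, -0.02, 0.5, 0.013, 3.0, -2.2, 0.7, 0.0]  # toy moment buffer
q, s = quantize_block(m)
m_hat = dequantize_block(q, s)
print("max round-trip error:", max(abs(a - b) for a, b in zip(m, m_hat)))
```

Because each block carries its own scale, tiny moment values sitting next to large ones are not flattened to zero, which is the main failure mode a single global scale would have.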

Integrations and collaborations:

  • torchao has been integrated with popular projects such as Hugging Face Transformers, diffusers, HQQ, torchtune, torchchat, and SGLang.
  • The library benefits from collaborations with leading open-source contributors in the field.

Future developments and community engagement: The torchao team is exploring further optimizations, including sub-4-bit precision, performant kernels for high-throughput inference, and support for additional hardware backends. The project welcomes contributions and engagement from the community.

Analyzing deeper: Implications for AI development: The introduction of torchao represents a significant step forward in making advanced AI models more accessible and efficient. By dramatically reducing computational requirements and memory usage, this library has the potential to democratize access to state-of-the-art AI technologies. However, as models become more efficient and widespread, it will be crucial to monitor and address potential ethical concerns related to the increased ease of deploying powerful AI systems.

Source: “PyTorch Native Architecture Optimization: torchao”
