
# Hybrid Inference

**Empowering local AI builders with Hybrid Inference**

> [!TIP]
> Hybrid Inference is an experimental feature. Feedback can be provided here.

## Why use Hybrid Inference?

Hybrid Inference offers a fast and simple way to offload resource-intensive generation steps from local hardware to remote endpoints.

- 🚀 **Reduced Requirements:** Access powerful models without expensive hardware.
- 💎 **Without Compromise:** Achieve the highest quality without sacrificing performance.
- 💰 **Cost Effective:** It's free! 🤑
- 🎯 **Diverse Use Cases:** Fully compatible with Diffusers 🧨 and the wider community.
- 🔧 **Developer-Friendly:** Simple requests, fast responses.

## Available Models

- **VAE Decode 🖼️:** Quickly decode latent representations into high-quality images without compromising performance or workflow speed.
- **VAE Encode 🔢:** Efficiently encode images into latent representations for generation and training.
- **Text Encoders 📃 (coming soon):** Compute text embeddings for your prompts quickly and accurately, ensuring a smooth and high-quality workflow.

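For intuition about what the VAE endpoints exchange: Stable Diffusion-family VAEs map images to latents with 4 channels and an 8× spatial downsampling factor, so the latent tensor is far smaller than the image it decodes to. A minimal sketch of that shape relationship (the helper name and the SD-style defaults are illustrative assumptions, not part of the Hybrid Inference API):

```python
def latent_shape(batch: int, height: int, width: int,
                 latent_channels: int = 4, downsample: int = 8) -> tuple:
    """Shape of the latent tensor a VAE encode produces for an image of
    (height, width), assuming a Stable Diffusion-style VAE with 4 latent
    channels and 8x spatial downsampling. Hypothetical helper for intuition."""
    if height % downsample or width % downsample:
        raise ValueError("height and width must be multiples of the downsample factor")
    return (batch, latent_channels, height // downsample, width // downsample)

# A 512x512 image encodes to a (1, 4, 64, 64) latent; VAE Decode sends this
# small tensor to the remote endpoint and receives the full image back.
print(latent_shape(1, 512, 512))  # → (1, 4, 64, 64)
```

Because only latents travel over the wire, decoding a large image remotely costs little bandwidth compared to generating it locally on constrained hardware.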
## Integrations

## Changelog

- **March 10, 2025:** Added VAE encode.
- **March 2, 2025:** Initial release with VAE decoding.

## Contents

The documentation is organized into three sections:

- **VAE Decode:** Learn the basics of how to use VAE Decode with Hybrid Inference.
- **VAE Encode:** Learn the basics of how to use VAE Encode with Hybrid Inference.
- **API Reference:** Dive into task-specific settings and parameters.