Empowering local AI builders with Hybrid Inference
> **Tip:** Hybrid Inference is an experimental feature. Feedback can be provided here.
Hybrid Inference offers a fast and simple way to offload local generation requirements.
- 🚀 Reduced Requirements: Access powerful models without expensive hardware.
- 💎 Without Compromise: Achieve the highest quality without sacrificing performance.
- 💰 Cost Effective: It's free! 🤑
- 🎯 Diverse Use Cases: Fully compatible with Diffusers 🧨 and the wider community.
- 🔧 Developer-Friendly: Simple requests, fast responses.
- VAE Decode 🖼️: Quickly decode latent representations into high-quality images without compromising performance or workflow speed.
- VAE Encode 🔢: Efficiently encode images into latent representations for generation and training.
- Text Encoders 📃 (coming soon): Compute text embeddings for your prompts quickly and accurately, ensuring a smooth and high-quality workflow.
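To make the request/response flow concrete, here is a minimal sketch of a client for the VAE Decode task. The endpoint URL, payload schema, and response format below are illustrative assumptions, not the real API contract; see the API Reference for the actual parameters.

```python
# Minimal sketch of a Hybrid Inference-style remote VAE decode client.
# NOTE: the endpoint URL and the JSON payload schema here are assumptions
# for illustration only -- consult the API Reference for the real contract.
import base64
import json
import struct
import urllib.request

# Hypothetical endpoint; replace with the real one from the API Reference.
DECODE_ENDPOINT = "https://example-endpoint.huggingface.cloud/decode"

def serialize_latents(latents, shape):
    """Pack a flat list of float32 latent values into a JSON payload."""
    raw = struct.pack(f"<{len(latents)}f", *latents)
    return json.dumps({
        "tensor": base64.b64encode(raw).decode("ascii"),
        "shape": list(shape),
        "dtype": "float32",
    })

def remote_decode(latents, shape, endpoint=DECODE_ENDPOINT, timeout=30):
    """POST latents to the remote decode endpoint, return raw image bytes."""
    req = urllib.request.Request(
        endpoint,
        data=serialize_latents(latents, shape).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read()

# A 4-channel 64x64 latent corresponds to a 512x512 image for a VAE with
# 8x spatial upscaling (typical of Stable Diffusion-family models).
payload = serialize_latents([0.0] * (4 * 64 * 64), (1, 4, 64, 64))
print(json.loads(payload)["shape"])  # [1, 4, 64, 64]
```

The offloading pattern is the same for VAE Encode, with the image sent up and the latent representation returned; only the serialized tensor and endpoint differ.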
- SD.Next: All-in-one UI with built-in support for Hybrid Inference.
- ComfyUI-HFRemoteVae: ComfyUI node for Hybrid Inference.
- March 10, 2025: Added VAE encode
- March 2, 2025: Initial release with VAE decode
The documentation is organized into three sections:
- VAE Decode: Learn the basics of how to use VAE Decode with Hybrid Inference.
- VAE Encode: Learn the basics of how to use VAE Encode with Hybrid Inference.
- API Reference: Dive into task-specific settings and parameters.