Diffusers documentation

Getting Started: VAE Decode with Hybrid Inference


VAE decode is an essential component of diffusion models, turning latent representations into images or videos.

Memory

These tables demonstrate the VRAM requirements for VAE decode with SD v1.5 and SDXL on different GPUs.

For the majority of these GPUs, the memory usage means that other models (text encoders, UNet/Transformer) must be offloaded, or that tiled decoding must be used, which increases the time taken and impacts quality. Tiled decoding can be enabled locally as shown in the sketch after the tables.

SD v1.5

| GPU | Resolution | Time (seconds) | Memory (%) | Tiled Time (seconds) | Tiled Memory (%) |
|---|---|---|---|---|---|
| NVIDIA GeForce RTX 4090 | 512x512 | 0.031 | 5.60% | 0.031 (0%) | 5.60% |
| NVIDIA GeForce RTX 4090 | 1024x1024 | 0.148 | 20.00% | 0.301 (+103%) | 5.60% |
| NVIDIA GeForce RTX 4080 | 512x512 | 0.05 | 8.40% | 0.050 (0%) | 8.40% |
| NVIDIA GeForce RTX 4080 | 1024x1024 | 0.224 | 30.00% | 0.356 (+59%) | 8.40% |
| NVIDIA GeForce RTX 4070 Ti | 512x512 | 0.066 | 11.30% | 0.066 (0%) | 11.30% |
| NVIDIA GeForce RTX 4070 Ti | 1024x1024 | 0.284 | 40.50% | 0.454 (+60%) | 11.40% |
| NVIDIA GeForce RTX 3090 | 512x512 | 0.062 | 5.20% | 0.062 (0%) | 5.20% |
| NVIDIA GeForce RTX 3090 | 1024x1024 | 0.253 | 18.50% | 0.464 (+83%) | 5.20% |
| NVIDIA GeForce RTX 3080 | 512x512 | 0.07 | 12.80% | 0.070 (0%) | 12.80% |
| NVIDIA GeForce RTX 3080 | 1024x1024 | 0.286 | 45.30% | 0.466 (+63%) | 12.90% |
| NVIDIA GeForce RTX 3070 | 512x512 | 0.102 | 15.90% | 0.102 (0%) | 15.90% |
| NVIDIA GeForce RTX 3070 | 1024x1024 | 0.421 | 56.30% | 0.746 (+77%) | 16.00% |
SDXL

| GPU | Resolution | Time (seconds) | Memory (%) | Tiled Time (seconds) | Tiled Memory (%) |
|---|---|---|---|---|---|
| NVIDIA GeForce RTX 4090 | 512x512 | 0.057 | 10.00% | 0.057 (0%) | 10.00% |
| NVIDIA GeForce RTX 4090 | 1024x1024 | 0.256 | 35.50% | 0.257 (+0.4%) | 35.50% |
| NVIDIA GeForce RTX 4080 | 512x512 | 0.092 | 15.00% | 0.092 (0%) | 15.00% |
| NVIDIA GeForce RTX 4080 | 1024x1024 | 0.406 | 53.30% | 0.406 (0%) | 53.30% |
| NVIDIA GeForce RTX 4070 Ti | 512x512 | 0.121 | 20.20% | 0.120 (-0.8%) | 20.20% |
| NVIDIA GeForce RTX 4070 Ti | 1024x1024 | 0.519 | 72.00% | 0.519 (0%) | 72.00% |
| NVIDIA GeForce RTX 3090 | 512x512 | 0.107 | 10.50% | 0.107 (0%) | 10.50% |
| NVIDIA GeForce RTX 3090 | 1024x1024 | 0.459 | 38.00% | 0.460 (+0.2%) | 38.00% |
| NVIDIA GeForce RTX 3080 | 512x512 | 0.121 | 25.60% | 0.121 (0%) | 25.60% |
| NVIDIA GeForce RTX 3080 | 1024x1024 | 0.524 | 93.00% | 0.524 (0%) | 93.00% |
| NVIDIA GeForce RTX 3070 | 512x512 | 0.183 | 31.80% | 0.183 (0%) | 31.80% |
| NVIDIA GeForce RTX 3070 | 1024x1024 | 0.794 | 96.40% | 0.794 (0%) | 96.40% |
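
For reference, the "Tiled" columns above correspond to a local memory optimization, not to Hybrid Inference. A minimal sketch of enabling it on a pipeline, assuming the standard SD v1.5 checkpoint:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Decode the latent in tiles to lower peak VRAM, at the cost of extra time
# (and potential quality impact, as noted above).
pipe.enable_vae_tiling()

image = pipe("a watercolor painting of a lighthouse at dawn").images[0]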

Available VAEs

| | Endpoint | Model |
|---|---|---|
| Stable Diffusion v1 | https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud | stabilityai/sd-vae-ft-mse |
| Stable Diffusion XL | https://x2dmsqunjd6k9prw.us-east-1.aws.endpoints.huggingface.cloud | madebyollin/sdxl-vae-fp16-fix |
| Flux | https://whhx50ex1aryqvw6.us-east-1.aws.endpoints.huggingface.cloud | black-forest-labs/FLUX.1-schnell |
| HunyuanVideo | https://o7ywnmrahorts457.us-east-1.aws.endpoints.huggingface.cloud | hunyuanvideo-community/HunyuanVideo |

Model support can be requested here.

Code

Install diffusers from main to run the code: pip install git+https://github.com/huggingface/diffusers@main

A helper method simplifies interacting with Hybrid Inference (torch is also imported here, since the examples below build tensors with it):

import torch

from diffusers.utils.remote_utils import remote_decode

Basic example

Here, we show how to use the remote VAE on random tensors.

image = remote_decode(
    endpoint="https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=torch.randn([1, 4, 64, 64], dtype=torch.float16),
    scaling_factor=0.18215,
)
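
The returned image can be handled like any other pipeline output, for example saved to disk (the filename is just illustrative):

image.save("decoded.jpg")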

Usage for Flux is slightly different. Flux latents are packed, so we also need to send the height and width (see the shape breakdown after the example).

image = remote_decode(
    endpoint="https://whhx50ex1aryqvw6.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=torch.randn([1, 4096, 64], dtype=torch.float16),
    height=1024,
    width=1024,
    scaling_factor=0.3611,
    shift_factor=0.1159,
)
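
As a rough sanity check on the [1, 4096, 64] shape, here is where the numbers come from, assuming Flux's 8x spatial VAE downsampling, 16 latent channels, and 2x2 patch packing:

height, width = 1024, 1024
latent_height, latent_width = height // 8, width // 8    # 128 x 128 latent grid
num_tokens = (latent_height // 2) * (latent_width // 2)  # 64 * 64 = 4096 packed tokens
features_per_token = 16 * 2 * 2                          # 16 channels * 2x2 patch = 64
print(num_tokens, features_per_token)                    # 4096 64 -> tensor [1, 4096, 64]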

Finally, an example for HunyuanVideo.

video = remote_decode(
    endpoint="https://o7ywnmrahorts457.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=torch.randn([1, 16, 3, 40, 64], dtype=torch.float16),
    output_type="mp4",
)
with open("video.mp4", "wb") as f:
    f.write(video)

Generation

Of course, we want to use the VAE with an actual pipeline to decode generated latents rather than random noise. The example below shows how to do it with SD v1.5.

import torch
from diffusers import StableDiffusionPipeline

# Load the pipeline without its VAE; decoding is handled by the remote endpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    variant="fp16",
    vae=None,
).to("cuda")

prompt = "Strawberry ice cream, in a stylish modern glass, coconut, splashing milk cream and honey, in a gradient purple background, fluid motion, dynamic movement, cinematic lighting, Mysterious"

# Return the latent instead of decoding locally.
latent = pipe(
    prompt=prompt,
    output_type="latent",
).images
image = remote_decode(
    endpoint="https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=latent,
    scaling_factor=0.18215,
)
image.save("test.jpg")
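
Because the VAE is never loaded (vae=None), VRAM usage already drops; if memory is still tight, the remaining components can be offloaded as well. A minimal sketch, assuming the same SD v1.5 setup as above:

# Instead of moving the whole pipeline to "cuda", let diffusers move each
# component to the GPU only while it is needed.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    variant="fp16",
    vae=None,
)
pipe.enable_model_cpu_offload()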

Here’s another example with Flux.

import torch
from diffusers import FluxPipeline

# Load Flux without its VAE; latents are decoded by the remote endpoint.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,
    vae=None,
).to("cuda")

prompt = "Strawberry ice cream, in a stylish modern glass, coconut, splashing milk cream and honey, in a gradient purple background, fluid motion, dynamic movement, cinematic lighting, Mysterious"

latent = pipe(
    prompt=prompt,
    guidance_scale=0.0,
    num_inference_steps=4,
    output_type="latent",
).images
image = remote_decode(
    endpoint="https://whhx50ex1aryqvw6.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=latent,
    height=1024,
    width=1024,
    scaling_factor=0.3611,
    shift_factor=0.1159,
)
image.save("test.jpg")

Here’s an example with HunyuanVideo.

import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel

model_id = "hunyuanvideo-community/HunyuanVideo"
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
# Load the pipeline without its VAE; frames are decoded by the remote endpoint.
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id, transformer=transformer, vae=None, torch_dtype=torch.float16
).to("cuda")

latent = pipe(
    prompt="A cat walks on the grass, realistic",
    height=320,
    width=512,
    num_frames=61,
    num_inference_steps=30,
    output_type="latent",
).frames

video = remote_decode(
    endpoint="https://o7ywnmrahorts457.us-east-1.aws.endpoints.huggingface.cloud/",
    tensor=latent,
    output_type="mp4",
)
if isinstance(video, bytes):
    with open("video.mp4", "wb") as f:
        f.write(video)
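
The isinstance check covers the case where the endpoint returns something other than MP4 bytes. A hedged sketch of that fallback branch, assuming decoded frames come back as a list of PIL images:

from diffusers.utils import export_to_video

# Hypothetical fallback: if the response is decoded frames rather than MP4 bytes,
# write them out with diffusers' export_to_video helper.
if not isinstance(video, bytes):
    export_to_video(video, "video.mp4", fps=15)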

Queueing

One of the great benefits of using a remote VAE is that we can queue multiple generation requests. While the current latent is being processed for decoding, we can already queue another one. This helps improve concurrency.

import queue
import threading

import torch
from IPython.display import display
from diffusers import StableDiffusionPipeline

def decode_worker(q: queue.Queue):
    # Consume latents from the queue and decode them remotely while the GPU
    # keeps generating the next latent.
    while True:
        item = q.get()
        if item is None:
            break
        image = remote_decode(
            endpoint="https://q1bj3bpq6kzilnsu.us-east-1.aws.endpoints.huggingface.cloud/",
            tensor=item,
            scaling_factor=0.18215,
        )
        display(image)
        q.task_done()

q = queue.Queue()
thread = threading.Thread(target=decode_worker, args=(q,), daemon=True)
thread.start()

def decode(latent: torch.Tensor):
    q.put(latent)

prompts = [
    "Blueberry ice cream, in a stylish modern glass , ice cubes, nuts, mint leaves, splashing milk cream, in a gradient purple background, fluid motion, dynamic movement, cinematic lighting, Mysterious",
    "Lemonade in a glass, mint leaves, in an aqua and white background, flowers, ice cubes, halo, fluid motion, dynamic movement, soft lighting, digital painting, rule of thirds composition, Art by Greg rutkowski, Coby whitmore",
    "Comic book art, beautiful, vintage, pastel neon colors, extremely detailed pupils, delicate features, light on face, slight smile, Artgerm, Mary Blair, Edmund Dulac, long dark locks, bangs, glowing, fashionable style, fairytale ambience, hot pink.",
    "Masterpiece, vanilla cone ice cream garnished with chocolate syrup, crushed nuts, choco flakes, in a brown background, gold, cinematic lighting, Art by WLOP",
    "A bowl of milk, falling cornflakes, berries, blueberries, in a white background, soft lighting, intricate details, rule of thirds, octane render, volumetric lighting",
    "Cold Coffee with cream, crushed almonds, in a glass, choco flakes, ice cubes, wet, in a wooden background, cinematic lighting, hyper realistic painting, art by Carne Griffiths, octane render, volumetric lighting, fluid motion, dynamic movement, muted colors,",
]

pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-8",
    torch_dtype=torch.float16,
    vae=None,
).to("cuda")
pipe.unet = pipe.unet.to(memory_format=torch.channels_last)
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

# Warmup run for torch.compile.
_ = pipe(
    prompt=prompts[0],
    output_type="latent",
)

for prompt in prompts:
    latent = pipe(
        prompt=prompt,
        output_type="latent",
    ).images
    decode(latent)

# Sentinel to stop the worker, then wait for it to finish.
q.put(None)
thread.join()
