
# Mochi 1 Preview

LoRA

> **Tip**
> Only a research preview of the model weights is available at the moment.

Mochi 1 is a video generation model by Genmo with a strong focus on prompt adherence and motion quality. The model features a 10B parameter Asymmetric Diffusion Transformer (AsymmDiT) architecture, and uses non-square QKV and output projection layers to reduce inference memory requirements. A single T5-XXL model is used to encode prompts.
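As a rough intuition for why non-square projections save memory, compare the parameter counts of the QKV projections in the sketch below. The hidden sizes are hypothetical, chosen only to illustrate the idea, and are not Mochi's actual configuration.

```python
# Illustrative only: the hidden sizes below are hypothetical, NOT Mochi's
# actual configuration. The point: projecting a narrower text stream
# directly into a shared attention width (a non-square weight matrix)
# needs fewer parameters than first widening the text stream to match
# the visual stream (square matrices throughout).

def qkv_param_count(in_dim: int, attn_dim: int) -> int:
    """Parameters in the Q, K and V projection matrices (biases omitted)."""
    return 3 * in_dim * attn_dim

ATTN_DIM = 3072  # hypothetical shared attention width
TEXT_DIM = 1536  # hypothetical, narrower text-stream width

square = qkv_param_count(ATTN_DIM, ATTN_DIM)      # text widened first
non_square = qkv_param_count(TEXT_DIM, ATTN_DIM)  # projected directly

print(f"square: {square:,}  non-square: {non_square:,}")
print(f"savings: {1 - non_square / square:.0%}")  # 50% fewer QKV parameters
```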

Mochi 1 preview is an open state-of-the-art video generation model with high-fidelity motion and strong prompt adherence in preliminary evaluation. This model dramatically closes the gap between closed and open video generation systems. The model is released under a permissive Apache 2.0 license.

> **Tip**
> Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.

## Quantization

Quantization helps reduce the memory requirements of very large models by storing model weights in a lower precision data type. However, quantization may have varying impact on video quality depending on the video model.
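The core idea can be sketched in a few lines of plain Python: store each weight as an 8-bit integer plus a shared scale, trading a small rounding error for a 4x reduction versus float32 (2x versus float16). This is a minimal absmax-quantization illustration only; real backends such as bitsandbytes use per-block scales, outlier handling, and fused kernels.

```python
# Minimal sketch of 8-bit absmax weight quantization. Illustration only:
# production backends (e.g. bitsandbytes) use per-block scales and
# outlier handling rather than one scale for the whole tensor.

def quantize_int8(weights):
    """Map floats into [-127, 127] int8 values plus one shared scale."""
    scale = max(abs(w) for w in weights) / 127  # largest weight maps to +/-127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)

# Each value now fits in 1 byte instead of 4 (float32) or 2 (float16),
# at the cost of a small rounding error per weight.
print(q)        # int8 codes
print(restored)  # approximately the original weights
```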

Refer to the Quantization overview to learn more about supported quantization backends and selecting a quantization backend that supports your use case. The example below demonstrates how to load a quantized [MochiPipeline] for inference with bitsandbytes.

```python
import torch
from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, MochiTransformer3DModel, MochiPipeline
from diffusers.utils import export_to_video
from transformers import BitsAndBytesConfig as BitsAndBytesConfig, T5EncoderModel

quant_config = BitsAndBytesConfig(load_in_8bit=True)
text_encoder_8bit = T5EncoderModel.from_pretrained(
    "genmo/mochi-1-preview",
    subfolder="text_encoder",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

quant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)
transformer_8bit = MochiTransformer3DModel.from_pretrained(
    "genmo/mochi-1-preview",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)

pipeline = MochiPipeline.from_pretrained(
    "genmo/mochi-1-preview",
    text_encoder=text_encoder_8bit,
    transformer=transformer_8bit,
    torch_dtype=torch.float16,
    device_map="balanced",
)

video = pipeline(
    "Close-up of a cats eye, with the galaxy reflected in the cats eye. Ultra high resolution 4k.",
    num_inference_steps=28,
    guidance_scale=3.5,
).frames[0]
export_to_video(video, "cat.mp4")
```

## Generating videos with Mochi-1 Preview

The following example downloads the full-precision mochi-1-preview weights and produces the highest quality results, but requires at least 42GB of VRAM to run.

```python
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview")

# Enable memory savings
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()

prompt = "Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k."

with torch.autocast("cuda", torch.bfloat16, cache_enabled=False):
    frames = pipe(prompt, num_frames=85).frames[0]

export_to_video(frames, "mochi.mp4", fps=30)
```

## Using a lower precision variant to save memory

The following example uses the bfloat16 variant of the model and requires 22GB of VRAM to run. There is a slight drop in the quality of the generated video as a result.

```python
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview", variant="bf16", torch_dtype=torch.bfloat16)

# Enable memory savings
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()

prompt = "Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k."
frames = pipe(prompt, num_frames=85).frames[0]

export_to_video(frames, "mochi.mp4", fps=30)
```

## Reproducing the results from the Genmo Mochi repo

The Genmo Mochi implementation uses different precision values for each stage in the inference process. The text encoder and VAE use torch.float32, while the DiT uses torch.bfloat16 with the attention kernel set to EFFICIENT_ATTENTION. Diffusers pipelines currently do not support setting different dtypes for different stages of the pipeline. In order to run inference in the same way as the original implementation, please refer to the following example.

- The original Mochi implementation zeros out empty prompts. However, enabling this option and placing the entire pipeline under autocast can lead to numerical overflows with the T5 text encoder.
- When enabling `force_zeros_for_empty_prompt`, it is recommended to run the text encoding step outside the autocast context in full precision.
- Decoding the latents in full precision is very memory intensive. You will need at least 70GB of VRAM to generate the 163 frames in this example. To reduce memory, either reduce the number of frames or run the decoding step in `torch.bfloat16`.
```python
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel

from diffusers import MochiPipeline
from diffusers.utils import export_to_video
from diffusers.video_processor import VideoProcessor

pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview", force_zeros_for_empty_prompt=True)
pipe.enable_vae_tiling()
pipe.enable_model_cpu_offload()

prompt = "An aerial shot of a parade of elephants walking across the African savannah. The camera showcases the herd and the surrounding landscape."

# Encode the prompt outside autocast in full precision
with torch.no_grad():
    prompt_embeds, prompt_attention_mask, negative_prompt_embeds, negative_prompt_attention_mask = (
        pipe.encode_prompt(prompt=prompt)
    )

# Run the DiT in bfloat16 with the efficient attention kernel
with torch.autocast("cuda", torch.bfloat16):
    with sdpa_kernel(SDPBackend.EFFICIENT_ATTENTION):
        frames = pipe(
            prompt_embeds=prompt_embeds,
            prompt_attention_mask=prompt_attention_mask,
            negative_prompt_embeds=negative_prompt_embeds,
            negative_prompt_attention_mask=negative_prompt_attention_mask,
            guidance_scale=4.5,
            num_inference_steps=64,
            height=480,
            width=848,
            num_frames=163,
            generator=torch.Generator("cuda").manual_seed(0),
            output_type="latent",
            return_dict=False,
        )[0]

# Denormalize the latents before decoding
video_processor = VideoProcessor(vae_scale_factor=8)
has_latents_mean = hasattr(pipe.vae.config, "latents_mean") and pipe.vae.config.latents_mean is not None
has_latents_std = hasattr(pipe.vae.config, "latents_std") and pipe.vae.config.latents_std is not None
if has_latents_mean and has_latents_std:
    latents_mean = (
        torch.tensor(pipe.vae.config.latents_mean).view(1, 12, 1, 1, 1).to(frames.device, frames.dtype)
    )
    latents_std = (
        torch.tensor(pipe.vae.config.latents_std).view(1, 12, 1, 1, 1).to(frames.device, frames.dtype)
    )
    frames = frames * latents_std / pipe.vae.config.scaling_factor + latents_mean
else:
    frames = frames / pipe.vae.config.scaling_factor

# Decode with the VAE (full precision; see the memory note above)
with torch.no_grad():
    video = pipe.vae.decode(frames.to(pipe.vae.dtype), return_dict=False)[0]

video = video_processor.postprocess_video(video)[0]
export_to_video(video, "mochi.mp4", fps=30)
```

## Running inference with multiple GPUs

It is possible to split the large Mochi transformer across multiple GPUs using the `device_map` and `max_memory` options in `from_pretrained`. In the following example we split the model across two GPUs, each with 24GB of VRAM.

```python
import torch
from diffusers import MochiPipeline, MochiTransformer3DModel
from diffusers.utils import export_to_video

model_id = "genmo/mochi-1-preview"
transformer = MochiTransformer3DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    device_map="auto",
    max_memory={0: "24GB", 1: "24GB"},
)

pipe = MochiPipeline.from_pretrained(model_id, transformer=transformer)
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()

with torch.autocast(device_type="cuda", dtype=torch.bfloat16, cache_enabled=False):
    frames = pipe(
        prompt="Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k.",
        negative_prompt="",
        height=480,
        width=848,
        num_frames=85,
        num_inference_steps=50,
        guidance_scale=4.5,
        num_videos_per_prompt=1,
        generator=torch.Generator(device="cuda").manual_seed(0),
        max_sequence_length=256,
        output_type="pil",
    ).frames[0]

export_to_video(frames, "output.mp4", fps=30)
```

## Using single file loading with the Mochi Transformer

You can use `from_single_file` to load the Mochi transformer in its original format.

> **Tip**
> Diffusers currently doesn't support using the FP8 scaled versions of the Mochi single file checkpoints.
```python
import torch
from diffusers import MochiPipeline, MochiTransformer3DModel
from diffusers.utils import export_to_video

model_id = "genmo/mochi-1-preview"
ckpt_path = "https://huggingface.co/Comfy-Org/mochi_preview_repackaged/blob/main/split_files/diffusion_models/mochi_preview_bf16.safetensors"

# Load the original-format checkpoint with from_single_file
transformer = MochiTransformer3DModel.from_single_file(ckpt_path, torch_dtype=torch.bfloat16)

pipe = MochiPipeline.from_pretrained(model_id, transformer=transformer)
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()

with torch.autocast(device_type="cuda", dtype=torch.bfloat16, cache_enabled=False):
    frames = pipe(
        prompt="Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k.",
        negative_prompt="",
        height=480,
        width=848,
        num_frames=85,
        num_inference_steps=50,
        guidance_scale=4.5,
        num_videos_per_prompt=1,
        generator=torch.Generator(device="cuda").manual_seed(0),
        max_sequence_length=256,
        output_type="pil",
    ).frames[0]

export_to_video(frames, "output.mp4", fps=30)
```

## MochiPipeline

[[autodoc]] MochiPipeline

- all
- __call__

## MochiPipelineOutput

[[autodoc]] pipelines.mochi.pipeline_output.MochiPipelineOutput
