FluxControlNetModel
FluxControlNetModel is an implementation of ControlNet for Flux.1.
The ControlNet model was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. It provides a greater degree of control over text-to-image generation by conditioning the model on additional inputs such as edge maps, depth maps, segmentation maps, and keypoints for pose detection.
The abstract from the paper is:
We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, e.g., edges, depth, segmentation, human pose, etc., with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.
Loading from the original format
By default, the FluxControlNetModel should be loaded with from_pretrained(). The snippet below loads a single ControlNet and also shows how to wrap one or more ControlNets in a FluxMultiControlNetModel.
from diffusers import FluxControlNetPipeline
from diffusers.models import FluxControlNetModel, FluxMultiControlNetModel

# Load a single ControlNet and attach it to the pipeline.
controlnet = FluxControlNetModel.from_pretrained("InstantX/FLUX.1-dev-Controlnet-Canny")
pipe = FluxControlNetPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", controlnet=controlnet)

# Or wrap one or more ControlNets in a FluxMultiControlNetModel for multi-condition control.
controlnet = FluxControlNetModel.from_pretrained("InstantX/FLUX.1-dev-Controlnet-Canny")
controlnet = FluxMultiControlNetModel([controlnet])
pipe = FluxControlNetPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", controlnet=controlnet)
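Once loaded, the ControlNet conditions generation through the pipeline's control_image argument. Below is a minimal generation sketch, assuming a prepared Canny edge map on disk; the image path, prompt, and scale values are placeholders, not values from the original documentation.

import torch
from diffusers import FluxControlNetPipeline
from diffusers.models import FluxControlNetModel
from diffusers.utils import load_image

controlnet = FluxControlNetModel.from_pretrained(
    "InstantX/FLUX.1-dev-Controlnet-Canny", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Placeholder path: any Canny edge map matching the target resolution works here.
control_image = load_image("path/to/canny_edge_map.png")

image = pipe(
    "a futuristic cityscape at dusk",   # placeholder prompt
    control_image=control_image,
    controlnet_conditioning_scale=0.6,  # how strongly the edge map steers generation
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("output.png")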
FluxControlNetModel
class diffusers.FluxControlNetModel
( patch_size: int = 1, in_channels: int = 64, num_layers: int = 19, num_single_layers: int = 38, attention_head_dim: int = 128, num_attention_heads: int = 24, joint_attention_dim: int = 4096, pooled_projection_dim: int = 768, guidance_embeds: bool = False, axes_dims_rope: typing.List[int] = [16, 56, 56], num_mode: int = None, conditioning_embedding_channels: int = None )
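The configuration arguments above can also be passed directly to the constructor, which is handy for smoke tests. A minimal sketch with deliberately tiny layer counts; these values are illustrative and do not correspond to any released checkpoint.

from diffusers.models import FluxControlNetModel

# Illustrative, deliberately small configuration; released checkpoints use the defaults listed above.
tiny_controlnet = FluxControlNetModel(
    patch_size=1,
    in_channels=64,
    num_layers=2,            # 19 in the default configuration
    num_single_layers=4,     # 38 in the default configuration
    attention_head_dim=128,  # kept at the default so it matches sum(axes_dims_rope)
    num_attention_heads=4,
    joint_attention_dim=4096,
    pooled_projection_dim=768,
    guidance_embeds=False,
    axes_dims_rope=[16, 56, 56],
)
print(sum(p.numel() for p in tiny_controlnet.parameters()))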
forward
( hidden_states: Tensor, controlnet_cond: Tensor, controlnet_mode: Tensor = None, conditioning_scale: float = 1.0, encoder_hidden_states: Tensor = None, pooled_projections: Tensor = None, timestep: LongTensor = None, img_ids: Tensor = None, txt_ids: Tensor = None, guidance: Tensor = None, joint_attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None, return_dict: bool = True )
Parameters
- hidden_states (torch.FloatTensor of shape (batch_size, channel, height, width)) — Input hidden_states.
- controlnet_cond (torch.Tensor) — The conditional input tensor of shape (batch_size, sequence_length, hidden_size).
- controlnet_mode (torch.Tensor) — The mode tensor of shape (batch_size, 1).
- conditioning_scale (float, defaults to 1.0) — The scale factor for ControlNet outputs.
- encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_len, embed_dims)) — Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
- pooled_projections (torch.FloatTensor of shape (batch_size, projection_dim)) — Embeddings projected from the embeddings of input conditions.
- timestep (torch.LongTensor) — Used to indicate the denoising step.
- block_controlnet_hidden_states (list of torch.Tensor) — A list of tensors that, if specified, are added to the residuals of transformer blocks.
- joint_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
- return_dict (bool, optional, defaults to True) — Whether or not to return a ~models.transformer_2d.Transformer2DModelOutput instead of a plain tuple.
The FluxControlNetModel forward method.
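The ControlNet's outputs are not used in isolation: each returned residual is scaled by conditioning_scale and then added to the hidden states of the corresponding FluxTransformer2DModel block. A rough, runnable sketch of the scaling step, with illustrative tensor shapes only.

import torch

conditioning_scale = 0.6
# Stand-ins for the per-block residuals a Flux ControlNet returns (shapes illustrative).
block_samples = [torch.randn(1, 1024, 3072) for _ in range(2)]

# Each residual is multiplied by conditioning_scale before being added to the
# corresponding transformer block's hidden states in the base model's forward pass.
scaled = [sample * conditioning_scale for sample in block_samples]
print(scaled[0].shape)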
set_attn_processor
( processor )
Parameters
- processor (dict of AttentionProcessor or only AttentionProcessor) — The instantiated processor class or a dictionary of processor classes that will be set as the processor for all Attention layers. If processor is a dict, the key needs to define the path to the corresponding cross attention processor. This is strongly recommended when setting trainable attention processors.
Sets the attention processor to use to compute attention.
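For example, a processor can be installed on every attention layer at once, or per layer via the attn_processors mapping. A minimal sketch, assuming FluxAttnProcessor2_0 from diffusers.models.attention_processor is available in your diffusers version and is the processor you want to set.

from diffusers.models import FluxControlNetModel
from diffusers.models.attention_processor import FluxAttnProcessor2_0

controlnet = FluxControlNetModel.from_pretrained("InstantX/FLUX.1-dev-Controlnet-Canny")

# Install a single processor instance on every Attention layer.
controlnet.set_attn_processor(FluxAttnProcessor2_0())

# Or pass a dict keyed by layer path to target layers individually.
processors = {name: FluxAttnProcessor2_0() for name in controlnet.attn_processors}
controlnet.set_attn_processor(processors)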
FluxControlNetOutput
class diffusers.models.controlnet_flux.FluxControlNetOutput
( *args, **kwargs )
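The output is a BaseOutput-style dataclass whose fields hold the per-block residuals consumed by the base transformer. A small sketch of constructing and reading one; the field names controlnet_block_samples and controlnet_single_block_samples are assumed from the Flux ControlNet implementation, and the shapes are illustrative.

import torch
from diffusers.models.controlnet_flux import FluxControlNetOutput

# Assumed field names; shapes are illustrative stand-ins for real residuals.
output = FluxControlNetOutput(
    controlnet_block_samples=tuple(torch.randn(1, 1024, 3072) for _ in range(2)),
    controlnet_single_block_samples=tuple(torch.randn(1, 1024, 3072) for _ in range(2)),
)
print(len(output.controlnet_block_samples), len(output.controlnet_single_block_samples))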