This documentation was AI-generated. If you find any errors or have suggestions for improvement, please feel free to contribute!
The ZImageFunControlnet node applies a specialized control network to influence the image generation or editing process. It uses a base model, a model patch, and a VAE, allowing you to adjust the strength of the control effect. This node can work with a base image, an inpainting image, and a mask for more targeted edits.

Inputs

| Parameter | Data Type | Required | Range | Description |
| --- | --- | --- | --- | --- |
| model | MODEL | Yes | - | The base model used for the generation process. |
| model_patch | MODEL_PATCH | Yes | - | A specialized patch model that applies the control network's guidance. |
| vae | VAE | Yes | - | The Variational Autoencoder used for encoding and decoding images. |
| strength | FLOAT | Yes | -10.0 to 10.0 | The strength of the control network's influence. Positive values apply the effect, while negative values can invert it (default: 1.0). |
| image | IMAGE | No | - | An optional base image to guide the generation process. |
| inpaint_image | IMAGE | No | - | An optional image used specifically for inpainting areas defined by a mask. |
| mask | MASK | No | - | An optional mask that defines which areas of an image should be edited or inpainted. |
Note: The inpaint_image parameter is typically used in conjunction with a mask to specify the content for inpainting. The node’s behavior may change based on which optional inputs are provided (e.g., using image for guidance or using image, mask, and inpaint_image for inpainting).
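To build intuition for how a signed `strength` value can apply or invert a control effect, here is a minimal, purely illustrative sketch. The function name and the blending formula are assumptions for illustration only, not ComfyUI's or this node's actual implementation:

```python
def apply_control(base, control_delta, strength=1.0):
    """Blend a control signal into a base signal, scaled by strength.

    strength > 0 applies the control effect, strength < 0 inverts it,
    and strength = 0 leaves the base unchanged. Illustrative sketch only.
    """
    if not -10.0 <= strength <= 10.0:
        raise ValueError("strength must be in [-10.0, 10.0]")
    return [b + strength * d for b, d in zip(base, control_delta)]

base = [0.5, 0.5]
delta = [0.2, -0.1]
print(apply_control(base, delta, strength=1.0))   # control applied
print(apply_control(base, delta, strength=-1.0))  # control inverted
```

The sketch mirrors the documented range check (-10.0 to 10.0) and the sign convention described for `strength` above.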

Outputs

| Output Name | Data Type | Description |
| --- | --- | --- |
| model | MODEL | The model with the control network patch applied, ready for use in a sampling pipeline. |
| positive | CONDITIONING | The positive conditioning, potentially modified by the control network inputs. |
| negative | CONDITIONING | The negative conditioning, potentially modified by the control network inputs. |
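For context, a node with these inputs might appear in a ComfyUI API-format workflow fragment roughly like the one below. The node ids, the upstream node choices, and the exact registered class name are assumptions for illustration; each `["id", index]` pair references an output of another node in the graph:

```json
{
  "12": {
    "class_type": "ZImageFunControlnet",
    "inputs": {
      "model": ["4", 0],
      "model_patch": ["10", 0],
      "vae": ["8", 0],
      "strength": 1.0,
      "image": ["11", 0]
    }
  }
}
```

The node's `model`, `positive`, and `negative` outputs would then typically be wired into a sampler node downstream.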