The ZImageFunControlnet node applies a specialized control network to influence the image generation or editing process. It uses a base model, a model patch, and a VAE, allowing you to adjust the strength of the control effect. This node can work with a base image, an inpainting image, and a mask for more targeted edits.
Inputs
| Parameter | Data Type | Required | Range | Description |
|---|---|---|---|---|
| model | MODEL | Yes | - | The base model used for the generation process. |
| model_patch | MODEL_PATCH | Yes | - | A specialized patch model that applies the control network’s guidance. |
| vae | VAE | Yes | - | The Variational Autoencoder used for encoding and decoding images. |
| strength | FLOAT | Yes | -10.0 to 10.0 | The strength of the control network’s influence. Positive values apply the effect, while negative values can invert it (default: 1.0). |
| image | IMAGE | No | - | An optional base image to guide the generation process. |
| inpaint_image | IMAGE | No | - | An optional image used specifically for inpainting areas defined by a mask. |
| mask | MASK | No | - | An optional mask that defines which areas of an image should be edited or inpainted. |
The inpaint_image parameter is typically used together with a mask to supply the content for inpainting. The node’s behavior depends on which optional inputs are provided (e.g., image alone for guidance, or image, mask, and inpaint_image together for inpainting).
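As a sketch, a ComfyUI API-format workflow entry for this node might look like the following. The node IDs and upstream references ("1" through "3") are placeholders for your own nodes, and the exact class-name string is an assumption; compare against a workflow exported from your own ComfyUI instance.

```python
import json

# Hypothetical API-format workflow entry (node IDs are placeholders).
node_entry = {
    "class_type": "ZImageFunControlnet",  # assumed class name
    "inputs": {
        "model": ["1", 0],        # output 0 of a model/checkpoint loader
        "model_patch": ["2", 0],  # output 0 of a model-patch loader
        "vae": ["3", 0],          # VAE from a loader node
        "strength": 1.0,          # default; valid range is -10.0 to 10.0
        # Optional inputs, wired only when guiding or inpainting:
        # "image": ["4", 0],
        # "inpaint_image": ["5", 0],
        # "mask": ["6", 0],
    },
}

print(json.dumps(node_entry, indent=2))
```

Each `[node_id, output_index]` pair connects an input slot to another node’s output, following ComfyUI’s standard API-format convention.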
Outputs
| Output Name | Data Type | Description |
|---|---|---|
| model | MODEL | The model with the control network patch applied, ready for use in a sampling pipeline. |
| positive | CONDITIONING | The positive conditioning, potentially modified by the control network inputs. |
| negative | CONDITIONING | The negative conditioning, potentially modified by the control network inputs. |
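The three outputs map directly onto a sampler’s inputs. A sketch of this wiring in API format, assuming the ZImageFunControlnet node has ID "7" and its outputs appear in the table order above (0 = model, 1 = positive, 2 = negative); the latent source node "8" and sampler settings are placeholders:

```python
# Hypothetical downstream node: feed the patched model and both
# conditionings into a standard KSampler.
ksampler_entry = {
    "class_type": "KSampler",
    "inputs": {
        "model": ["7", 0],     # MODEL output of ZImageFunControlnet
        "positive": ["7", 1],  # positive CONDITIONING output
        "negative": ["7", 2],  # negative CONDITIONING output
        "latent_image": ["8", 0],  # e.g. an EmptyLatentImage node
        "seed": 0,
        "steps": 20,
        "cfg": 7.0,
        "sampler_name": "euler",
        "scheduler": "normal",
        "denoise": 1.0,
    },
}

print(ksampler_entry["class_type"])
```

Because the control patch is baked into the returned MODEL, no extra ControlNet-apply node is needed between this node and the sampler.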