This documentation was AI-generated. If you find any errors or have suggestions for improvement, please feel free to contribute!
The ControlNetInpaintingAliMamaApply node applies ControlNet conditioning for inpainting tasks by combining positive and negative conditioning with a control image and mask. It processes the input image and mask to create modified conditioning that guides the generation process, allowing for precise control over which areas of the image are inpainted. The node supports strength adjustment and timing controls to fine-tune the ControlNet’s influence during different stages of the generation process.
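The strength and timing controls can be understood as a gate on the ControlNet's weight over the sampling schedule: inside the `[start_percent, end_percent]` window the ControlNet contributes at the configured strength, and outside it the contribution drops to zero. The following is a minimal illustrative sketch of that idea; the function name and signature are hypothetical, not ComfyUI's actual API.

```python
def controlnet_weight(step: int, total_steps: int,
                      strength: float,
                      start_percent: float, end_percent: float) -> float:
    """Illustrative: the ControlNet weight at a given sampling step.

    Hypothetical helper -- ComfyUI implements this internally; shown here
    only to clarify how strength, start_percent, and end_percent interact.
    """
    progress = step / total_steps  # fraction of the sampling process completed
    if start_percent <= progress <= end_percent:
        return strength  # inside the window: full configured strength
    return 0.0  # outside the window: the ControlNet has no influence
```

With the defaults (`start_percent=0.0`, `end_percent=1.0`), the weight equals `strength` for the entire generation; narrowing the window restricts the ControlNet's influence to early or late denoising steps.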

## Inputs

| Parameter | Data Type | Required | Range | Description |
|---|---|---|---|---|
| `positive` | CONDITIONING | Yes | - | The positive conditioning that guides the generation toward desired content |
| `negative` | CONDITIONING | Yes | - | The negative conditioning that guides the generation away from unwanted content |
| `control_net` | CONTROL_NET | Yes | - | The ControlNet model that provides additional control over the generation |
| `vae` | VAE | Yes | - | The VAE (Variational Autoencoder) used for encoding and decoding images |
| `image` | IMAGE | Yes | - | The input image that serves as control guidance for the ControlNet |
| `mask` | MASK | Yes | - | The mask that defines which areas of the image should be inpainted |
| `strength` | FLOAT | Yes | 0.0 to 10.0 | The strength of the ControlNet effect (default: 1.0) |
| `start_percent` | FLOAT | Yes | 0.0 to 1.0 | The point in the sampling process (as a fraction, 0.0 = start) at which ControlNet influence begins (default: 0.0) |
| `end_percent` | FLOAT | Yes | 0.0 to 1.0 | The point in the sampling process (as a fraction, 1.0 = end) at which ControlNet influence stops (default: 1.0) |
Note: When the ControlNet has concat_mask enabled, the mask is inverted and applied to the image before processing, and the mask is included in the extra concatenation data sent to the ControlNet.
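The concat_mask behavior described in the note can be sketched as follows: the mask is inverted, the image is multiplied by the inverted mask so the region to be inpainted is zeroed out, and the inverted mask is passed along as extra concatenation data. Plain nested lists stand in for image and mask tensors here; this is an illustration of the idea, not ComfyUI's exact tensor code.

```python
def preprocess_for_concat_mask(image, mask):
    """Illustrative sketch of concat_mask preprocessing (hypothetical helper).

    `image` and `mask` are 2D lists of floats in [0, 1]; mask value 1.0
    marks a pixel to be inpainted.
    """
    # Invert the mask: inpainted areas become 0.0, kept areas become 1.0.
    inverted = [[1.0 - m for m in row] for row in mask]
    # Zero out the to-be-inpainted region of the control image.
    masked_image = [
        [pixel * inv for pixel, inv in zip(img_row, inv_row)]
        for img_row, inv_row in zip(image, inverted)
    ]
    # The (inverted) mask is also sent to the ControlNet as extra concat data.
    extra_concat = [inverted]
    return masked_image, extra_concat
```

For example, with a one-row image `[[0.5, 0.8]]` and mask `[[1.0, 0.0]]`, the first pixel (marked for inpainting) is zeroed while the second passes through unchanged.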

## Outputs

| Output Name | Data Type | Description |
|---|---|---|
| `positive` | CONDITIONING | The modified positive conditioning with ControlNet applied for inpainting |
| `negative` | CONDITIONING | The modified negative conditioning with ControlNet applied for inpainting |