The HunyuanVideo15SuperResolution node prepares conditioning data for a video super-resolution process. It takes a latent video representation and, optionally, a starting image, and packages them together with noise-augmentation and CLIP vision data into conditioning that a model can use to generate a higher-resolution output.

Inputs

| Parameter | Data Type | Required | Range | Description |
| --- | --- | --- | --- | --- |
| `positive` | CONDITIONING | Yes | N/A | The positive conditioning input to be modified with latent and augmentation data. |
| `negative` | CONDITIONING | Yes | N/A | The negative conditioning input to be modified with latent and augmentation data. |
| `vae` | VAE | No | N/A | The VAE used to encode the optional `start_image`. Required if `start_image` is provided. |
| `start_image` | IMAGE | No | N/A | An optional starting image to guide the super-resolution. If provided, it will be upscaled and encoded into the conditioning latent. |
| `clip_vision_output` | CLIP_VISION_OUTPUT | No | N/A | Optional CLIP vision embeddings to add to the conditioning. |
| `latent` | LATENT | Yes | N/A | The input latent video representation that will be incorporated into the conditioning. |
| `noise_augmentation` | FLOAT | No | 0.0 - 1.0 | The strength of noise augmentation to apply to the conditioning (default: 0.70). |
Note: If you provide a start_image, you must also connect a vae for it to be encoded. The start_image will be automatically upscaled to match the dimensions implied by the input latent.
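The `start_image`/`vae` dependency and the automatic upscaling described in the note above can be sketched as follows. This is an illustrative approximation, not the node's actual source: the function name, the latent shape layout, and the spatial compression factor of 8 are all assumptions.

```python
# Hedged sketch: validate the start_image/vae pairing and compute the
# target image size implied by the input latent's spatial dimensions.
# A spatial compression factor of 8 is assumed; actual models may differ.

def prepare_start_image(latent_shape, start_image=None, vae=None,
                        spatial_compression=8):
    """Return the (height, width) the start_image would be upscaled to,
    or None if no start_image is given."""
    if start_image is None:
        return None
    if vae is None:
        # Mirrors the documented requirement: start_image needs a VAE.
        raise ValueError("start_image requires a connected vae to be encoded")
    # Latent shape assumed as (batch, channels, frames, height, width).
    _, _, _, lat_h, lat_w = latent_shape
    # The node would upscale start_image to this size, then VAE-encode it.
    return (lat_h * spatial_compression, lat_w * spatial_compression)
```

For example, a latent with spatial dimensions 60x104 would imply a 480x832 start image under the assumed factor of 8.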

Outputs

| Output Name | Data Type | Description |
| --- | --- | --- |
| `positive` | CONDITIONING | The modified positive conditioning, now containing the concatenated latent, noise augmentation, and optional CLIP vision data. |
| `negative` | CONDITIONING | The modified negative conditioning, now containing the concatenated latent, noise augmentation, and optional CLIP vision data. |
| `latent` | LATENT | The input latent is passed through unchanged. |
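The way both conditioning outputs carry the same added data can be sketched as below. In ComfyUI, a CONDITIONING value is a list of `[tensor, options_dict]` pairs; this sketch mimics that shape with plain Python objects. The specific key names (`concat_latent_image`, `noise_augmentation`) are illustrative assumptions, not the node's confirmed internals.

```python
# Hedged sketch: merge the node's extra data into every conditioning
# entry's options dict, without mutating the original conditioning.

def set_conditioning_values(conditioning, values):
    """Return a copy of `conditioning` with `values` merged into each
    entry's options dict."""
    out = []
    for cond, opts in conditioning:
        merged = dict(opts)   # copy so the input stays untouched
        merged.update(values)
        out.append([cond, merged])
    return out

# Placeholder conditioning (real tensors in ComfyUI).
positive = [["pos_tensor", {}]]
negative = [["neg_tensor", {}]]

# Assumed keys for the latent and noise-augmentation strength.
values = {"concat_latent_image": "latent_samples",
          "noise_augmentation": 0.70}

new_positive = set_conditioning_values(positive, values)
new_negative = set_conditioning_values(negative, values)
```

Note that the `latent` output needs no such step: as the table states, it is passed through unchanged.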