The Hunyuan3Dv2ConditioningMultiView node processes multi-view CLIP vision embeddings for 3D generation. It accepts optional front, left, back, and right view embeddings and combines them with positional encoding to produce conditioning data for the Hunyuan3D model. The node outputs both positive conditioning built from the combined embeddings and negative conditioning filled with zeros.

Inputs

| Parameter | Data Type | Required | Range | Description |
|-----------|-----------|----------|-------|-------------|
| front | CLIP_VISION_OUTPUT | No | - | CLIP vision output for the front view |
| left | CLIP_VISION_OUTPUT | No | - | CLIP vision output for the left view |
| back | CLIP_VISION_OUTPUT | No | - | CLIP vision output for the back view |
| right | CLIP_VISION_OUTPUT | No | - | CLIP vision output for the right view |
Note: At least one view input must be provided for the node to function. The node will only process views that contain valid CLIP vision output data.
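The combination step described above can be sketched as follows. This is an illustrative sketch, not the node's actual implementation: the real node operates on CLIP vision output tensors, and the exact form of its positional encoding is an assumption here (a one-hot view code appended to each token stands in for it).

```python
# Hypothetical sketch of multi-view conditioning assembly.
# Views left unconnected are skipped; each present view's tokens are
# tagged with a one-hot positional code identifying the view, then all
# tokens are concatenated into one sequence.

VIEW_ORDER = ["front", "left", "back", "right"]

def combine_views(views):
    """views: dict mapping view name -> list of token embeddings
    (each token a list of floats). Returns one token sequence where
    every token carries a 4-dim one-hot view code appended to it."""
    combined = []
    for idx, name in enumerate(VIEW_ORDER):
        tokens = views.get(name)
        if tokens is None:  # optional input left unconnected
            continue
        one_hot = [1.0 if i == idx else 0.0 for i in range(len(VIEW_ORDER))]
        for tok in tokens:
            combined.append(tok + one_hot)  # append positional code
    return combined

# Example: two connected views, two tokens each, 3-dim embeddings
views = {
    "front": [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]],
    "back":  [[0.7, 0.8, 0.9], [1.0, 1.1, 1.2]],
}
tokens = combine_views(views)
print(len(tokens))  # 4 tokens total
print(tokens[0])    # [0.1, 0.2, 0.3, 1.0, 0.0, 0.0, 0.0]
```

Because every input is optional, the loop naturally handles any subset of the four views while preserving a consistent view ordering.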

Outputs

| Output Name | Data Type | Description |
|-------------|-----------|-------------|
| positive | CONDITIONING | Positive conditioning containing the combined multi-view embeddings with positional encoding |
| negative | CONDITIONING | Zero-valued negative conditioning, used as the unconditional input for classifier-free guidance |
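A minimal sketch of how the zero-valued negative output relates to the positive one, assuming (as the node's description suggests) that it simply mirrors the positive embeddings' shape. The function name and list-based representation are illustrative, not the node's internals.

```python
# Hedged sketch: the negative conditioning is a zero-filled structure
# with the same shape as the positive embeddings, so the sampler can
# contrast conditioned and unconditioned predictions during guidance.

def make_negative(positive_tokens):
    """Return a zero-filled token sequence matching positive_tokens."""
    return [[0.0] * len(tok) for tok in positive_tokens]

positive = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
negative = make_negative(positive)
print(negative)  # [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
```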