The Hunyuan3Dv2ConditioningMultiView node processes multi-view CLIP vision embeddings into conditioning for Hunyuan3D v2 3D generation models. It accepts optional front, left, back, and right view embeddings, applies a per-view positional encoding so the model can distinguish the views, and concatenates them into a single conditioning sequence. The node outputs positive conditioning built from the combined embeddings and zero-valued negative conditioning of the same shape.
Inputs
| Parameter | Data Type | Required | Range | Description |
|---|---|---|---|---|
| front | CLIP_VISION_OUTPUT | No | - | CLIP vision output for the front view |
| left | CLIP_VISION_OUTPUT | No | - | CLIP vision output for the left view |
| back | CLIP_VISION_OUTPUT | No | - | CLIP vision output for the back view |
| right | CLIP_VISION_OUTPUT | No | - | CLIP vision output for the right view |
Outputs
| Output Name | Data Type | Description |
|---|---|---|
| positive | CONDITIONING | Positive conditioning containing the combined multi-view embeddings with positional encoding |
| negative | CONDITIONING | Zero-valued negative conditioning of the same shape, typically used for classifier-free guidance |