The CLIPTextEncodeKandinsky5 node prepares text prompts for use with the Kandinsky 5 model. It takes two separate text inputs, tokenizes them using a provided CLIP model, and combines them into a single conditioning output. This output is used to guide the image generation process.
Inputs
| Parameter | Data Type | Required | Range | Description |
|---|---|---|---|---|
| clip | CLIP | Yes | - | The CLIP model used to tokenize and encode the text prompts. |
| clip_l | STRING | Yes | - | The primary text prompt. This input supports multiline text and dynamic prompts. |
| qwen25_7b | STRING | Yes | - | A secondary text prompt. This input supports multiline text and dynamic prompts. |
Outputs
| Output Name | Data Type | Description |
|---|---|---|
| CONDITIONING | CONDITIONING | The combined conditioning data generated from both text prompts, ready to be fed into a Kandinsky 5 model for image generation. |
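The flow described above — tokenize both prompts with the same CLIP model, then merge them into one conditioning entry — can be sketched as a ComfyUI-style node. This is a minimal illustration, not the actual implementation: the `StubCLIP` class, its methods, and the exact way the two token streams are merged are assumptions made so the example is self-contained and runnable.

```python
class StubCLIP:
    """Stand-in for a ComfyUI CLIP object (illustration only).

    A real CLIP object tokenizes text into token ids and encodes them
    into embedding tensors; this stub just splits words and counts them.
    """

    def tokenize(self, text):
        # Real tokenizers return a dict of token-id lists per encoder.
        return {"clip_l": text.split()}

    def encode_from_tokens(self, tokens):
        # Real encoders return embeddings plus extras (e.g. pooled output);
        # here we return the token count of each stream as a placeholder.
        cond = [[len(stream) for stream in tokens.values()]]
        return cond, {"pooled_output": None}


class CLIPTextEncodeKandinsky5Sketch:
    """Hypothetical approximation of the node's behavior: tokenize the
    primary and secondary prompts with the provided CLIP model, merge the
    token streams, and encode them into a single conditioning output."""

    def encode(self, clip, clip_l, qwen25_7b):
        tokens = clip.tokenize(clip_l)
        # Assumption: the secondary prompt's tokens are stored under their
        # own key in the same token dict before encoding.
        tokens["qwen25_7b"] = clip.tokenize(qwen25_7b)["clip_l"]
        cond, extras = clip.encode_from_tokens(tokens)
        # ComfyUI conditioning outputs are lists of [cond, extras] pairs.
        return ([[cond, extras]],)
```

Usage mirrors wiring the node in a graph: pass the CLIP model and both prompt strings to `encode`, and feed the returned CONDITIONING into a Kandinsky 5 sampler node.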