The CLIPTextEncodeKandinsky5 node prepares text prompts for use with the Kandinsky 5 model. It takes two separate text inputs, tokenizes them using a provided CLIP model, and combines them into a single conditioning output. This output is used to guide the image generation process.

Inputs

| Parameter | Data Type | Required | Description |
| --- | --- | --- | --- |
| `clip` | `CLIP` | Yes | The CLIP model used to tokenize and encode the text prompts. |
| `clip_l` | `STRING` | Yes | The primary text prompt. Supports multiline text and dynamic prompts. |
| `qwen25_7b` | `STRING` | Yes | The secondary text prompt. Supports multiline text and dynamic prompts. |

Outputs

| Output Name | Data Type | Description |
| --- | --- | --- |
| `CONDITIONING` | `CONDITIONING` | The combined conditioning data generated from both text prompts, ready to be fed into a Kandinsky 5 model for image generation. |
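
The flow described above (tokenize each prompt, then encode both into one conditioning output) can be sketched as follows. This is a hypothetical illustration, not ComfyUI's actual implementation: `StubClip` and its `tokenize`/`encode` methods are stand-ins for the real CLIP wrapper, and `encode_prompts` is an assumed name for the node's internal function.

```python
# Hypothetical sketch of how a two-prompt text-encode node produces a single
# conditioning output. StubClip is a stand-in for ComfyUI's CLIP wrapper;
# all names here are illustrative, not the real API.

class StubClip:
    def tokenize(self, text):
        # A real tokenizer returns model-specific token tensors;
        # whitespace splitting stands in for that here.
        return text.split()

    def encode(self, tokens_l, tokens_q):
        # The real node encodes both token streams into embeddings and
        # returns a conditioning list of [embedding, extras] pairs.
        embedding = ("stub-embedding", len(tokens_l), len(tokens_q))
        return [[embedding, {}]]


def encode_prompts(clip, clip_l, qwen25_7b):
    """Tokenize both prompts and combine them into one conditioning output."""
    tokens_l = clip.tokenize(clip_l)
    tokens_q = clip.tokenize(qwen25_7b)
    return clip.encode(tokens_l, tokens_q)


conditioning = encode_prompts(StubClip(), "a red fox", "a red fox in deep snow")
```

The key point the sketch captures is that both prompts contribute to a single `CONDITIONING` value, rather than each producing its own.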