This documentation was AI-generated. If you find any errors or have suggestions for improvement, please feel free to contribute!
The TextEncodeQwenImageEdit node processes a text prompt and an optional image to produce conditioning data for image generation or editing. It uses a CLIP model to tokenize and encode the prompt, and can optionally encode a reference image with a VAE to create reference latents. When an image is provided, the node automatically resizes it to keep processing dimensions consistent.

Inputs

| Parameter | Data Type | Required | Range | Description |
|-----------|-----------|----------|-------|-------------|
| `clip` | CLIP | Yes | - | The CLIP model used for text and image tokenization |
| `prompt` | STRING | Yes | - | Text prompt for conditioning generation; supports multiline input and dynamic prompts |
| `vae` | VAE | No | - | Optional VAE model for encoding reference images into latents |
| `image` | IMAGE | No | - | Optional input image for reference or editing purposes |
Note: When both `image` and `vae` are provided, the node encodes the image into reference latents and attaches them to the conditioning output. The image is automatically resized, preserving its aspect ratio, so that its total pixel count is approximately 1024 × 1024.
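The resize described above can be sketched as a small function: scale both dimensions by the square root of the ratio between the target area (1024 × 1024) and the current area, which keeps the aspect ratio while normalizing total pixel count. This is a minimal sketch of the math, not the node's exact implementation (the actual rounding and upscale filter may differ):

```python
import math

def target_size(width: int, height: int, total: int = 1024 * 1024) -> tuple[int, int]:
    """Scale (width, height) so the total pixel count is roughly `total`,
    preserving aspect ratio. Rounding behavior here is an assumption, not
    taken from the node's source."""
    scale = math.sqrt(total / (width * height))
    return round(width * scale), round(height * scale)

# A 2048x1024 input is scaled down to roughly half its area.
w, h = target_size(2048, 1024)
```

For example, a 2048 × 1024 image (about 2 megapixels) is scaled down to roughly 1448 × 724, whose area is close to the 1024 × 1024 target, while the 2:1 aspect ratio is preserved.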

Outputs

| Output Name | Data Type | Description |
|-------------|-----------|-------------|
| CONDITIONING | CONDITIONING | Conditioning data containing text tokens and optional reference latents for image generation |
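ComfyUI conditioning is conventionally a list of `[tensor, options]` pairs, and attaching reference latents amounts to appending the encoded latent under a key in each entry's options dict. The sketch below illustrates that structure with plain Python lists standing in for tensors; the `"reference_latents"` key name and the append semantics are assumptions for illustration, not confirmed by this page:

```python
def attach_reference_latents(conditioning, latent):
    """Append `latent` under the (assumed) "reference_latents" key of every
    conditioning entry's options dict, without mutating the input."""
    out = []
    for cond, options in conditioning:
        options = dict(options)  # shallow copy so the caller's dict is untouched
        options["reference_latents"] = options.get("reference_latents", []) + [latent]
        out.append([cond, options])
    return out

# Plain lists stand in for the text-embedding tensor and the image latent.
conditioning = [[[0.1, 0.2, 0.3], {"pooled_output": [0.5]}]]
ref_latent = [[0.0] * 4]
conditioning = attach_reference_latents(conditioning, ref_latent)
```

After the call, each conditioning entry still carries its text embedding, and downstream sampler nodes can read the reference latents from the options dict.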