Loads a batch of images and their corresponding text captions from a specified directory for training purposes. The node searches the folder for image files and their associated caption text files, resizes the images according to the chosen resize settings, and encodes the captions using the provided CLIP model.
## Inputs
| Parameter | Data Type | Required | Range | Description |
|---|---|---|---|---|
| folder | STRING | Yes | - | The folder to load images from. |
| clip | CLIP | Yes | - | The CLIP model used to encode the caption text. |
| resize_method | COMBO | No | "None", "Stretch", "Crop", "Pad" | The method used to resize images (default: "None"). |
| width | INT | No | -1 to 10000 | The width to resize images to; -1 keeps the original width (default: -1). |
| height | INT | No | -1 to 10000 | The height to resize images to; -1 keeps the original height (default: -1). |
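The `-1` convention for `width` and `height` can be sketched as a small helper that resolves the final target size from an image's original dimensions. The function name is hypothetical and only illustrates the parameter semantics described above:

```python
def resolve_target_size(orig_w, orig_h, width=-1, height=-1):
    """Return the (w, h) an image would be resized to.

    A value of -1 keeps the corresponding original dimension,
    mirroring the width/height parameters in the table above.
    (Hypothetical helper, not the node's actual implementation.)
    """
    w = orig_w if width == -1 else width
    h = orig_h if height == -1 else height
    return w, h
```

For example, with a 640x480 source image, `width=512, height=-1` would target 512x480, while the defaults leave the image at its original size.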
## Outputs
| Output Name | Data Type | Description |
|---|---|---|
IMAGE | IMAGE | The batch of loaded and processed images. |
CONDITIONING | CONDITIONING | The encoded conditioning data from the text captions. |
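The image/caption pairing described above can be sketched as follows. This is an illustrative sketch only: the function name, the set of recognized image extensions, and the assumption that a caption lives in a same-name `.txt` file next to each image are assumptions, not the node's exact implementation.

```python
import os

# Assumed set of image extensions; the node may recognize others.
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def find_image_caption_pairs(folder):
    """Scan a folder for images and matching same-name .txt caption files.

    Returns a list of (image_path, caption) tuples; images with no
    caption file get an empty caption. (Hypothetical helper.)
    """
    pairs = []
    for name in sorted(os.listdir(folder)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in IMAGE_EXTS:
            continue
        caption_path = os.path.join(folder, stem + ".txt")
        caption = ""
        if os.path.exists(caption_path):
            with open(caption_path, "r", encoding="utf-8") as f:
                caption = f.read().strip()
        pairs.append((os.path.join(folder, name), caption))
    return pairs
```

The resulting captions would then be tokenized and encoded with the supplied CLIP model to produce the CONDITIONING output, while the loaded images are stacked into the IMAGE batch.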