This node loads a specialized text encoder for the LTXV audio model. It combines a specific text encoder file with a checkpoint file to create a CLIP model that can be used for audio-related text conditioning tasks.
Inputs
| Parameter | Data Type | Required | Range | Description |
|---|---|---|---|---|
| text_encoder | STRING | Yes | Multiple options available | The filename of the LTXV text encoder model to load. The available options are loaded from the `text_encoders` folder. |
| ckpt_name | STRING | Yes | Multiple options available | The filename of the checkpoint to load. The available options are loaded from the `checkpoints` folder. |
| device | STRING | No | "default", "cpu" | Specifies the device to load the model onto. Use "cpu" to force loading onto the CPU. The default behavior ("default") uses the system's automatic device placement. |
The text_encoder and ckpt_name parameters work together: the node loads both specified files and combines them into a single, functional CLIP model. Both files must be compatible with the LTXV architecture.
Outputs
| Output Name | Data Type | Description |
|---|---|---|
clip | CLIP | The loaded LTXV CLIP model, ready to be used for encoding text prompts for audio generation. |
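For illustration, here is a sketch of how this node might appear in a ComfyUI API-format workflow, wiring its `clip` output into a standard CLIPTextEncode node. The node's `class_type` name (`LTXVAudioCLIPLoader`) and the model filenames shown are assumptions for this example; check your installation for the actual node class name and the files present in your `text_encoders` and `checkpoints` folders.

```json
{
  "1": {
    "class_type": "LTXVAudioCLIPLoader",
    "inputs": {
      "text_encoder": "ltxv_text_encoder.safetensors",
      "ckpt_name": "ltxv_audio.safetensors",
      "device": "default"
    }
  },
  "2": {
    "class_type": "CLIPTextEncode",
    "inputs": {
      "clip": ["1", 0],
      "text": "gentle rain falling on a tin roof"
    }
  }
}
```

The `["1", 0]` reference connects the first (and only) output slot of node 1, the loaded CLIP model, to the `clip` input of the text-encode node, which then produces the conditioning used for audio generation.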