This documentation was AI-generated. If you find any errors or have suggestions for improvement, please feel free to contribute!
The TrainLoraNode creates and trains a LoRA (Low-Rank Adaptation) model on a diffusion model using provided latents and conditioning data. It allows you to fine-tune a model with custom training parameters, optimizers, and loss functions. The node outputs the trained model with LoRA applied, the LoRA weights, training loss metrics, and the total training steps completed.
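As background, the low-rank idea behind LoRA is that a full weight update `delta_W` (shape `d_out x d_in`) is approximated by the product of two small matrices `B @ A` of rank `r`, so only `r * (d_out + d_in)` numbers are trained instead of `d_out * d_in`. The pure-Python sketch below is illustrative only, not the node's implementation; all names and dimensions are hypothetical.

```python
# Illustrative sketch of the low-rank factorization behind LoRA.
# Not the node's internals: matrix sizes and names are made up.

def matmul(B, A):
    """Multiply a (d_out x r) matrix by an (r x d_in) matrix."""
    d_out, r, d_in = len(B), len(A), len(A[0])
    return [[sum(B[i][k] * A[k][j] for k in range(r)) for j in range(d_in)]
            for i in range(d_out)]

d_out, d_in, r = 4, 6, 2                  # r plays the role of the node's `rank` input
B = [[0.0] * r for _ in range(d_out)]     # B is typically initialized to zero
A = [[0.1] * d_in for _ in range(r)]      # A is typically randomly initialized
delta_W = matmul(B, A)                    # low-rank stand-in for the full update

full_params = d_out * d_in                # parameters in a full update: 24
lora_params = r * (d_out + d_in)          # parameters actually trained: 20
print(full_params, lora_params)
```

Because `B` starts at zero, the initial `delta_W` is zero, so training begins from the unmodified base model; the `rank` input trades expressiveness against LoRA size.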

Inputs

| Parameter | Data Type | Required | Range | Description |
|---|---|---|---|---|
| model | MODEL | Yes | - | The model to train the LoRA on. |
| latents | LATENT | Yes | - | The latents to use for training; they serve as the dataset/input to the model. |
| positive | CONDITIONING | Yes | - | The positive conditioning to use for training. |
| batch_size | INT | Yes | 1-10000 | The batch size to use for training (default: 1). |
| grad_accumulation_steps | INT | Yes | 1-1024 | The number of gradient accumulation steps to use for training (default: 1). |
| steps | INT | Yes | 1-100000 | The number of steps to train the LoRA for (default: 16). |
| learning_rate | FLOAT | Yes | 0.0000001-1.0 | The learning rate to use for training (default: 0.0005). |
| rank | INT | Yes | 1-128 | The rank of the LoRA layers (default: 8). |
| optimizer | COMBO | Yes | "AdamW", "Adam", "SGD", "RMSprop" | The optimizer to use for training (default: "AdamW"). |
| loss_function | COMBO | Yes | "MSE", "L1", "Huber", "SmoothL1" | The loss function to use for training (default: "MSE"). |
| seed | INT | Yes | 0-18446744073709551615 | The seed to use for training (used in the generator for LoRA weight initialization and noise sampling) (default: 0). |
| training_dtype | COMBO | Yes | "bf16", "fp32" | The dtype to use for training (default: "bf16"). |
| lora_dtype | COMBO | Yes | "bf16", "fp32" | The dtype to use for the LoRA (default: "bf16"). |
| algorithm | COMBO | Yes | Multiple options available | The algorithm to use for training. |
| gradient_checkpointing | BOOLEAN | Yes | - | Use gradient checkpointing during training (default: True). |
| existing_lora | COMBO | Yes | Multiple options available | The existing LoRA to append to; set to None for a new LoRA (default: "[None]"). |
Note: The number of positive conditioning inputs must match the number of latent images. If only one positive conditioning is provided with multiple images, it will be automatically repeated for all images.
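Because `grad_accumulation_steps` multiplies the effective batch size, it is worth estimating how much data each optimizer update sees. The arithmetic below is a planning sketch under the assumption that each training step consumes `batch_size * grad_accumulation_steps` latents; verify the exact step accounting against your ComfyUI version.

```python
# Planning arithmetic only -- assumes one optimizer update consumes
# batch_size * grad_accumulation_steps latents, which may not match
# the node's internal step accounting exactly.
batch_size = 4
grad_accumulation_steps = 8
steps = 16

effective_batch = batch_size * grad_accumulation_steps  # latents per update: 32
samples_seen = steps * effective_batch                  # latents over the run: 512
print(effective_batch, samples_seen)
```

Raising `grad_accumulation_steps` gives the gradient quality of a larger batch without the peak memory cost of actually batching that many latents at once.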

Outputs

| Output Name | Data Type | Description |
|---|---|---|
| model_with_lora | MODEL | The original model with the trained LoRA applied. |
| lora | LORA_MODEL | The trained LoRA weights, which can be saved or applied to other models. |
| loss | LOSS_MAP | A dictionary containing the training loss values over time. |
| steps | INT | The total number of training steps completed (including any previous steps from an existing LoRA). |
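The exact structure of the LOSS_MAP output is not specified here. Assuming it behaves like a mapping from step index to loss value, a downstream summary could look like the sketch below; the `loss` dictionary is fabricated example data, not real training output.

```python
# Hypothetical LOSS_MAP contents -- the real keys and shape depend on
# the ComfyUI implementation; this only shows summarizing such a map.
loss = {0: 0.31, 1: 0.24, 2: 0.19, 3: 0.17}

values = list(loss.values())
mean_loss = sum(values) / len(values)   # average loss over the run
final_loss = values[-1]                 # loss at the last recorded step
print(round(mean_loss, 4), final_loss)
```

A steadily decreasing final loss relative to the mean is a quick sanity check that training made progress before you save the LoRA.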