The CLIPAttentionMultiply node allows you to adjust the attention mechanism in CLIP models by applying multiplication factors to different components of the self-attention layers. It works by modifying the query, key, value, and output projection weights and biases in the CLIP model’s attention mechanism. This experimental node creates a modified copy of the input CLIP model with the specified scaling factors applied.

Inputs

| Parameter | Data Type | Input Type | Default | Range | Description |
|-----------|-----------|------------|---------|-------|-------------|
| `clip` | CLIP | required | - | - | The CLIP model to modify |
| `q` | FLOAT | required | 1.0 | 0.0 - 10.0 | Multiplication factor for query projection weights and biases |
| `k` | FLOAT | required | 1.0 | 0.0 - 10.0 | Multiplication factor for key projection weights and biases |
| `v` | FLOAT | required | 1.0 | 0.0 - 10.0 | Multiplication factor for value projection weights and biases |
| `out` | FLOAT | required | 1.0 | 0.0 - 10.0 | Multiplication factor for output projection weights and biases |

Outputs

| Output Name | Data Type | Description |
|-------------|-----------|-------------|
| CLIP | CLIP | A modified CLIP model with the specified attention scaling factors applied |
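To make the scaling behavior concrete, the sketch below illustrates the idea of multiplying the query, key, value, and output projection parameters by their respective factors. This is a hedged illustration, not ComfyUI's actual implementation: the `scale_attention` function and the dict-of-lists model representation are hypothetical stand-ins for a real state dict of tensors, chosen so the example stays self-contained.

```python
# Hedged sketch (NOT ComfyUI's actual code): shows how per-component
# multiplication factors could be applied to a CLIP model's self-attention
# parameters. The "model" here is a plain dict mapping parameter names to
# lists of floats, standing in for a real state dict of tensors.

def scale_attention(state_dict, q=1.0, k=1.0, v=1.0, out=1.0):
    """Return a scaled copy; the input state_dict is left unmodified,
    mirroring the node's behavior of patching a copy of the model."""
    factors = {
        "self_attn.q_proj": q,
        "self_attn.k_proj": k,
        "self_attn.v_proj": v,
        "self_attn.out_proj": out,
    }
    scaled = {}
    for name, values in state_dict.items():
        factor = 1.0
        for component, f in factors.items():
            if component in name:  # matches both .weight and .bias entries
                factor = f
                break
        scaled[name] = [x * factor for x in values]
    return scaled

# Toy example: two attention parameters plus one unrelated MLP parameter,
# which is left untouched because only self-attention projections are scaled.
sd = {
    "layers.0.self_attn.q_proj.weight": [1.0, 2.0],
    "layers.0.self_attn.out_proj.bias": [0.5],
    "layers.0.mlp.fc1.weight": [3.0],
}
patched = scale_attention(sd, q=2.0, out=0.5)
```

With `q=2.0` the query weights double, with `out=0.5` the output bias halves, and parameters outside the self-attention projections pass through unchanged, which is why the default factor of 1.0 for every input leaves the model's behavior identical to the original.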