The Wan Image to Video node generates video content starting from a single input image and a text prompt. It creates video sequences by extending the initial frame according to the provided description, with options to control video quality, duration, and audio integration.

Inputs

| Parameter | Data Type | Required | Range | Description |
| --- | --- | --- | --- | --- |
| `model` | COMBO | Yes | `"wan2.5-i2v-preview"` | Model to use (default: `"wan2.5-i2v-preview"`) |
| `image` | IMAGE | Yes | - | Input image that serves as the first frame for video generation |
| `prompt` | STRING | Yes | - | Prompt describing the elements and visual features; supports English and Chinese (default: empty) |
| `negative_prompt` | STRING | No | - | Negative text prompt to guide what to avoid (default: empty) |
| `resolution` | COMBO | No | `"480P"`, `"720P"`, `"1080P"` | Video resolution (default: `"480P"`) |
| `duration` | INT | No | 5 or 10 | Video duration in seconds (default: 5) |
| `audio` | AUDIO | No | - | Optional audio track; must contain a clear, loud voice without extraneous noise or background music |
| `seed` | INT | No | 0-2147483647 | Seed to use for generation (default: 0) |
| `generate_audio` | BOOLEAN | No | - | If no audio input is provided, generate audio automatically (default: False) |
| `prompt_extend` | BOOLEAN | No | - | Whether to enhance the prompt with AI assistance (default: True) |
| `watermark` | BOOLEAN | No | - | Whether to add an "AI generated" watermark to the result (default: True) |
Constraints:
  • Exactly one input image is required for video generation
  • Duration parameter only accepts values of 5 or 10 seconds
  • When audio is provided, it must be between 3.0 and 29.0 seconds in duration
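The constraints above can be checked before submitting a generation request. The helper below is a minimal sketch, not part of the node's actual API; the function and constant names are illustrative only.

```python
from typing import List, Optional

# Allowed values taken from the parameter table and constraints above.
VALID_DURATIONS = {5, 10}
VALID_RESOLUTIONS = {"480P", "720P", "1080P"}
AUDIO_MIN_S, AUDIO_MAX_S = 3.0, 29.0

def validate_i2v_inputs(duration: int,
                        resolution: str = "480P",
                        audio_seconds: Optional[float] = None) -> List[str]:
    """Return a list of constraint violations; an empty list means the inputs are valid."""
    errors = []
    if duration not in VALID_DURATIONS:
        errors.append(f"duration must be 5 or 10 seconds, got {duration}")
    if resolution not in VALID_RESOLUTIONS:
        errors.append(f"unsupported resolution: {resolution}")
    if audio_seconds is not None and not (AUDIO_MIN_S <= audio_seconds <= AUDIO_MAX_S):
        errors.append(f"audio must be 3.0-29.0 seconds, got {audio_seconds}")
    return errors
```

For example, `validate_i2v_inputs(10, "1080P", 12.0)` passes, while a 30-second audio clip or a 7-second duration would be reported as a violation.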

Outputs

| Output Name | Data Type | Description |
| --- | --- | --- |
| `output` | VIDEO | Generated video based on the input image and prompt |
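When driving ComfyUI programmatically, nodes are described as entries in an API-format workflow JSON. The sketch below shows what an entry for this node might look like; the `class_type` string and the upstream node reference are assumptions for illustration, so check a workflow exported from your own ComfyUI instance for the real values.

```python
import json

def build_node_entry(image_ref, prompt_text, duration=5,
                     resolution="480P", seed=0, generate_audio=False):
    """Build one API-format workflow entry for the Wan Image to Video node."""
    return {
        # Assumed class name for illustration; verify against your exported workflow.
        "class_type": "WanImageToVideoApi",
        "inputs": {
            "model": "wan2.5-i2v-preview",
            "image": image_ref,  # e.g. ["1", 0]: output slot 0 of an upstream LoadImage node "1"
            "prompt": prompt_text,
            "negative_prompt": "",
            "resolution": resolution,
            "duration": duration,
            "seed": seed,
            "generate_audio": generate_audio,
            "prompt_extend": True,
            "watermark": True,
        },
    }

entry = build_node_entry(["1", 0], "A cat stretching in morning light")
print(json.dumps(entry, indent=2))
```

Defaults in the sketch mirror the parameter table above (5-second 480P video, seed 0, prompt extension and watermark enabled).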