The ElevenLabs Text to Dialogue node generates a multi-speaker audio dialogue from text. It allows you to create a conversation by specifying different text lines and distinct voices for each participant. The node sends the dialogue request to the ElevenLabs API and returns the generated audio.
Inputs
| Parameter | Data Type | Required | Range | Description |
|---|---|---|---|---|
| stability | FLOAT | No | 0.0 - 1.0 | Voice stability. Lower values give broader emotional range; higher values produce more consistent but potentially monotonous speech. (default: 0.5) |
| apply_text_normalization | COMBO | No | "auto", "on", "off" | Text normalization mode. "auto" lets the system decide, "on" always applies normalization, "off" skips it. |
| model | COMBO | No | "eleven_v3" | Model to use for dialogue generation. |
| inputs | DYNAMICCOMBO | Yes | "1", "2", "3", "4", "5", "6", "7", "8", "9", "10" | Number of dialogue entries. Selecting a number generates that many text and voice input fields. |
| language_code | STRING | No | - | ISO-639-1 or ISO-639-3 language code (e.g., "en", "es", "fra"). Leave empty for automatic detection. (default: empty) |
| seed | INT | No | 0 - 4294967295 | Seed for reproducibility. (default: 1) |
| output_format | COMBO | No | "mp3_44100_192", "opus_48000_192" | Audio output format. |
The `inputs` parameter is dynamic. When you select a number (e.g., "3"), the node displays three corresponding pairs of text and voice input fields (`text1`, `voice1`, `text2`, `voice2`, `text3`, `voice3`). Each text field must contain at least one character.
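As a rough illustration of how the dynamic fields map onto a dialogue request, the sketch below collects `textN`/`voiceN` pairs and assembles a payload. The function name and the payload shape are assumptions for illustration, not the exact ElevenLabs API schema or the node's internal code.

```python
def build_dialogue_payload(num_inputs: int, fields: dict) -> dict:
    """Hypothetical sketch: collect textN/voiceN pairs into a request payload.

    The payload shape below is an assumption; consult the ElevenLabs
    Text to Dialogue API reference for the authoritative schema.
    """
    entries = []
    for i in range(1, num_inputs + 1):
        text = fields[f"text{i}"]
        if not text:
            # Mirrors the node's rule: each text field must contain
            # at least one character.
            raise ValueError(f"text{i} must not be empty")
        entries.append({"text": text, "voice_id": fields[f"voice{i}"]})
    return {"model_id": "eleven_v3", "inputs": entries}

payload = build_dialogue_payload(2, {
    "text1": "Hello there!", "voice1": "voice_a",
    "text2": "Hi, how are you?", "voice2": "voice_b",
})
```

With `inputs` set to "2", the node would gather two text/voice pairs in order, so the generated audio alternates between the two voices in the sequence given.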
Outputs
| Output Name | Data Type | Description |
|---|---|---|
| audio | AUDIO | The generated multi-speaker dialogue audio in the selected output format. |