This documentation was AI-generated. If you find any errors or have suggestions for improvement, please feel free to contribute!
The OpenAIChatConfig node allows setting additional configuration options for the OpenAI Chat Node. It provides advanced settings that control how the model generates responses, including truncation behavior, output length limits, and custom instructions.

Inputs

| Parameter | Data Type | Required | Range | Description |
|---|---|---|---|---|
| truncation | COMBO | Yes | "auto", "disabled" | The truncation strategy to use for the model response. `auto`: if the context of this response and previous ones exceeds the model's context window size, the model truncates the response to fit by dropping input items from the middle of the conversation. `disabled`: if a model response would exceed the model's context window size, the request fails with a 400 error (default: "auto") |
| max_output_tokens | INT | No | 16-16384 | An upper bound on the number of tokens that can be generated for a response, including visible output tokens (default: 4096) |
| instructions | STRING | No | - | Additional instructions for the model response (multiline input supported) |
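As a rough illustration of how these inputs fit together, the sketch below assembles them into a plain dictionary and enforces the ranges listed above. The function name `build_chat_config` and the dict representation are assumptions for illustration; the node's actual internal representation may differ, though the field names mirror the OpenAI Responses API parameters of the same names.

```python
def build_chat_config(truncation="auto", max_output_tokens=4096, instructions=None):
    """Hypothetical sketch: validate the node's inputs and assemble a config dict."""
    # truncation is a COMBO input limited to two values
    if truncation not in ("auto", "disabled"):
        raise ValueError("truncation must be 'auto' or 'disabled'")
    # max_output_tokens must stay within the documented 16-16384 range
    if not 16 <= max_output_tokens <= 16384:
        raise ValueError("max_output_tokens must be between 16 and 16384")
    config = {
        "truncation": truncation,
        "max_output_tokens": max_output_tokens,
    }
    # instructions is optional; omit the key when it is not provided
    if instructions:
        config["instructions"] = instructions
    return config

config = build_chat_config(
    truncation="disabled",
    max_output_tokens=1024,
    instructions="Answer concisely.",
)
```

With `truncation="disabled"`, a request whose context exceeds the model's window would fail with a 400 error rather than being silently shortened, which is useful when dropped conversation history would be unacceptable.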

Outputs

| Output Name | Data Type | Description |
|---|---|---|
| OPENAI_CHAT_CONFIG | OPENAI_CHAT_CONFIG | Configuration object containing the specified settings for use with OpenAI Chat nodes |