Generate text based on the given text input.
Text Generation AI Service request.
A prompt template is a string that contains placeholders for parameters that will be replaced with parameter values before the prompt is submitted to the model.
A default prompt template is set for each model configured for the Text Generation AI Service. Individual requests can override the default template by including the promptTemplate parameter.
The following request parameters are automatically injected into the prompt template if the associated placeholder is present: input, system, and chatContext. Models with built-in support for system prompts and chat message history do not need to include system or chatContext in the prompt template.
Additional parameters can be provided in the parameters map as key-value pairs.
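The injection behavior described above can be sketched with Python's string.Template, which uses the same `${name}` placeholder syntax as the `${system}` placeholder mentioned below. The template text and the `context` parameter here are illustrative assumptions, not the service's actual defaults.

```python
from string import Template

# Hypothetical prompt template; real default templates are set per model by an Admin.
prompt_template = "${system}\n\nContext: ${context}\n\nUser: ${input}"

# Request parameters plus one custom entry from the parameters map.
values = {
    "system": "You are a helpful assistant.",
    "input": "Summarize last quarter's sales.",
    "context": "Retail dashboard",  # custom parameter with a matching placeholder
}

# safe_substitute fills only placeholders that have a matching value, mirroring
# "injected into the prompt template if the associated placeholder is present".
prompt = Template(prompt_template).safe_substitute(values)
print(prompt)
```

A placeholder with no matching parameter is simply left in place by safe_substitute, so a template can mix injected and literal `${...}` text without raising an error.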
The input text.
The AI Session ID. If provided, this request will be associated with the specified AI Session.
The prompt template to use for the Text Generation task. The default prompt template will be used if not provided.
Custom parameters to inject into the prompt template if an associated placeholder is present.
The ID of the model to use for Text Generation. The specified model must be configured for the Text Generation AI Service by an Admin.
Additional model-specific configuration parameters as key-value pairs, e.g. temperature or max_tokens.
The system message to use for the Text Generation task. If not provided, the default system message will be used. If the model does not include built-in support for system prompts, this parameter may be included in the prompt template using the "${system}" placeholder.
Controls randomness in the model's output. Lower values make output more deterministic.
The maximum number of tokens to generate in the response.
Model response format specification for structured outputs.
Configuration for reasoning behavior and effort level.
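Putting the parameters above together, a request body might look like the following sketch. Only promptTemplate and parameters are names confirmed by this page; every other field name (input, sessionId, model, modelConfiguration, system) is an assumption for illustration, not the confirmed API schema.

```python
import json

# Hypothetical request body; most field names are assumptions, not confirmed schema.
request_body = {
    "input": "Summarize last quarter's sales.",       # the input text
    "sessionId": "session-123",                       # assumed name for the AI Session ID
    "promptTemplate": "${system}\n\nUser: ${input}",  # overrides the default template
    "parameters": {"context": "Retail dashboard"},    # custom placeholder values
    "model": "example-model-id",                      # must be configured by an Admin
    "modelConfiguration": {                           # model-specific settings
        "temperature": 0.2,                           # lower = more deterministic
        "max_tokens": 256,                            # cap on generated tokens
    },
    "system": "You are a helpful assistant.",
}
print(json.dumps(request_body, indent=2))
```

Omitting promptTemplate, system, or modelConfiguration falls back to the defaults described above; only the input text is essential to the request.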
TextAIResponse: the generated text and model token usage information.
Response from a Text AI Service.
The formatted prompt that was used to generate the response.
The list of choices generated by the model.
The ID of the model used to generate the response.
The ID of the AI Session associated with this request.
The output of the model.
The token usage from the model provider.
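A minimal sketch of reading the response fields described above. The JSON field names (prompt, choices, output, modelId, sessionId, usage) are assumptions inferred from the field descriptions, not confirmed wire-format names.

```python
# Hypothetical TextAIResponse payload; field names are illustrative assumptions.
response = {
    "prompt": "You are a helpful assistant.\n\nUser: Summarize last quarter's sales.",
    "choices": [{"output": "Sales rose 4% quarter over quarter."}],  # model choices
    "modelId": "example-model-id",    # model used to generate the response
    "sessionId": "session-123",       # associated AI Session, if any
    "usage": {"prompt_tokens": 21, "completion_tokens": 9},  # provider token usage
}

# The generated text is the output of the first choice.
generated_text = response["choices"][0]["output"]
total_tokens = sum(response["usage"].values())
print(generated_text, total_tokens)
```

Since choices is a list, a client should index or iterate it rather than assume a single result, even when requests typically return one choice.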