Create Chat Completion (Azure OpenAI) v1.0.0
Creates a completion for the chat message using the OpenAI Chat API via the Azure OpenAI service.
How can I use the Step?
The Step lets you use GPT-3.5 Turbo and GPT-4 to perform natural language processing tasks, such as drafting written content, coding, answering questions, creating chatbots, and more. You can use the Step to build conversational agents for customer service, automate content creation for social media, create language-learning apps, and enhance virtual assistants.
How does the Step work?
You authorize the Step, select the model, and provide it with messages and parameters. With these inputs, the Step calls the OpenAI Chat API to generate the completion message as output.
Read the Chat Completions guide to learn more.
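Conceptually, the request the Step sends can be sketched as below. This is an illustrative assumption of the underlying Azure OpenAI REST call, not the Step's internal code; the endpoint, deployment name, API version, and helper name are placeholders you would substitute with your own values.

```python
# Sketch of an Azure OpenAI chat completion request (assumed shape).
# Azure OpenAI authenticates with an "api-key" header and addresses
# a deployment of the model rather than the model name directly.

def build_chat_request(endpoint, deployment, api_key, messages):
    """Assemble the URL, headers, and JSON body for a chat completion request."""
    url = (
        f"{endpoint}/openai/deployments/{deployment}"
        "/chat/completions?api-version=2023-05-15"
    )
    headers = {
        "api-key": api_key,
        "Content-Type": "application/json",
    }
    body = {"messages": messages}
    return url, headers, body

url, headers, body = build_chat_request(
    "https://my-resource.openai.azure.com",  # placeholder resource endpoint
    "my-gpt-35-turbo",                       # placeholder deployment name
    "<API_KEY>",
    [{"role": "user", "content": "Hello!"}],
)
```

Sending `body` as a POST request to `url` with those headers returns the chat completion shown later in the Output example section.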
Prerequisites
You must have an Azure OpenAI resource to get started. To create one, follow the Azure documentation.
Authorization
To authorize the Step, you have two options:
- Inherit from previous Step (default): Use the same authorization as the previous Step in the Flow.
- Select authorization in the current Step: Choose an existing authorization or create a new one.
If you need to create a new authorization, follow these steps:
- Choose Select authorization in the current Step and then select Create a new authorization from the list.
- In the Add authorization modal window, provide the required details:
- Authorization name: Name your new authorization.
- API Key: Enter the API key of the Azure OpenAI resource.
- Endpoint: Enter the Azure OpenAI resource endpoint.
- Click Add to confirm the settings and add your new authorization.
Note: You can find the endpoint and API keys in the Azure OpenAI resource under Resource Management > Keys and Endpoint.
Request settings
The settings in this section reflect the request body of the Create chat completion endpoint, which you must use as your primary reference:
- Model: Model to use.
- Messages: An array of message objects used to generate chat completions. Each object must have a "role" ("system", "user", or "assistant") and a "content" property. Begin with a system message, then alternate user and assistant messages.
- Temperature: Sampling temperature; higher values make the output more random and creative, lower values more focused and deterministic.
- Max tokens: The maximum number of tokens to generate in the response.
- Frequency penalty: Penalizes tokens in proportion to how often they have already appeared, reducing verbatim repetition.
- Presence penalty: Penalizes tokens that have appeared at all, nudging the model toward new topics.
- Stop: One or more sequences at which the model stops generating further tokens.
- Top P: Nucleus sampling; the model considers only the tokens comprising the top P probability mass. Adjust this or Temperature, not both.
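The settings above map onto the fields of the request body. The sketch below shows one plausible combination; the values are examples to tune for your task, not defaults of the Step.

```python
# Illustrative request body combining the Request settings fields.
request_body = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Where was the 2020 World Series played?"},
    ],
    "temperature": 0.7,       # moderate randomness
    "max_tokens": 64,         # cap on generated tokens
    "frequency_penalty": 0.0, # no repetition penalty
    "presence_penalty": 0.0,  # no new-topic nudge
    "stop": None,             # or e.g. ["\n\n"] to stop at a blank line
    "top_p": 1.0,             # consider the full probability mass
}
```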
In addition, the Step lets you test and save request parameters without activating the Flow. To do so, follow these steps:
- Click Test API Request.
- In the popup, alter request parameters.
- Form the message array during the conversation:
- Submit a "system" message to set the assistant's behavior.
- Receive an "assistant" response and continue the conversation as a "user."
- Save your progress to populate all parameters in the Request settings section.
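The conversation-forming steps above amount to appending messages to the array in alternating roles, which can be sketched as:

```python
# Forming the message array across conversation turns: start with a
# system message, then alternate user and assistant messages.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

# First user turn.
messages.append({"role": "user", "content": "Who won the 2020 World Series?"})

# The API returns an assistant reply; append it before continuing.
messages.append({"role": "assistant",
                 "content": "The Los Angeles Dodgers won the 2020 World Series."})

# Continue the conversation as the user.
messages.append({"role": "user", "content": "Where was it played?"})

roles = [m["role"] for m in messages]
```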
Advanced settings
In this section, you can specify additional request parameters for your task. All parameters are optional and, except for Result Example, correspond to the Create chat completion endpoint:
- Stream: Toggle to stream the API response as it is generated.
- N: Number of completions to generate for each request.
- Result Example: An example of the output structure, used to access a specific element of the Merge field from other Steps of your Flow.
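When N is greater than 1, the "choices" array in the output holds one entry per requested completion, addressable by index. The structure below is a hypothetical example of such an output, not an actual API response.

```python
# Hypothetical output when N=2: one "choices" entry per completion.
output = {
    "choices": [
        {"index": 0, "message": {"role": "assistant", "content": "First draft"}},
        {"index": 1, "message": {"role": "assistant", "content": "Second draft"}},
    ]
}

# Access a specific completion by its index.
second = output["choices"][1]["message"]["content"]
```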
Merge field settings
The Step returns the result as a JSON object and stores it in the Merge field variable. This way, you can access the output JSON object from any point of your Flow.
Output example
The output contains information about the generated chat completion and has the following structure:
{
  "id": "chatcmpl-72QmATd4drXsnVzT6gd5e60jsGcYU",
  "object": "chat.completion",
  "created": 1680813938,
  "model": "gpt-3.5-turbo-0301",
  "usage": {
    "prompt_tokens": 57,
    "completion_tokens": 16,
    "total_tokens": 73
  },
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "The 2020 World Series was played at Globe Life Field in Arlington, Texas"
      },
      "finish_reason": "length",
      "index": 0
    }
  ]
}
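From another Step, you would typically pull a few fields out of this structure via the Merge field. A minimal parsing sketch, using the output example above:

```python
import json

# The output example from this document, as a raw JSON string.
raw = '''{
  "id": "chatcmpl-72QmATd4drXsnVzT6gd5e60jsGcYU",
  "object": "chat.completion",
  "created": 1680813938,
  "model": "gpt-3.5-turbo-0301",
  "usage": {
    "prompt_tokens": 57,
    "completion_tokens": 16,
    "total_tokens": 73
  },
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "The 2020 World Series was played at Globe Life Field in Arlington, Texas"
      },
      "finish_reason": "length",
      "index": 0
    }
  ]
}'''

result = json.loads(raw)
answer = result["choices"][0]["message"]["content"]   # the generated text
total_tokens = result["usage"]["total_tokens"]        # billing-relevant count
finish_reason = result["choices"][0]["finish_reason"] # "length" = cut off by Max tokens
```

Note that a finish_reason of "length" means the response was truncated by the Max tokens limit; "stop" means the model finished naturally.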
To learn more, review the Response format.
Error Handling
By default, the Handle error toggle is on, and the Step handles errors with a separate exit. If any error occurs during the Step execution, the Flow proceeds down the error exit.
If the Handle error toggle is disabled, the Step does not handle errors. In this case, if any error occurs during the Step execution, the Flow fails immediately after exceeding the Flow timeout. To prevent the Flow from being suspended and to keep handling errors, you can place the Flow Error Handling Step before the main logic of your Flow.
Reporting
After the Step completes, it generates a report that includes its execution status and other details. You can customize the report by adjusting the Step's log level and adding tags.
Log level
By default, the Step's log level matches that of the Flow. You can change the Step's log level by selecting an appropriate option from the Log level dropdown.
Tags
Tags provide a way to classify and search for sessions based on their attributes. To create a new tag, specify its category, label, and value. You can then use tags to filter and group the sessions in the report.
Service dependencies
- flow builder - v2.28.3
- event-manager - v2.3.0
- deployer - v2.6.0
- library - v2.11.3
- studio - v2.64.1
Release notes
v1.0.0
- Initial release