Create Text or Code Completion (Azure OpenAI) v1.0.0
Creates text or code completion for a provided prompt and parameters using Azure OpenAI service.
How can I use the Step?
Powered by OpenAI models, the Step can handle a variety of tasks that involve understanding or generating natural language or code. You can classify or summarize text, correct grammar, answer questions, translate text to code, and more. Check out the OpenAI examples and try the corresponding presets to get a better feel for what is possible.
How does the Step work?
You authorize the Step, select the model, and provide the prompt and parameters. Taking these inputs, the Step calls the Azure OpenAI Completions API to generate a completion that matches the context or pattern of the prompt.
For example, if you choose one of the models and give it the prompt, "As Descartes said, I think, therefore," the Step returns "I am" with a high probability.
To learn more, read the Completions guide.
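Under the hood, the Step's call maps onto an HTTP request to your Azure OpenAI resource. The sketch below is illustrative only: the deployment name, api-version, and resource URL are placeholders, and no network call is made; it simply shows the shape of the request the Completions API expects.

```javascript
// Build (but do not send) an Azure OpenAI Completions request.
// Deployment name, api-version, and endpoint below are placeholders.
function buildCompletionRequest(endpoint, apiKey, deployment, prompt) {
  return {
    url: `${endpoint}/openai/deployments/${deployment}/completions?api-version=2023-05-15`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "api-key": apiKey, // the key from Resource Management > Keys and Endpoint
      },
      body: JSON.stringify({
        prompt: prompt,
        max_tokens: 16,
        temperature: 1.0,
      }),
    },
  };
}

// Example (no request is actually sent here):
const req = buildCompletionRequest(
  "https://my-resource.openai.azure.com",
  "MY_API_KEY",
  "my-deployment",
  "As Descartes said, I think, therefore"
);
// To send it: fetch(req.url, req.options).then(r => r.json())
```

The Step assembles an equivalent request for you from the authorization and the Request settings described below.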
Prerequisites
You must have an Azure OpenAI resource to get started. To create one, follow the Azure documentation.
Authorization
To authorize the Step, you have two options:
- Inherit from previous Step (default): Use the same authorization as the previous Step in the Flow.
- Select authorization in the current Step: Choose an existing authorization or create a new one.
If you need to create a new authorization, follow these steps:
- Choose Select authorization in the current Step and then select Create a new authorization from the list.
- In the Add authorization modal window, provide the required details:
- Authorization name: Name your new authorization.
- API Key: Enter the API key of the Azure OpenAI resource.
- Endpoint: Enter the Azure OpenAI resource endpoint.
- Click Add to confirm settings and add your new authorization.
Note: You can find the endpoint and API keys in the Azure OpenAI resource under Resource Management > Keys and Endpoint.
Request settings
Most of the settings in this section mirror the request body of the Completions API, which you should use as your primary reference:
- Preset: Task-specific preset. The chosen preset applies the model, its prompt(s), and its parameters. To learn how to create presets, see Test API request.
- Model: Model to use. For more on specific models, their capabilities, and limitations, refer to the Models overview.
- Prompt: Prompt to generate completions for. It can be a string, an array of strings, an array of tokens, or an array of token arrays. Be sure to read the Prompt design guide.
- Temperature: Controls the randomness of the generated completion.
- Max tokens: Defines the maximum number of tokens in the generated completion.
- Frequency penalty: Penalizes tokens in proportion to how often they have already appeared, discouraging verbatim repetition.
- Presence penalty: Penalizes tokens that have appeared at all, encouraging the model to move on to new topics.
- Stop: One or more sequences at which the model stops generating further tokens.
- Top P: Controls the diversity of the generated completion.
- Post-processing function: A text field for JavaScript code that processes the raw result and returns it in the desired form.
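As an illustration of a post-processing function, the sketch below extracts and trims the text of the first completion. How the Step hands the API response to your code is product-specific; here we assume the function receives the parsed response object and returns the value to store.

```javascript
// Example post-processing function (assumed signature: receives the
// parsed Completions API response, returns the value to store).
function postProcess(result) {
  const text = (result.choices && result.choices[0]) ? result.choices[0].text : "";
  return text.trim();
}
```

Applied to the output example shown in the Merge field settings section, this function returns the string "This is indeed a test".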
Test API request
Additionally, the Step lets you test API requests and process responses without activating the Flow. To do so:
- Click Test API Request.
- In the modal window, modify model parameters and call the Completions API.
- Optional: Create your own preset using the following steps:
- Specify parameters suited for your task.
- Call the API.
- Name your preset.
- Save the preset.
- Save the Flow.
You can now find your newly created preset under the Preset > My presets list.
Advanced settings
In the Result example field, you can specify the output structure so that you can access specific items of the Merge field from other Steps of your Flow.
Merge field settings
The Step returns the result as a JSON object and stores it in the Merge field variable, so you can access the output JSON object from any point of your Flow.
Output example
The output contains information about the generated completion and has the following structure:
{
  "id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
  "object": "text_completion",
  "created": 1589478378,
  "model": "text-davinci-003",
  "choices": [
    {
      "text": "\n\nThis is indeed a test",
      "index": 0,
      "logprobs": null,
      "finish_reason": "length"
    }
  ],
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 7,
    "total_tokens": 12
  }
}
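To show how individual items of this object can be read, the sketch below walks the same structure in plain JavaScript. The exact merge-field path syntax in the Flow UI is product-specific; only the property paths themselves are taken from the output example above.

```javascript
// Reading individual items from the Step's output object.
// The object mirrors the output example in this document.
const output = {
  id: "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
  object: "text_completion",
  model: "text-davinci-003",
  choices: [{ text: "\n\nThis is indeed a test", index: 0, finish_reason: "length" }],
  usage: { prompt_tokens: 5, completion_tokens: 7, total_tokens: 12 },
};

const completionText = output.choices[0].text.trim(); // "This is indeed a test"
const totalTokens = output.usage.total_tokens;        // 12 = 5 prompt + 7 completion
```

In the Flow, the equivalent items are reached through the Merge field variable using the same paths (choices, usage, and so on).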
Error Handling
By default, the Handle error toggle is on, and the Step handles errors with a separate exit. If any error occurs during the Step execution, the Flow proceeds down the error exit.
If the Handle error toggle is disabled, the Step does not handle errors. In this case, if any error occurs during the Step execution, the Flow fails once the Flow timeout is exceeded. To prevent the Flow from being suspended and to keep handling errors, place the Flow Error Handling Step before the main logic of your Flow.
Reporting
The Step reports once after its execution. You can change the Step log level and add new tags in this section.
Log level
By default, the Step inherits its log level from the Flow's log level. You can change it by selecting an appropriate option from the Log level list.
Tags
Tags help organize and filter session information when generating reports. When adding a new tag, you can specify the tag category, label, and value.
Service dependencies
- flow builder v2.28.3
- event-manager v2.3.0
- deployer v2.6.0
- library v2.11.3
- studio v2.64.1
Release notes
v1.0.0
- Initial release