
Conversation (Lookup) v1.0.0

Enables context-aware conversations by augmenting queries with relevant information from the specified Lookup collection.

How can I use the Step?

You can use the Step for interactive dialogues, automated customer support, and information retrieval from structured document collections. It's particularly adept at handling conversations that require context awareness and precision.

How does the Step work?

The Step queries the specified Lookup collection for information relevant to the question and conversation context, then uses it to generate a contextually rich response.

Prerequisites

  • A Lookup collection for querying.
  • For custom authentication, ensure the proper cross-account settings: the Super Admin permission level for both the account and the Flow, plus the necessary account ID or authentication token.

Input settings

  • Collection: Lookup collection to search and retrieve knowledge to answer the question. Required.
  • Question: question for the conversation. Required.
  • History mode: determines how conversation history is incorporated into the current session. Choose Mergefield to use history supplied directly from a Merge field, or Step Chooser to retrieve history from a linked Chat History (Lookup) Step.
  • History: conversation history for additional context. Optional, integrates with Chat History (Lookup) Step.
  • Properties: properties to include in the search results. Optional.
  • Filtering: Weaviate filter object for advanced result filtering (see the example after this list). Optional.
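
If you use Filtering, the value is a Weaviate where-filter object. Below is a minimal sketch; the property name category and the value billing are hypothetical and should be replaced with properties that actually exist in your Lookup collection.

json
{
  "path": ["category"],
  "operator": "Equal",
  "valueText": "billing"
}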

Advanced settings

Most of the settings in this section correspond to the request body of the Create chat completion endpoint; use it as your primary reference. A sketch of the mapping follows this list:

  • Model: defines the model used to generate the answer. Defaults to OpenAI's gpt-3.5-turbo-16k.
  • Temperature: creativity level for responses. Defaults to Balanced.
  • Maximum distance: defines how closely information retrieved from the Lookup collection matches your question. It must be in a range from 0 to 1, where 0 means only extremely similar passages are accepted, while 1 allows for unrelated passages. Defaults to 0.3 for relevant and on-topic answers.
  • Max tokens: maximum token length for responses.
  • Frequency penalty: encourages the model to generate unique responses. Defaults to 0.
  • Presence penalty: encourages the model to stay on-topic. Defaults to 0.
  • Limit: number of search results. Defaults to 5.
  • Use custom prompt templates: option to modify answer/question instruction templates for context retrieval and answer generation. Optional.
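
For orientation, the sketch below shows how several of these settings might appear in a Create chat completion request body. The values are illustrative placeholders (for example, 0.7 standing in for the Balanced temperature), not the exact payload the Step sends.

json
{
  "model": "gpt-3.5-turbo-16k",
  "temperature": 0.7,
  "max_tokens": 512,
  "frequency_penalty": 0,
  "presence_penalty": 0,
  "messages": [
    { "role": "system", "content": "..." },
    { "role": "user", "content": "..." }
  ]
}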

Cross-account settings

To access the Lookup service of another OneReach account, take these steps:

  1. Enable Use custom authentication token.
  2. Choose one of the following access types:
    • Authentication token
    • Account ID
  3. Enter the token or account ID (depending on the chosen access type).

Merge field settings

The Step outputs results as a JSON object stored under the Merge field name. Learn more in the Merge fields guide.

Output example

The output JSON object contains the answer, matched document details, and debug information. It includes the following properties:

  • result: [object] an object containing the role and content of the generated answer.
  • searchResult: [object] an object containing information about the matched document:
    • id: [string] the id of the matched document.
    • distance: [number] the distance between the matched document and the question/context.
    • content: [string] the content of the matched document.
    • sourceUrl: [string] the source URL of the matched document.
    • loaderMetadata: [object] additional metadata about the matched document.
    • document: [object] an object containing information about the document:
      • id: [string] the id of the document.
      • name: [string] the name of the document.
  • debugInfo: [object] additional debug information about the conversation:
    • generatedQuestion: [string] the generated question based on the conversation context.
    • questionGenerationPrompt: [string] the prompt used for generating the question.
    • questionGenerationTime: [number] the time taken to generate the question.
    • searchTime: [number] the time taken to perform the search.
    • answerGenerationPrompt: [string] the prompt used for generating the answer.
    • answerGenerationTime: [number] the time taken to generate the answer.

Response format:

json
{
  "result": {
    "role": "string",
    "content": "string"
  },
  "searchResult": {
    "id": "string",
    "distance": 0,
    "content": "string",
    "sourceUrl": "string",
    "loaderMetadata": {},
    "document": {
      "id": "string",
      "name": "string"
    }
  },
  "debugInfo": {
    "generatedQuestion": "string",
    "questionGenerationPrompt": "string",
    "questionGenerationTime": 0,
    "searchTime": 0,
    "answerGenerationPrompt": "string",
    "answerGenerationTime": 0
  }
}
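
For example, if the Step's Merge field name were set to conversation (a hypothetical name), a later Step could read the generated answer from conversation.result.content and the source of the matched passage from conversation.searchResult.sourceUrl.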

Error handling

By default, the Step uses a separate exit for error handling. If an error occurs, the Flow continues down the error exit. For more info, see Error and timeout handling.

Reporting

The Step automatically generates Reporting events during its execution, allowing for real-time tracking and analysis of its performance and user interactions. To learn more, see Reporting events.

Service dependencies

  • flow builder - v2.28.3
  • event-manager - v2.3.0
  • deployer - v2.6.0
  • library - v2.11.3
  • studio - v2.64.1

Release notes

v1.0.0

  • Initial release