Configuring an LLM connection

An LLM connection is the configuration that enables a system to communicate with a Large Language Model (LLM), such as models from OpenAI, Azure OpenAI, or AWS Bedrock. An LLM connection is used to:

  • Translate natural language into analytical queries.

  • Power chat-based or voice-based interfaces.

  • Enhance user experience through intelligent, language-based interaction with data.

An LLM connection allows:

  • Sending natural language inputs (e.g., user queries) to the LLM.

  • Receiving generated outputs like MDX/SQL queries, summaries, or explanations.

  • Enabling features such as conversational analytics, natural language querying, or smart recommendations.
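The round trip above can be sketched as a request payload in the style of an OpenAI-compatible chat-completions API. This is a hypothetical illustration, not Kyvos's internal implementation; the function name `build_llm_request` and the model name are assumptions.

```python
# Hypothetical sketch of the kind of payload an LLM connection sends.
# Field names follow the OpenAI chat-completions schema; other
# providers (Azure OpenAI, AWS Bedrock) use different schemas.
def build_llm_request(model, question, temperature=0.2, max_tokens=8000):
    """Build a payload asking the LLM to translate a natural-language
    question into an analytical (SQL/MDX) query."""
    return {
        "model": model,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "messages": [
            {"role": "system",
             "content": "Translate the user's question into a SQL query."},
            {"role": "user", "content": question},
        ],
    }

payload = build_llm_request("gpt-4o", "Total sales by region last quarter")
```

The provider's response to such a request would contain the generated query or summary, which the system then executes or displays.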

Configure a GenAI LLM connection to use Kyvos Dialogs.

Configuring a connection

To configure the GenAI LLM connection, perform the following steps: 

  1. On the navigation pane, click Kyvos and Ecosystem > GenAI Configuration. The page displays information about the GenAI Configuration connection details.

  2. In the Connections pane, click the plus icon to add a new GenAI connection. The connection that you create will be listed in the Connections pane. You can edit the information when needed.

  3. Select the name of the GenAI provider from the Provider list. The system will use the provider to generate output.

  4. To create an LLM connection, enter the following details:

OpenAI

Parameter/Field

Description

Connection Name

A unique name that identifies your GenAI connection.

Provider

Select the OpenAI provider from the list. The system will use this provider to generate output.

URL

The LLM Service URL of the provider-specific endpoint for generating output.

API EndPoint

Specify the endpoint to be used to generate AI-powered conversational responses.

Authentication Key

For OpenAI, only the Authentication Key field is displayed.

Specify a unique key for authenticating and authorizing requests to the provider's endpoint.

NOTE: If the key is not specified, the last provided Authentication Key will be used. To change it, enter a new Authentication Key.

Model

The name of the GenAI LLM model to be used for generating the output.

Refer to the Certified LLM and Embedding Models.

Is Model Fine Tuned

Select one of the following:
Yes: Select this option to fine-tune the model.
No: Select this option if you do not want to fine-tune the model.

Embedding Connection

Select the GenAI embedding provider that the system will use to generate embeddings.

Usage

Select one of the following:

  • MDX Generations: Select this option to use the feature for MDX calculations or MDX queries.

  • Conversational Analytics: Select this option to use the feature for Conversational AI usage.

You can also set a default connection to be used for MDX calculations and MDX queries. To configure this, select the appropriate checkboxes as needed:

  • Default Connection for MDX Generation: Select this checkbox to set the default connection for MDX calculations and MDX queries.

  • Default Connection for Conversational Analytics: Select this checkbox to use the feature for Conversational AI usage.

Allow Sending Data for LLM

Select Yes or No to specify whether the generated questions should include data values.

Generate Content

Select Title, Summary, or Key Insight to determine the content to be generated.

NOTE: For summary and key insights, 'Allow Sending Data for LLM' must be set to 'Yes'.

Max Rows Summary

Enter the maximum number of rows to include in the summary.

NOTE: The default value is 100.

Input Prompt Token Limit

Specify the maximum number of tokens allowed for a prompt in a single request for the current provider.

NOTE: The default value is 8000.

The minimum value is 0.

Output Prompt Token Limit

Specify the maximum number of tokens shared between the prompt and output, which varies by model. One token is approximately four characters for English text.

NOTE: The default value is 8000.

The minimum value is 0.
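The four-characters-per-token rule of thumb mentioned above can be used to pre-check a prompt against the configured limit. This is an illustrative sketch; the function names are assumptions, and real tokenizers count tokens differently per model.

```python
def estimate_tokens(text):
    # Rough heuristic from the docs: one token is approximately
    # four characters of English text.
    return max(1, len(text) // 4)

def within_prompt_limit(prompt, limit=8000):
    # Compare the rough token estimate against the configured
    # Input Prompt Token Limit (default 8000).
    return estimate_tokens(prompt) <= limit
```

For accurate counts you would use the provider's own tokenizer; this estimate is only suitable as a coarse pre-flight check.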

Max Retry Count

Specify the maximum number of retry attempts for generating a correct query.

NOTE: The default value is 0.

Summary Records Threshold

Specify the similarity threshold for query autocorrection.

NOTE: The default value is 0.2.

The minimum value is 2.

LLM Temperature

Specify the LLM temperature, which controls the level of randomness in the output. Lowering the temperature results in less random completions. The responses of the model become increasingly deterministic and repetitive as it approaches zero. It is recommended to adjust either the temperature or top-p, but not both simultaneously.

Copyright Kyvos, Inc. 2025. All rights reserved.