Denodo Assistant¶
This page explains how to enable the Denodo Assistant features that use large language models. The page Denodo Assistant Features that Use Large Language Models lists these features.
To use these features, first set up the connection to the large language model (LLM) API, and then activate this set of features.
LLM Configuration¶
In the LLM Configuration area of the Denodo Assistant section, you set up the large language model provider that the Denodo Assistant will use.
![Denodo Assistant LLM provider configuration](/docs/html/img/9.1/denodo_assistant_provider_configuration.png)
When setting up a provider, click Test Configuration to ensure the configuration is correct. This sends a test request to the AI service.
Denodo supports these LLM providers: Amazon Bedrock, Azure OpenAI Service, OpenAI and Custom (to connect to a custom API).
Amazon Bedrock¶
AWS access key ID. The identifier of your AWS access key pair.
AWS secret access key. The secret access key of your AWS access key pair.
AWS ARN IAM role. The Amazon Resource Name (ARN) of the AWS IAM role.
AWS region. The AWS region where you have access to the Amazon Bedrock service.
Context window (tokens). The maximum number of tokens allowed by the model selected.
Max Output Tokens. The maximum number of tokens that the model is allowed to generate in a single response.
Model id. The unique identifier of the model. Currently, only Amazon Bedrock models from Anthropic's Claude family are supported.
For more information on the AWS authentication parameters, go to the Amazon AWS security credentials reference.
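For illustration, the following is a minimal sketch of how these parameters map onto a direct Amazon Bedrock call using the boto3 SDK. The credentials, region, model id, and prompt are placeholder assumptions, and this is not how Denodo invokes the service internally:

```python
# Minimal sketch of an Amazon Bedrock call with the parameters above.
# Assuming the AWS ARN IAM role (via STS) is omitted for brevity.
import json

import boto3

client = boto3.client(
    "bedrock-runtime",
    region_name="us-east-1",          # AWS region
    aws_access_key_id="AKIA...",      # AWS access key ID
    aws_secret_access_key="...",      # AWS secret access key
)

response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # Model id (Claude family)
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,           # Max Output Tokens
        "messages": [{"role": "user", "content": "Hello"}],
    }),
)
print(json.loads(response["body"].read()))
```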
Azure OpenAI¶
Azure resource name. The name of your Azure resource.
Azure deployment name. The deployment name you chose when you deployed the model.
API version. The API version to use for this operation. This follows the YYYY-MM-DD format.
API key. The API key.
Context window (tokens). The maximum number of tokens allowed by the model you deployed.
Max Output Tokens. The maximum number of tokens that the model is allowed to generate in a single response.
For more information on the Azure OpenAI Service API parameters, go to the Azure OpenAI Service REST API reference.
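As an illustration of where each parameter is used, here is a minimal sketch of an Azure OpenAI Service call with the openai Python SDK (version 1.x). The resource name, deployment name, API version, and key are placeholder assumptions:

```python
# Minimal sketch of an Azure OpenAI Service chat completions call.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # Azure resource name
    api_key="YOUR-API-KEY",                                   # API key
    api_version="2024-02-01",                                 # API version (YYYY-MM-DD)
)

response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT",   # Azure deployment name
    max_tokens=1024,           # Max Output Tokens
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```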
OpenAI¶
API key. This is the OpenAI API key. This parameter is required.
Organization ID. If configured, a header specifying which organization to use for API requests is sent. This is useful for users who belong to multiple organizations. This parameter is optional.
Model. The model used to generate the query. The dropdown lists the models tested by Denodo; however, you can configure an untested OpenAI model by clicking the edit icon. Note that some of the listed models may not work depending on your organization’s OpenAI account.
Context window (tokens). The maximum number of tokens allowed by the model selected.
Max Output Tokens. The maximum number of tokens that the model is allowed to generate in a single response.
For more information on the OpenAI API parameters, go to the OpenAI API reference (https://platform.openai.com/docs/api-reference/authentication).
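For illustration, here is a minimal sketch of the same call against the OpenAI API with the openai Python SDK (version 1.x); the key, organization id, and model are placeholders:

```python
# Minimal sketch of an OpenAI chat completions call with the parameters above.
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",         # API key (required)
    organization="org-...",   # Organization ID (optional; sent as a header)
)

response = client.chat.completions.create(
    model="gpt-4o",           # Model
    max_tokens=1024,          # Max Output Tokens
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```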
Custom Compatibility Mode¶
The Custom compatibility mode parameter determines how the Denodo Assistant sends and processes requests, depending on whether the custom API follows the OpenAI chat completions API approach or the Azure OpenAI Service chat completions API approach:
OpenAI chat completions API approach. A custom API follows this approach if it implements the official OpenAI chat completions API (see https://platform.openai.com/docs/guides/text-generation/chat-completions-api). In this case, the Custom compatibility mode parameter must be set to OpenAI. The custom API does not need to support all the parameters of the OpenAI chat completions API to be compatible with the Denodo Assistant:
Request body. The Denodo Assistant sends requests whose body contains the following parameters: model, messages, and temperature.
Response body. The Denodo Assistant needs the following parameters in the response body: id, object, created, choices, and usage.
Azure OpenAI Service chat completions API approach. A custom API follows this approach if it implements the official Azure OpenAI Service chat completions API (see https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#chat-completions). In this case, the Custom compatibility mode parameter must be set to Azure OpenAI Service. The custom API does not need to support all the parameters of the Azure OpenAI Service chat completions API to be compatible with the Denodo Assistant:
Request body. The Denodo Assistant sends requests whose body contains the following parameters: messages and temperature.
Response body. The Denodo Assistant needs the following parameters in the response body: id, object, created, choices, and usage (see the sketch of this contract after this list).
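To make this contract concrete, the following is a minimal sketch, as Python dictionaries, of the request body the Denodo Assistant sends and the response fields it expects back. All values are illustrative assumptions; only the field names are part of the compatibility contract.

```python
# Minimal OpenAI-compatible contract, sketched as plain Python dictionaries.
# All values below are illustrative; only the field names matter.

# Request body sent by the Denodo Assistant. In the Azure OpenAI Service
# compatibility mode, the "model" field is omitted.
request_body = {
    "model": "my-custom-model",
    "messages": [{"role": "user", "content": "Describe this view"}],
    "temperature": 0.0,
}

# Response body fields the Denodo Assistant expects from the custom API.
response_body = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1700000000,  # Unix timestamp
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "The view contains..."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 12, "completion_tokens": 7, "total_tokens": 19},
}
```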
The parameters of each custom compatibility mode are explained below:
Custom OpenAI¶
Authentication. Enable this option if the custom API requires authentication.
API key. The API key. This parameter is required if authentication is enabled.
Organization ID. If configured, a header specifying which organization to use for API requests is sent. This is useful for users who belong to multiple organizations. Only available when authentication is enabled. This parameter is optional.
OpenAI URI. The URI of your custom API. This parameter is required.
Model. The model used to generate the query. This parameter is required.
Context window (tokens). The maximum number of tokens allowed by your custom model. This parameter is required.
Max Output Tokens. The maximum number of tokens that the model is allowed to generate in a single response. This parameter is required.
Custom Azure OpenAI Service¶
Authentication. Enable this option if the custom API requires authentication.
API key. The API key. This parameter is required if authentication is enabled.
Azure OpenAI service URI. The URI of your custom API. This parameter is required.
Context window (tokens). The maximum number of tokens allowed by your custom model. This parameter is required.
Max Output Tokens. The maximum number of tokens that the model is allowed to generate in a single response. This parameter is required.
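Before pointing either custom mode at a real service, it can help to smoke-test the settings against a local endpoint. The following is a hypothetical minimal mock of an OpenAI-compatible custom API, written with Flask; the path, port, and echoed reply are assumptions, not anything Denodo ships:

```python
# Hypothetical mock of an OpenAI-compatible chat completions endpoint.
# It returns exactly the response fields the Denodo Assistant requires.
import time

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/v1/chat/completions")
def chat_completions():
    body = request.get_json()
    last_message = body["messages"][-1]["content"]
    return jsonify({
        "id": "chatcmpl-mock",
        "object": "chat.completion",
        "created": int(time.time()),
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": f"echo: {last_message}"},
            "finish_reason": "stop",
        }],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    })

if __name__ == "__main__":
    # Point the OpenAI URI parameter at this server; the exact path the
    # Denodo Assistant appends is an assumption of this sketch.
    app.run(port=8000)
```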
HTTP Proxy Configuration¶
By default, the Denodo Assistant’s requests are routed through a proxy if the Virtual DataPort server proxy is enabled in the section OTHERS > HTTP Proxy of Server Configuration. To disable this behavior, set the following property to false:
com.denodo.vdb.llm.integration.proxyEnabled
If the configured proxy requires Basic authorization, add this property to the JVM properties of the Virtual DataPort server and the web container:
-Djdk.http.auth.tunneling.disabledSchemes=""
For more information about configuring the JVM properties, see the Denodo Platform Configuration section.
General¶
In Denodo Assistant > General, you configure the Denodo Assistant.
![Denodo Assistant configuration](/docs/html/img/9.1/denodo_assistant_configuration.png)
Denodo Assistant configuration parameters¶
Enable Denodo Assistant features that use a large language model. This enables or disables the Denodo Assistant’s features that use a large language model.
Note
Enabling the Assistant allows the server to send metadata to the configured LLM. If data usage is enabled, the data of the views is also processed and sent to the LLM to improve the results. Make sure your organization is aware of this and agrees to it.
Enabling this feature incurs costs for the requests made to the external LLM API. Make sure your organization is aware of this cost before enabling this option.
Language Options. The language in which the Denodo Assistant will answer. You can choose either the server’s i18n language (see the section OTHERS > Server i18n of Server configuration) or an alternative language.
Use a different language for column name generation. When enabled, the Assistant generates the column names in a different language from the one selected above. See the Suggest Field Names section for more information.
Use sample data. When enabled, the Assistant sends actual data of the views (i.e., rows that the views return) to the LLM. This can help improve the Assistant’s responses. For this option to work, make sure the Cache Module is enabled. When you enable this option, enter the Sample data size: the number of rows retrieved for sampling. The sampling process builds the sample by selecting a subset of the result rows.
Note
Enabling Use sample data allows the Assistant to send actual data of the view(s) used. Make sure your organization is aware of this and agrees to it.