Denodo AI SDK - User Manual


Introduction

The Denodo AI SDK works with the Denodo Platform to help simplify and accelerate the development of AI chatbots and agents. The SDK simplifies the process of grounding AI agents in enterprise data using the Retrieval-Augmented Generation (RAG) pattern. RAG applications combine the power of retrieval-based and generation-based models to provide more accurate and contextually relevant responses. The AI SDK streamlines the development process with multiple configurable LLMs and vector databases, enabling you to quickly create AI applications.

Benefits of Using Denodo’s SDK for RAG Applications

  • Rapid Development: Build RAG applications quickly and efficiently, reducing time-to-market.
  • Enhanced Accuracy: Combine retrieval and generation models to improve the accuracy and relevance of responses.
  • Flexibility: Easily integrate with various data sources and adapt to different use cases.
  • Improved User Engagement: Provide users with more precise and context-aware information, enhancing their overall experience.

To showcase the AI SDK’s capabilities, a sample chatbot application is included.

How to obtain the Denodo AI SDK

Downloading the source code from GitHub

The AI SDK and sample chatbot source code are available in the following public GitHub repository.

Downloading the Denodo AI SDK container image

If you want to use the Denodo AI SDK container image with a Denodo Platform installation, you can obtain the AI SDK container image from DenodoConnects via:

  • Support site. You will find it in the DenodoConnects section.
  • Harbor:
      • If you have an Enterprise Plus license you can find it here.
      • If you have an Express license you can find it here.

NOTE: If downloading the image directly from Harbor, you will also need to obtain the example configuration files from GitHub: sdk_config_example.env and chatbot_config_example.env. If you obtain the AI SDK container image from the support site, these files are already included in the downloaded .zip file.

If you want to use the AI SDK container image together with the Denodo Express container image you can follow this tutorial.

Installing Denodo Express

The AI SDK is included as part of the Denodo Express installer.

Installing the environment

NOTE: This is only necessary if you obtained the AI SDK from GitHub or as part of the Denodo Express installer. The container from DenodoConnects has the Python environment already set up.

To execute the AI SDK, you will need the following environment setup:

  • Python 3.10/3.11/3.12
  • A Denodo instance, version 9.0.5 or higher (either an Express or Enterprise Plus license is required), with cache enabled.

Once you have installed the prerequisites, create the Python virtual environment and install the required dependencies. The steps are the same for both Windows and Linux, unless otherwise specified.

First, we have to move to the AI SDK root directory:

cd [path to the Denodo AI SDK]

Then, we create the Python virtual environment.

python -m venv venv_denodo

This will create a venv_denodo folder inside our AI SDK folder where all the specific dependencies for this project will be installed. We now need to activate the virtual environment.

Activating the virtual environment differs depending on the OS.

Windows

.\venv_denodo\Scripts\activate

Linux

source venv_denodo/bin/activate

Once inside the virtual environment, we can install the required dependencies. Use requirements_windows.txt on Windows and requirements_linux.txt on Linux.

python -m pip install -r requirements_{windows|linux}.txt

Basic configuration

Virtual DataPort and Data Catalog

The AI SDK must be able to connect to the Data Catalog to retrieve the data and execute queries against it. For this reason, the Virtual DataPort server and the Data Catalog of your Denodo instance must always be running while executing the AI SDK.

NOTE: The AI SDK includes sample banking demo data to help you get started. If you plan on using it, you can skip the following Virtual DataPort and Data Catalog configuration steps and jump to the AI SDK configuration section.

Enable cache in Virtual DataPort

Cache is needed to retrieve data samples from each view. You can disable this (and remove the need to have cache enabled) by setting examples_per_table = 0 in the getMetadata endpoint.
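For example, once the AI SDK is running, a hedged sketch of such a call using the Python requests library (HTTP Basic authentication, the default port, and the GET method are assumptions; see the getMetadata endpoint reference later in this manual):

import requests
from requests.auth import HTTPBasicAuth

# Hypothetical call: setting examples_per_table to 0 skips sample-data
# retrieval, so cache does not need to be enabled.
response = requests.get(
    "http://localhost:8008/getMetadata",
    params={"vdp_database_names": "samples_bank", "examples_per_table": 0},
    auth=HTTPBasicAuth("metadata_sync_user", "password"),  # illustrative credentials
)
print(response.json())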

Choose the databases you want to expose to the AI SDK

In the AI SDK configuration file you will be able to select which databases are exposed to the AI SDK through the VDB_NAMES variable (more on the AI SDK’s configuration later on).

By default, VDB_NAMES points to ‘samples_bank’, which is the name of the demo database that comes with the AI SDK. You will learn how to load this demo data later on in this manual.

NOTE: Remember to synchronize the Data Catalog with VDP before executing the AI SDK.

Configure users with the correct permissions

If you plan on using your own data, we recommend you create two users with specific permissions to work with:

  • Metadata Sync User. This is the user account used to run the synchronization to vectorize the Denodo metadata so it is available for the LLM. This user should have the privileges to access all the assets that will be available via the Denodo AI SDK. This will typically be a service account used for this particular purpose. See the Creating Users section of the Virtual DataPort Administration Guide.

  • End User(s). These are the users that will be interacting with the AI SDK to run their questions over the data and metadata exposed in Denodo.

The same access privileges and data protections applied to users in Denodo will apply when end users interact via the AI SDK.

How to configure users in the AI SDK

  • Metadata Sync User. You will need to configure this user through the AI SDK advanced configuration variables: either DATA_CATALOG_METADATA_USER/PWD for HTTP Basic authentication, or DATA_CATALOG_METADATA_OAUTH for Bearer token authentication.
  • End User(s). This is passed directly through the authentication method of your choice (HTTP Basic or OAuth) when calling the endpoint. The credentials sent will be passed to the Denodo instance to authenticate.

How to grant privileges for users in Denodo

  1. To set user privileges in Denodo Design Studio, go to: Administration > User Management
  2. Select Edit Privileges

Metadata Sync User privileges

The vectorization process requires privileges to obtain both metadata and some sample data from Denodo. Therefore, the privileges required for this user are:

  • Connection privilege in the admin database within Denodo.
  • Create view privilege in the admin database within Denodo.
  • Connect privilege in your target database within Denodo.
  • Metadata privilege in your target database within Denodo.
  • Execute privilege in your target database within Denodo.

Including sample data in the vectorization process is highly recommended, because it is shared with the LLM so it can better understand your schema. If you do not want to include it, please check this guide.

End User privileges

For a user to execute and interact with the AI SDK, they must have:

  • Connection privilege in the admin database within Denodo.
  • Connect privilege in your target database within Denodo.
  • Metadata privilege in your target database within Denodo.
  • Execute privilege in your target database within Denodo.

AI SDK

The AI SDK is configured via a .env file located in api/utils/sdk_config.env. With this file you will be able to configure many parameters related to the AI SDK’s behavior.

NOTE: If using the container images, you will find sdk_config_example.env in the .zip (if you downloaded the container image from the support site) or you will have to download it from GitHub here (if you obtained the image through Harbor).

There is a sample file named sdk_config_example.env in api/utils/ that you can rename to sdk_config.env. This file is already populated with the default values, so the only parameters you have to modify are:

  • CHAT_PROVIDER
  • CHAT_MODEL
  • SQL_GENERATION_PROVIDER
  • SQL_GENERATION_MODEL
  • EMBEDDINGS_PROVIDER
  • EMBEDDINGS_MODEL
  • VDB_NAMES (optional). VDB_NAMES is the comma-separated list of Denodo databases that you want to expose to the AI SDK. It is set to samples_bank by default, so if you are using the sample banking dataset, you will not need to change it. If you want to expose your own database(s), please modify this parameter.

For example, to use OpenAI with the model gpt-4o, we would fill out “openai” in the PROVIDER variables and the model name “gpt-4o” in the MODEL variables.
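A hedged sketch of that section of the file (variable names from the list above; the embeddings model value is an assumption for an OpenAI setup):

CHAT_PROVIDER=openai
CHAT_MODEL=gpt-4o
SQL_GENERATION_PROVIDER=openai
SQL_GENERATION_MODEL=gpt-4o
EMBEDDINGS_PROVIDER=openai
EMBEDDINGS_MODEL=text-embedding-3-large
VDB_NAMES=samples_bank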

Then, we would need to configure the credentials to access this model. For most providers this means setting an API key, but the exact variables depend on the provider.

Please check the Advanced configuration section to look for your provider of choice and how to configure it.

NOTE: For first-time use, it is recommended to have both SQL_GENERATION and CHAT be the same provider/model combination. Using different models is part of the advanced usage of the AI SDK.

Sample chatbot

The sample chatbot, on the other hand, is configured via another .env file located in sample_chatbot/chatbot_config.env.

NOTE: If using the container images, you will find chatbot_config_example.env in the .zip (if you downloaded the container image from the support site) or you will have to download it from GitHub here (if you obtained the image through Harbor).

Again, for the basic configuration, simply rename the sample file chatbot_config_example.env to chatbot_config.env; it already has the default parameters populated. Then, the only thing you need to configure is the LLM/embeddings used by the chatbot, through the following parameters:

  • CHATBOT_LLM_PROVIDER
  • CHATBOT_LLM_MODEL
  • CHATBOT_EMBEDDINGS_PROVIDER
  • CHATBOT_EMBEDDINGS_MODEL
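As a further hedged illustration (the values assume the same OpenAI setup as in the AI SDK section above, not template defaults):

CHATBOT_LLM_PROVIDER=openai
CHATBOT_LLM_MODEL=gpt-4o
CHATBOT_EMBEDDINGS_PROVIDER=openai
CHATBOT_EMBEDDINGS_MODEL=text-embedding-3-large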

Execution

From source code

NOTE: The AI SDK includes a sample chatbot. You can run both at once with the command python run.py both.

Sample chatbot

With the sample chatbot, you’ll be able to test the AI SDK’s functionality with a simple-to-use UI.

To execute the sample chatbot, you must have the API running (or run both, as per the command above) and have completed the basic sample chatbot configuration.

With the sample data

The sample chatbot comes with a sample banking dataset that you can load onto your Denodo instance to get a feel for the AI SDK.

NOTE: If you have downloaded the AI SDK from GitHub, it is necessary to move the sample files in <AI_SDK_HOME>/sample_chatbot/sample_data/structured/ to <DENODO_HOME>/samples/ai-sdk/sample_chatbot/sample_data/structured/, as the demo data loader expects them to be in that filepath. If you are using the Denodo Express installer, the CSVs will already be in that folder.

To load the sample data, run the following command:

python run.py both --load-demo --host localhost --grpc-port 9994 --dc-port 9090 --server-id 1

NOTE: The --load-demo flag only needs to be used the first time, to load the demo data into your Denodo instance. After that, you can run python run.py both directly.

This command assumes your Denodo instance is using host localhost, GRPC port 9994, and Tomcat port 9090 (the default installation values). It also assumes that you want to create the sample data in the VDP server registered with identifier 1 in the Data Catalog (the VDP server registered by default during the installation). If this is not the case, you can modify the parameters to match your current Denodo configuration.
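For example, for a hypothetical instance where the Data Catalog runs on port 9091 and the target VDP server has identifier 2 (the host name below is an illustrative placeholder), the command would become:

python run.py both --load-demo --host denodo.example.com --grpc-port 9994 --dc-port 9091 --server-id 2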

Once the command has executed successfully, a ‘samples_bank’ database will be created in your VDP server, the cache of the VDP server will be enabled, the Data Catalog will be synchronized with the VDP server, the AI SDK will be launched (on port 8008 by default) and the chatbot will be accessible at the following URL: http://localhost:9992

With the chatbot now running, you will find the logs in the logs/sample_chatbot.log file.

You will also be able to leverage the sample chatbot’s decision-making by uploading a UTF-8 encoded CSV file that contains unstructured data. To do this, simply click on the green button at the top-right of the header and upload your CSV file. You must include a description of the contents of the CSV file so that the LLM can correctly decide when it needs to search in the CSV file.

In the case of the banking dataset, there is a sample banking reviews CSV file located in <AI_SDK_HOME>/sample_chatbot/sample_data/unstructured/ that you can upload.

In the description field, you can write “Reviews by customers”.

The chatbot will then vectorize the CSV file and decide whether to query the Denodo instance or the unstructured data to answer your questions.

Without the sample data

It is highly recommended to begin experimenting with the AI SDK using the provided sample banking dataset. However, if you have already done so and wish to experiment with your own dataset, we will now explain how to do it.

First, use either python run.py both to launch both the AI SDK and the sample chatbot or python run.py chatbot if the AI SDK is already running.

Then, the vector store will be automatically populated with the views of the databases configured in the VDB_NAMES variable, the AI SDK will be launched (on port 8008 by default) and the chatbot will be accessible at the following URL: http://localhost:9992.

Please note that this process may take a few minutes as the data is retrieved, sample data is generated and each view is vectorized.

NOTE: See the advanced configuration section to learn how to start the chatbot without populating the vector store (for example, because the chatbot has already been launched before and the metadata/data has not changed).

With the chatbot now running, you will find the logs in the logs/sample_chatbot.log file.

You will also be able to leverage the sample chatbot’s decision-making by uploading a UTF-8 encoded CSV file that contains unstructured data. To do this, simply click on the green button at the top-right of the header and upload your CSV file. You must include a description of the contents of the CSV file so that the LLM can correctly decide when it needs to search in the CSV file.

For example, for the sample banking dataset, we used “Reviews by customers”.

Denodo AI SDK

To execute the AI SDK standalone, you will need to have configured all the basic configuration parameters for the AI SDK in the api/utils/sdk_config.env file.

Once that is done, you can run python run.py api.

  • The Swagger documentation for the AI SDK API will be available at {API_HOST}/docs.
  • The AI SDK API logs can be found in the file logs/api.log.

There is a sample api/example.py file that showcases how to call the AI SDK endpoints.

From container

The image contains both the AI SDK and the sample chatbot. The only difference in execution is deciding whether to load the sample banking dataset into your Denodo instance or not. Before diving into the execution from container, make sure that:

  • You filled out the configuration files (sdk_config.env and chatbot_config.env) and moved them to a simple-to-remember filepath. For this example, we will use C:\share\sdk_config.env and C:\share\chatbot_config.env.
  • Data Catalog is running and synced with VDP.
  • Optional. If you want to use the sample banking dataset, you must first move it to your Denodo instance’s directory. To do so, it is necessary to move the sample files (the sample_data folder in the .zip you downloaded from the support site, or the sample_data folder available through GitHub here, if you are downloading from Harbor) to <DENODO_HOME>/samples/ai-sdk/sample_chatbot/sample_data/structured/, as the demo data loader expects them to be in that filepath.

NOTE: For the following commands, remember that if using the container image available in the .zip file that you obtained from the support site, you will need to load the Docker image instead of pulling from Harbor.

With the sample data

The sample chatbot comes with a sample banking dataset that you can load onto your Denodo instance to get a feel for the AI SDK.

NOTE: Since you are using the AI SDK container, it is necessary to move the sample files. However, if you are using the Denodo Express installer, the CSVs will already be in that folder.

To load the sample data, you first need:

  • Data Catalog host (from the container’s perspective, in this case we’ll use 192.168.1.1 as an example)
  • Data Catalog port (default: 9090)
  • VDP GRPC port (default: 9994)
  • VDP server id (default: 1)

Then, run (or modify if values are different) the following command:

docker run --rm -ti -v /mnt/c/share/chatbot_config.env:/opt/ai-sdk/sample_chatbot/chatbot_config.env -v /mnt/c/share/sdk_config.env:/opt/ai-sdk/api/utils/sdk_config.env -p 8008:8008 -p 9992:9992 harbor.open.denodo.com/denodo-connects-9/images/ai-sdk:latest bash -c "python run.py both --load-demo --no-logs --host 192.168.1.1 --grpc-port 9994 --dc-port 9090 --server-id 1"

NOTE: The --load-demo flag only needs to be used the first time, to load the demo data into your Denodo instance. After that, you can run the container as shown in the next section.

Please review the python command being executed after the image name. This command assumes your Denodo instance is reachable from the container at host 192.168.1.1 (the example value above), with GRPC port 9994 and Tomcat port 9090 (the default installation values). It also assumes that you want to create the sample data in the VDP server registered with identifier 1 in the Data Catalog (the VDP server registered by default during the installation). If this is not the case, you can modify the parameters to match your current Denodo configuration.

Once the command has executed successfully, a ‘samples_bank’ database will be created in your VDP server, the cache of the VDP server will be enabled, the Data Catalog will be synchronized with the VDP server, the AI SDK will be launched (on port 8008 by default) and the chatbot will be accessible at the following URL: http://localhost:9992

You will also be able to leverage the sample chatbot’s decision-making by uploading a UTF-8 encoded CSV file that contains unstructured data. To do this, simply click on the green button at the top-right of the header and upload your CSV file. You must include a description of the contents of the CSV file so that the LLM can correctly decide when it needs to search in the CSV file.

In the case of the banking dataset, there is a sample banking reviews CSV file (in the .zip you downloaded from the support site or available through Github) that you can upload.

In the description field, you can write “Reviews by customers”.

The chatbot will then vectorize the CSV file and decide whether to query the Denodo instance or the unstructured data to answer your questions.

You will also find the Swagger documentation for the AI SDK API at {API_HOST}/docs.

Without the sample data

It is highly recommended to begin experimenting with the AI SDK using the provided sample banking dataset. However, if you have already done so and wish to experiment with your own dataset, we will now explain how to do it.

To do so, run:

docker run --rm -ti -v /mnt/c/share/chatbot_config.env:/opt/ai-sdk/sample_chatbot/chatbot_config.env -v /mnt/c/share/sdk_config.env:/opt/ai-sdk/api/utils/sdk_config.env -p 8008:8008 -p 9992:9992 harbor.open.denodo.com/denodo-connects-9/images/ai-sdk:latest

Then, the vector store will be automatically populated with the views of the databases configured in the VDB_NAMES variable, the AI SDK will be launched (on port 8008 by default) and the chatbot will be accessible at the following URL: http://localhost:9992.

Please note that this process may take a few minutes as the data is retrieved, sample data is generated and each view is vectorized.

You will also be able to leverage the sample chatbot’s decision-making by uploading a UTF-8 encoded CSV file that contains unstructured data. To do this, simply click on the green button at the top-right of the chatbot’s header and upload your CSV file. You must include a description of the contents of the CSV file so that the LLM can correctly decide when it needs to search in the CSV file.

For example, for the sample banking dataset, we used “Reviews by customers”.

You will also find the Swagger documentation for the AI SDK API at {API_HOST}/docs.

Optimal performance

So far we have covered the basics of getting the AI SDK working, but there are a few factors to consider in order to extract maximum performance from it. After all, the AI SDK is a chain of many different components, and it will only be as powerful as its weakest link.

For this reason, please ensure that all of these factors are addressed properly:

  • Metadata. The AI SDK will pass everything it can find in the schema of a view to the LLM. One of the advantages of using Denodo is that it can enrich a view’s schema in ways that can help the LLM better understand the purpose of said views. It’s important to fill out view and column descriptions, associate related views and provide logical names. Make use of Denodo’s features to enhance the LLM’s understanding of your data.
  • LLM. The LLM is the brains of the AI SDK. The better the LLM, the better the AI SDK’s performance. Striking a balance between speed and accuracy is key. Please check our recommended providers and model names in the AI SDK’s README file.
  • Embeddings. Another key component of the operation is the embeddings model. A more precise embeddings model will help the LLM choose the correct and relevant tables to answer your questions accurately. Again, you will find in the project’s README a list of recommended and tested providers and model names.
  • Advanced configuration. Custom instructions, set views, mode setting, graph generation… These are only a select few of the AI SDK’s capabilities that you will be able to fine-tune and leverage to obtain the most accurate results. We will review all of these characteristics in the following section.

Advanced configuration

AI SDK

These configuration settings are expected in the api/utils/sdk_config.env file. Please check out the api/utils/sdk_config_example.env template for more information.

  • AI_SDK_HOST. The host the AI SDK will run from.
  • AI_SDK_PORT. The port the AI SDK will execute from.
  • AI_SDK_SSL_CERT. If you want to use SSL in the AI SDK, set the relative path from root AI SDK folder to the certificate file here.
  • AI_SDK_SSL_KEY. If you want to use SSL in the AI SDK, set the relative path from root AI SDK folder to the key file here.
  • VDB_NAMES. The list of database names you want to expose in the vector store, separated by commas.
  • DATA_CATALOG_URL. The Data Catalog URL. Defaults to: http://localhost:9090/denodo-data-catalog/
  • DATA_CATALOG_AUTH_TYPE can take two values: http_basic or oauth.
  • DATA_CATALOG_METADATA_USER. The username of the user with metadata privileges to export the metadata and vectorize it.
  • DATA_CATALOG_METADATA_PWD. The password of the user with metadata privileges to export the metadata and vectorize it.
  • DATA_CATALOG_METADATA_OAUTH. If using OAuth, you can set the Bearer token here.
  • DATA_CATALOG_SERVER_ID. The identifier of the Virtual DataPort server you want to work with in the Data Catalog. Defaults to 1 (which is the identifier assigned to the default VDP server registered in Data Catalog during installation).
  • USER_PERMISSIONS. Whether to activate user permissions and filter exposed views based on the user executing the endpoint. Can be either 1 (true) or 0 (false).
  • DATA_CATALOG_VERIFY_SSL. If using SSL in the Data Catalog with an unsafe certificate, you can disable verification here.
  • CUSTOM_INSTRUCTIONS. Include domain-specific knowledge the LLM should use when selecting tables and generating queries.
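A hedged sketch of a few of these settings together (hostnames, credentials, and database names below are placeholder assumptions, not template defaults):

AI_SDK_HOST=0.0.0.0
AI_SDK_PORT=8008
VDB_NAMES=samples_bank
DATA_CATALOG_URL=http://localhost:9090/denodo-data-catalog/
DATA_CATALOG_AUTH_TYPE=http_basic
DATA_CATALOG_METADATA_USER=metadata_sync_user
DATA_CATALOG_METADATA_PWD=password
DATA_CATALOG_SERVER_ID=1
USER_PERMISSIONS=1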

Vector Stores

The AI SDK is compatible with 3 different vector store providers through their Langchain compatibility libraries:

  • Chroma
  • PGVector
  • OpenSearch

Chroma

Chroma will be used in persistent mode, meaning all changes will be logged locally. No configuration is needed.

PGVector

PGVector requires the following variable be set in the configuration file:

  • PGVECTOR_CONNECTION_STRING. Set it using the format:

postgresql+psycopg://{user}:{pwd}@{host}:{port}/{db_name}

        For example, for the demo PGVector Langchain instance, the value of this variable would be:

postgresql+psycopg://langchain:langchain@localhost:6024/langchain

OpenSearch

OpenSearch requires the following variables be set in the configuration file:

  • OPENSEARCH_URL. The URL of the OpenSearch instance. Defaults to http://localhost:9200
  • OPENSEARCH_USERNAME. The username of the OpenSearch instance. Defaults to admin.
  • OPENSEARCH_PASSWORD. The password of the OpenSearch instance. Defaults to admin.

LLMs/Embeddings

The AI SDK is compatible with the following LLM/Embeddings providers through their Langchain compatibility libraries.

NOTE: The model used in the _MODEL variable is the same model ID you would use when making an API call to that model provider. For example, claude-3-5-sonnet-20240620 in Anthropic and anthropic.claude-3-5-sonnet-20240620-v1:0 in AWS Bedrock refer to the same model, Claude 3.5 Sonnet v1. You should use the provider-specific model ID in the configuration file.

OpenAI (LLM/Embeddings)

_PROVIDER value: openai

OpenAI requires the following variables be set in the configuration file:

  • OPENAI_API_KEY. Defines the API key for your OpenAI account.
  • OPENAI_PROXY. Proxy to use when contacting the OpenAI servers. Set it in the format http://{user}:{pwd}@{host}:{port}.
  • OPENAI_ORG_ID. OpenAI organization ID. If not set it will use the default one set in your OpenAI account.

AzureOpenAI (LLM/Embeddings)

_PROVIDER value: azureopenai

For AzureOpenAI, set the deployment name in the CHAT_MODEL/SQL_GENERATION_MODEL variables. The model (deployment) name will be appended to the Azure OpenAI endpoint, like this: /deployments/{CHAT_MODEL}

AzureOpenAI requires the following variables be set in the configuration file:

  • AZURE_OPENAI_ENDPOINT. Defines the connection string to your AzureOpenAI instance. For example, https://example-resource.openai.azure.com/
  • AZURE_API_VERSION. Defines the API version to use with your AzureOpenAI instance. For example, 2024-06-01.
  • AZURE_OPENAI_API_KEY. Defines the API key for your AzureOpenAI account.
  • AZURE_OPENAI_PROXY. AzureOpenAI proxy to use. Set it in the format http://{user}:{pwd}@{host}:{port}.
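Putting the AzureOpenAI variables together, a hedged sketch (the deployment name, endpoint, and key below are placeholder assumptions):

CHAT_PROVIDER=azureopenai
CHAT_MODEL=my-gpt-4o-deployment
SQL_GENERATION_PROVIDER=azureopenai
SQL_GENERATION_MODEL=my-gpt-4o-deployment
AZURE_OPENAI_ENDPOINT=https://example-resource.openai.azure.com/
AZURE_API_VERSION=2024-06-01
AZURE_OPENAI_API_KEY=your-api-key

With these values, calls would target the /deployments/my-gpt-4o-deployment path of the endpoint, per the note above.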

Google (LLM/Embeddings)

_PROVIDER value: google

Google Vertex AI requires a service account JSON file; the AI SDK requires the following variable be set in the configuration file:

  • GOOGLE_APPLICATION_CREDENTIALS. Defines the path to the JSON storing your Google Cloud service account.

AWS Bedrock (LLM/Embeddings)

_PROVIDER value: bedrock

AWS Bedrock requires AWS credentials to be used with the AI SDK. Once those are obtained, the following variables must be set in the configuration file:

  • AWS_REGION. The region where you want to execute the model in. For example, us-east-1.
  • AWS_PROFILE_NAME.
  • AWS_ROLE_ARN.
  • AWS_ACCESS_KEY_ID.
  • AWS_SECRET_ACCESS_KEY.

Groq (LLM)

_PROVIDER value: groq

Groq requires the following variables be set in the configuration file:

  • GROQ_API_KEY. The API key for the Groq provider.

NVIDIA NIM (LLM/Embeddings)

_PROVIDER value: nvidia

NVIDIA NIM requires the following variables be set in the configuration file:

  • NVIDIA_API_KEY. The API Key to your NVIDIA NIM instance.
  • NVIDIA_BASE_URL (Optional). If self-hosting NVIDIA NIM, set the base url here, like "http://localhost:8000/v1"

Mistral (LLM/Embeddings)

_PROVIDER value: mistral

Mistral requires the following variables be set in the configuration file:

  • MISTRAL_API_KEY. The API Key for your Mistral service.

Ollama (LLM/Embeddings)

_PROVIDER value: ollama

There is no specific configuration for Ollama, but:

  • You must have already downloaded the model through Ollama.
  • You must use the same model ID in _MODEL as the one you use in Ollama.

There's no need to execute 'ollama run <model-id>' before launching the AI SDK; having the model available locally is enough.
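For example, a hedged sketch using a locally downloaded model (the model ID llama3 is an assumption):

ollama pull llama3

and then, in the configuration file:

CHAT_PROVIDER=ollama
CHAT_MODEL=llama3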

Sample chatbot

These configuration settings are expected in the sample_chatbot/chatbot_config.env file. Please check out the sample_chatbot/chatbot_config_example.env template for more information.

  • CHATBOT_HOST. Here you can specify the host from where the chatbot will launch.
  • CHATBOT_PORT. The chatbot’s port.
  • CHATBOT_OVERWRITE_ON_LOAD. Whether to overwrite the vector store every time the chatbot is launched. Takes either 1 (true) or 0 (false).
  • CHATBOT_EXAMPLES_PER_TABLE. Defaults to 3; this is the number of sample rows extracted for each view and passed to the LLM so it understands the data better.
  • CHATBOT_LLM_PROVIDER. The LLM provider for the chatbot.
  • CHATBOT_LLM_MODEL. The LLM model id for the specified provider for the chatbot.
  • CHATBOT_EMBEDDINGS_PROVIDER. The embeddings provider for the chatbot.
  • CHATBOT_EMBEDDINGS_MODEL. The model id for the embeddings model for the specified provider.
  • AI_SDK_HOST. The AI SDK base URL. Defaults to: http://localhost:8008
  • AI_SDK_USERNAME. The username of the user who will be using the chatbot. Can have limited privileges to access only specific views.
  • AI_SDK_PASSWORD. The password of the user who will be using the chatbot. Can have limited privileges to access only specific views.

Answer Question Endpoints

/answerQuestion

This is the main endpoint for processing natural language questions about your data. It automatically determines whether to use SQL or metadata to answer the question. If you already know whether you want to query data or metadata directly, you can either set the mode parameter in this endpoint to “data”/“metadata” or call the answerDataQuestion/answerMetadataQuestion endpoints directly.

Parameters

  • question (string, required): The natural language question to be answered
  • plot (boolean, default: false): Whether to generate a plot with the answer
  • plot_details (string, default: ''): Additional plotting instructions
  • embeddings_provider (string): Provider for embeddings generation (defaults to env var)
  • embeddings_model (string): Model to use for embeddings (defaults to env var)
  • vector_store_provider (string): Vector store to use (defaults to env var)
  • sql_gen_provider (string): Provider for SQL generation (defaults to env var)
  • sql_gen_model (string): Model for SQL generation (defaults to env var)
  • chat_provider (string): Provider for chat completion (defaults to env var)
  • chat_model (string): Model for chat completion (defaults to env var)
  • vdp_database_names (string): Comma-separated list of databases to search in
  • use_views (string, default: ''): Specific views to use for the query. For example, “bank.loans, bank.customers”
  • expand_set_views (boolean, default: true): Whether to expand the previously set views. If true, then the context for the question will be “bank.loans, bank.customers” + the result of the vector store search. If set to false, it will be only those 2 views (or whatever views are set in use_views).
  • custom_instructions (string): Additional instructions for the LLM
  • markdown_response (boolean, default: true): Format response in markdown
  • vector_search_k (integer, default: 5): Number of similar tables to retrieve
  • mode (string, default: "default"): One of "default", "data", or "metadata"
  • disclaimer (boolean, default: true): Whether to include disclaimers
  • verbose (boolean, default: true): Include detailed information in response

/answerDataQuestion

Identical to /answerQuestion but specifically for questions that require querying data (forces data mode).

It is faster than using default mode on answerQuestion because it doesn’t have to decide if it needs data or metadata.

/answerMetadataQuestion

Identical to /answerQuestion but specifically for questions that require querying metadata (forces metadata mode).

It is faster than using default mode on answerQuestion because it doesn’t have to decide if it needs data or metadata.

/answerQuestionUsingViews

Similar to /answerQuestion but accepts pre-selected views instead of performing vector search.

This endpoint is ONLY intended for clients that want to implement their own vector store provider and not one supported by the AI SDK. To force the LLM to consider a set of views, please review the use_views parameter in answerQuestion.

Additional/Modified Parameters:

  • vector_search_tables (list of strings, required): List of pre-selected views to use

Does not use vdp_database_names, use_views, or expand_set_views.

Streaming Endpoints

/streamAnswerQuestion

Streaming version of /answerQuestion that returns the answer as a text stream.

Parameters are identical to /answerQuestion.

/streamAnswerQuestionUsingViews

Streaming version of /answerQuestionUsingViews that returns the answer as a text stream.

Parameters are identical to /answerQuestionUsingViews.
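As a hedged sketch of consuming a streaming endpoint with the Python requests library (the GET method, default port, and HTTP Basic credentials are assumptions; confirm the exact schema in the Swagger docs):

import requests
from requests.auth import HTTPBasicAuth

# Print the answer as it is generated, instead of waiting for the full response.
with requests.get(
    "http://localhost:8008/streamAnswerQuestion",
    params={"question": "How many loans do we have?"},
    auth=HTTPBasicAuth("user", "password"),  # illustrative credentials
    stream=True,
) as response:
    for chunk in response.iter_content(chunk_size=None, decode_unicode=True):
        print(chunk, end="", flush=True)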

Metadata and Search Endpoints

/getMetadata

Retrieves metadata from specified VDP databases and optionally stores it in a vector database.

Parameters

  • vdp_database_names (string): Comma-separated list of databases
  • embeddings_provider (string): Provider for embeddings generation
  • embeddings_model (string): Model to use for embeddings
  • vector_store_provider (string): Vector store to use
  • examples_per_table (integer, default: 3): Number of example rows per table
  • descriptions (boolean, default: true): Include table descriptions
  • associations (boolean, default: true): Include table associations
  • insert (boolean, default: false): Store metadata in vector store
  • overwrite (boolean, default: false): Overwrite existing vector store data

/similaritySearch

Performs similarity search on previously stored metadata.

Parameters

  • query (string, required): Search query
  • vdp_database_names (string): Comma-separated list of databases to search
  • embeddings_provider (string): Provider for embeddings generation
  • embeddings_model (string): Model to use for embeddings
  • vector_store_provider (string): Vector store to use
  • n_results (integer, default: 5): Number of results to return
  • scores (boolean, default: false): Include similarity scores

/getConcepts

Extracts concepts from a question and returns their embeddings.

Parameters

  • question (string, required): Question to extract concepts from
  • chat_provider (string): Provider for concept extraction
  • chat_model (string): Model for concept extraction
  • embeddings_provider (string): Provider for embeddings generation
  • embeddings_model (string): Model to use for embeddings

Authentication

All endpoints support two authentication methods:

  • HTTP Basic Authentication
  • OAuth Bearer Token Authentication

The authentication method is determined by the DATA_CATALOG_AUTH_TYPE environment variable. It can be set to either “http_basic” or “oauth”.
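As a hedged illustration with the Python requests library (the URL, method, and credentials are placeholder assumptions), the two methods look like this:

import requests
from requests.auth import HTTPBasicAuth

url = "http://localhost:8008/answerQuestion"
params = {"question": "How many loans do we have?"}

# HTTP Basic Authentication (DATA_CATALOG_AUTH_TYPE=http_basic)
r = requests.get(url, params=params, auth=HTTPBasicAuth("user", "password"))

# OAuth Bearer Token Authentication (DATA_CATALOG_AUTH_TYPE=oauth)
r = requests.get(url, params=params, headers={"Authorization": "Bearer <token>"})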

Environment Variables

To avoid including the same parameters in every call, you can set the following environment variables; the API will default to them when a parameter is not specified.

  • EMBEDDINGS_PROVIDER
  • EMBEDDINGS_MODEL
  • VECTOR_STORE
  • SQL_GENERATION_PROVIDER
  • SQL_GENERATION_MODEL
  • CHAT_PROVIDER
  • CHAT_MODEL
  • VDB_NAMES
  • CUSTOM_INSTRUCTIONS

Response Format

Most endpoints return a JSON response containing:

  • answer: The generated answer
  • sql_query: The SQL query used (if applicable)
  • query_explanation: Explanation of the query
  • tokens: Token usage information
  • execution_result: Query execution results
  • related_questions: Suggested follow-up questions
  • tables_used: Tables used in the query
  • raw_graph: Graph data in Base64 SVG format (if applicable)
  • Various timing information

NOTE: Streaming endpoints return a text stream containing only the answer.

Examples

Sample chatbot architecture

The sample chatbot uses a custom-coded tool flow. It is custom-coded because our goal is to support all the main LLM providers. Most of them already have their own version of tool (function) calling, but its accuracy and success vary greatly across each provider's library of models.

Our sample chatbot architecture performs near-perfect tool calling across the whole spectrum of large language models, without the need to rely on the provider’s own functionality.

Our sample chatbot currently has 3 tools implemented (the list can be expanded):

  • Database query tool. This tool calls the answerDataQuestion endpoint, with the parameters being set by the chatbot LLM.
  • Metadata query tool. This tool calls the answerMetadataQuestion endpoint, with the parameters being set by the chatbot LLM, same as before.
  • Knowledge base query tool. This tool performs a similarity search on the unstructured CSV uploaded to the chatbot.

The information about the tools the LLM can use to answer a question is appended at the end of each user question. Then,

  1. An initial response is generated by the LLM, calling the specific tool with the necessary parameters using XML tags.
  2. The tool is executed and the output is passed to the LLM, substituting the tool usage information with the tool output. This final response is then shown to the user.
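A minimal sketch of this two-step loop (the XML tag format, helper signatures, and tool registry below are hypothetical, not the chatbot's actual implementation):

import re

def run_tool_loop(llm, tools, question):
    # Step 1: the LLM responds, possibly embedding a tool call in XML tags.
    first = llm(question + "\n\nAvailable tools: " + ", ".join(tools))
    match = re.search(r'<tool name="(\w+)">(.*?)</tool>', first, re.DOTALL)
    if not match:
        return first  # no tool was needed
    tool_name, tool_input = match.group(1), match.group(2)
    # Step 2: execute the tool and substitute the tool call with its output,
    # then let the LLM produce the final response shown to the user.
    tool_output = tools[tool_name](tool_input)
    return llm(first.replace(match.group(0), tool_output))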

Integrating into your own chatbot solution

To integrate the Denodo AI SDK as an agent or tool as part of your own solution, you can call the API endpoints.

NOTE: Before calling any of the endpoints, you must have inserted the VDP database(s) metadata into a compatible vector store. To do this, call the getMetadata endpoint with insert = True and overwrite = True. After the metadata is generated, it is your choice whether to force overwrite on subsequent calls or not.

Here is a simple example using the Python requests library. We are going to call the answerQuestion endpoint with the corresponding parameters. This endpoint takes in a natural language question and returns a natural language response.

First, we load the vector store with the views from database ‘bank’.
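A hedged sketch of this step (HTTP Basic authentication, the default port 8008, and the GET method are assumptions; confirm the exact method and schema in the Swagger docs at {API_HOST}/docs):

import requests
from requests.auth import HTTPBasicAuth

AUTH = HTTPBasicAuth("user", "password")  # illustrative credentials

# Vectorize the metadata of the 'bank' database into the vector store.
r = requests.get(
    "http://localhost:8008/getMetadata",
    params={"vdp_database_names": "bank", "insert": True, "overwrite": True},
    auth=AUTH,
)
r.raise_for_status()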

Once this is done, we can now use answerQuestion (or any other endpoint, for that matter).
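Continuing the sketch above (same assumptions, AUTH as defined there):

response = requests.get(
    "http://localhost:8008/answerQuestion",
    params={"question": "How many loans do we have?"},
    auth=AUTH,
)
result = response.json()
print(result["answer"])
print(result["sql_query"])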

For the question ‘How many loans do we have?’ in our database, we get the following response:
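An abridged illustration of the JSON response shape (field names taken from the Response Format section; values are illustrative and shortened for readability):

{
  "answer": "We currently have 23 loans.",
  "sql_query": "SELECT COUNT(*) FROM bank.loans",
  "tables_used": ["bank.loans"],
  ...
}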

As we can see, the LLM answered that we have 23 loans. To reach this conclusion, it generated the query SELECT COUNT(*) FROM bank.loans and executed it.

You can find more examples in the api/example.py file.

Frequently Asked Questions (FAQ)

Upgrading the AI SDK

When upgrading the AI SDK, please make sure to follow these steps:

  • Back up the AI SDK configuration files (api/utils/sdk_config.env) and the sample chatbot configuration files (sample_chatbot/chatbot_config.env). Also, back up the vector store folder (to avoid regenerating the metadata) in /ai_sdk_vector_store (if it exists).
  • Delete the previous AI SDK’s Python virtual environment or create a new one.
  • Install the new AI SDK’s dependencies in the new Python virtual environment.
  • Bring over the old configuration files and the vector store folder into the new AI SDK directory.

You’re set!

Python and Ubuntu installation common problems

When installing Python in Ubuntu, you might encounter some of the following problems:

  • Python installation failed due to SSL/SQLite dependencies missing from OS. In this case, executing the following command solved the issue:

apt install zlib1g zlib1g-dev libssl-dev libbz2-dev libsqlite3-dev

  • Requirements install failed due to a “Failed to build chroma-hnswlib” error. This sometimes occurs on Ubuntu 22.04 with Python 3.12. Upgrading to Ubuntu 24.04 (or downgrading to Python 3.11) solved the issue.

Chroma requires a more recent SQLite version

SQLite is included in the Python installation. If your Python installation comes bundled with an old SQLite version, you might need to update it.

Please refer to the following troubleshooting guide: https://docs.trychroma.com/troubleshooting#sqlite.

Using the AI SDK with a Denodo version lower than 9.0.5

While it is possible to use the AI SDK with a Denodo version lower than 9.0.5, some functionality may be limited:

  • User permissions. To be able to filter by user permissions, you will need a specific hotfix for any version below 9.0.5.
  • Setting examples_per_table to 0 will cause a bug. If you don’t want to provide the LLMs with example data to better understand your dataset, you can set examples_per_table to 0, but this will cause a crash in versions under 9.0.5. You will need the same hotfix as for user permissions to avoid this.

Installing Python requirements.txt takes longer than 15 minutes

The requirements.txt provided comes with all dependencies set and has been tested on Python 3.10 and 3.11 on both Windows and Linux. However, sometimes Python might take longer than usual to install the dependencies.

If this is the case, please update pip (Python’s package installer).
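For example:

python -m pip install --upgrade pip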

When should I use streaming endpoints vs non-streaming endpoints?

Streaming endpoints are designed to be used with user interfaces (UIs). They allow for real-time updates as the response is generated, providing a more interactive experience for the user.

Non-streaming endpoints are better suited for programmatic access or when you’re fine with receiving the complete response at once.

How can I debug if something goes wrong?

If you encounter issues, you can check the following log files to help diagnose the problem:

1. AI SDK API log file: This log file (located in logs/api.log) contains information about the API server's operations.

2. Sample chatbot log: This log file (located in logs/sample_chatbot.log) provides details about the chatbot's activities.

3. Denodo Data Catalog log: Located in {DENODO_HOME}/logs/vdp-data-catalog/ within the Denodo installation folder, this log can provide insights into Data Catalog-related issues.

Reviewing these log files can help you identify error messages, warnings, or other relevant information to troubleshoot the problem.

How can I use two different custom providers for LLM and embeddings?

NOTE: This requires v0.4 of the AI SDK.

The SDK supports custom providers for the LLM and embeddings as long as they offer an OpenAI-compatible API endpoint. You can configure the custom providers by following these steps:

  1. Set the provider for the LLM models as a custom name (not OpenAI). For example, provider1.
  2. Set the provider for the embeddings as a custom name (not OpenAI). For example, provider2.
  3. Set PROVIDER1_BASE_URL, PROVIDER1_API_KEY, PROVIDER1_PROXY (optional) and PROVIDER2_BASE_URL, PROVIDER2_API_KEY, PROVIDER2_PROXY (optional) in the configuration files.

PROVIDER1 must be the fully capitalized version of the custom provider name set in _PROVIDER.
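A hedged sketch combining these steps (provider names, model IDs, URLs, and keys below are placeholder assumptions):

CHAT_PROVIDER=provider1
CHAT_MODEL=my-llm-model
EMBEDDINGS_PROVIDER=provider2
EMBEDDINGS_MODEL=my-embeddings-model

PROVIDER1_BASE_URL=http://localhost:8000/v1
PROVIDER1_API_KEY=your-llm-api-key
PROVIDER2_BASE_URL=http://localhost:8001/v1
PROVIDER2_API_KEY=your-embeddings-api-key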

Can I use a different provider for each component (Chat LLM/SQL Generation LLM/Embeddings)?

Yes, absolutely. For speed improvements, it might be a good idea to have a “less powerful” LLM for the “verbosity” aspect of the AI SDK and a more powerful model for the actual SQL generation part of the AI SDK.

Can I avoid writing the log to the filesystem?

To avoid generating log files, you can send the output of the AI SDK/chatbot to the console by adding the --no-logs flag to the python run.py command:

python run.py both --no-logs

If I don't want to use sample data, which privileges does my Metadata Sync User need?

For a user to have sufficient privileges to get metadata from Denodo, they must have:

  • Connection privilege in the admin database within Denodo.
  • Connect privilege in your target database within Denodo.
  • Metadata privilege in your target database within Denodo.