Denodo Helm Charts Quick Start Guide


Introduction

This document is a quick start guide that describes the steps to deploy the Denodo Platform and the Solution Manager using Helm Charts distributed by Denodo.

Note that these Helm charts are compatible with update 20230301 and later.

Prerequisites

In order to use the Helm Charts to successfully deploy the Denodo Platform and Solution Manager in Kubernetes, the following prerequisites have to be in place:

  • A Kubernetes Cluster
  • Installed and configured Helm CLI. Helm can be installed either from source or from a pre-built binary release. The Helm official documentation describes the steps required to install the Helm CLI.
  • Access to the Harbor registry that holds the images and Helm charts, with the CLI secret created.

Creating CLI Secret

After authenticating with your Denodo Account and logging into the web interface for the first time, you need to use either the Docker CLI or the Helm CLI to access the registry and its artifacts. To do this, it is necessary to create a CLI secret in the Harbor registry.

In order to create the secret, click the username in the top right corner of the screen and select User Profile, which opens the dialog box to create the CLI secret.

In the User Profile dialog box, click the “GENERATE SECRET” button to create the CLI secret.

Note: A user can have only one CLI secret. If a new secret is generated, the old secret is invalidated. This CLI secret will be used at a later stage to authenticate the image pull.

Use Cases

This article helps to understand and implement the following two use cases of the Denodo Platform:

  • Solution Manager deployment in a Kubernetes cluster: use the Denodo distributed Helm charts to deploy the Solution Manager inside the Kubernetes cluster, with the server SSL/TLS enabled and the metadata stored in an external database.
  • Denodo Platform deployment in a Kubernetes cluster: use the Denodo distributed Helm charts to deploy the Denodo Platform inside the Kubernetes cluster, with the servers SSL/TLS enabled and the metadata stored in an external database.

Overall Deployment Steps

Below is the general overview of the steps that have to be followed to implement the mentioned use cases to deploy the Solution Manager and the Denodo Platform inside the Kubernetes cluster.

  1. Follow the below pre-installation steps:
  • Namespace creation
  • Image configuration
  • License configuration
  • SSL/TLS configuration
  2. Prepare the customized YAML configuration file (e.g. custom-values.yaml) with values that meet the use case requirements.
  3. Install the Helm chart of the respective component using the customized YAML configuration file created in the previous step.
  4. Upon successful installation, verify that the resources such as pods and services have been created in the cluster. This can be checked using the kubectl command.

Deployment of the Solution Manager

Initial Steps

It is necessary to perform the below steps before installing the Solution Manager Helm charts in the Kubernetes cluster:

  1. Create a namespace for the Solution Manager infrastructure.
  2. Image configuration.
  • Configure the registry credentials.
  • Configure the specific version of the image.
  3. License configuration.
  • Upload the license file into a secret.
  4. SSL/TLS configuration.

Create the namespace

In order to create a namespace inside a Kubernetes cluster, the kubectl utility has to be executed using the following syntax:

$ kubectl create namespace <namespace>

For example, to create the namespace called “solution-manager”, the below command needs to be executed:

$ kubectl create namespace solution-manager

Image Configuration

The default image of the Solution Manager is available in the Harbor registry, a private registry accessible through the Denodo Account that requires a CLI secret for access. This secret was created in the Prerequisites section of this document; it needs to be copied and used to create a secret in the Kubernetes cluster under the namespace created for the Solution Manager.

In order to create a secret inside the cluster with docker registry, the kubectl utility has to be executed using the following syntax:

$ kubectl create secret docker-registry <secret name> \
    --docker-server=<server location for docker registry> \
    --docker-username=<username for docker registry authentication> \
    --docker-password=<password for docker registry authentication> \
    --namespace <namespace into which the secret has to be created>

For example, you can use the following command to create a secret named “harbor-registry-secret” from the harbor registry server authenticated using the Denodo account user “xxxxx” and CLI secret as password inside the Solution Manager namespace “solution-manager”:

$ kubectl create secret docker-registry harbor-registry-secret \
    --docker-server=harbor.open.denodo.com \
    --docker-username=xxxxxx \
    --docker-password=xxxxx \
    --namespace solution-manager

After creating the secret “harbor-registry-secret”, its name will be used in the image pull section of the YAML specification (manifest) for the property image.pullSecrets, which will be explained later in this document.

On the other hand, if you would like to use a custom image instead of the default image provided by Denodo in the Harbor registry, the image can be configured using the properties image.repository and image.tag in the image section of the manifest. For a private registry, provide the image registry credentials by setting the secret name in the image.pullSecrets property. More information about pulling an image from a private registry can be found in the official Kubernetes documentation.
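For illustration, the image section of a custom-values.yaml might look like the following sketch, assuming the dotted parameter names map to nested YAML keys as is conventional for Helm values files. The repository and tag values below are placeholders, not real image coordinates:

```yaml
# Sketch of the image section in custom-values.yaml.
# repository and tag are illustrative placeholders.
image:
  repository: <registry>/<path>/solution-manager   # custom image location (placeholder)
  tag: "<version>"                                 # specific image version (placeholder)
  pullSecrets:
    - harbor-registry-secret                       # secret created in the Image Configuration section
```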

Note: In order to generate a custom image for Openshift(for example), the steps mentioned here can be followed.

License Configuration

By default, the Solution Manager uses a Secret called solution-manager-license to retrieve the license file. It is also possible to provide a different name for the secret, but in that case the name needs to be configured in the property license.secret of the Solution Manager manifest.

In order to upload the Denodo license file as the generic secret called solution-manager-license into a namespace, the kubectl utility has to be executed using the below syntax:

$ kubectl create secret generic <secret name> --from-file=denodo.lic=<path-to-denodo-license-file> --namespace <namespace into which the secret has to be created>

For example, the below command can be used to create the secret called “solution-manager-license” from the license file located at /mnt/c/denodo/denodo.lic in the Solution Manager namespace:

$ kubectl create secret generic solution-manager-license --from-file=denodo.lic=/mnt/c/denodo/denodo.lic --namespace solution-manager
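If a secret name other than the default is used, the license section of the values file must reference it. A minimal sketch, assuming the dotted license.secret parameter maps to a nested YAML key:

```yaml
# Point the chart at the Kubernetes Secret that holds denodo.lic.
license:
  secret: solution-manager-license   # name of the secret created above
```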

SSL/TLS Configuration

TLS will be configured using:

  • PEM-encoded key and certificate
  • PKCS12 keystore

Note: Ensure that the certificates are correct. The request hostname (which could be the service name) should match the certificate’s Common Name (CN) or Subject Alternative Name (SAN).

PEM-encoded key and certificate:

The key and the certificate can be loaded into the cluster using secrets (similar to the previous step). In order to create a secret named “certificates-tls-secret” that loads the certificate and key from the files tls.crt and tls.key, the following command can be executed:

$ kubectl create secret generic certificates-tls-secret --from-file=./tls.crt --from-file=./tls.key --namespace solution-manager

Then, in order to import the certificates into the JVM cacerts, their files can be added to the same secret by executing the following command:

$ kubectl create secret generic certificates-tls-secret --from-file=./tls.crt --from-file=./tls.key --from-file=./import-cert-1.cer --from-file=./import-cert-2.pem --namespace solution-manager

Upon creating the secret for the TLS configuration using the PEM-encoded key and certificates, the TLS parameters can be set to the appropriate values in the manifest. For example, in this case, the TLS section of the manifest would look like below:

tls.enabled: true

tls.secret: "certificates-tls-secret"

tls.certFilename: "tls.crt"

tls.certKeyFilename: "tls.key"

tls.importCerts: ["import-cert-1.cer", "import-cert-2.pem"]

Here, the parameter tls.certFilename indicates the entry name of the certificate file uploaded into the secret, and the parameter tls.certKeyFilename indicates the entry name of the key file uploaded into the secret.

In general, SSL/TLS configuration can be achieved in Denodo by executing the SSL/TLS configuration script (denodo_tls_configurator.sh) located under the <DENODO_HOME>/bin directory.

PKCS12 keystore

The keystore in PKCS12 format can be loaded into the cluster using secrets. For example, in order to create the secret named “pkcs12-tls-secret”, the below command can be executed in the cluster:

$ kubectl create secret generic pkcs12-tls-secret --from-file=./tls.p12 --namespace solution-manager

Then, in order to import the certificates into the JVM cacerts, their files can be added to the same secret by executing the following command:

$ kubectl create secret generic pkcs12-tls-secret --from-file=./tls.p12 --from-file=./import-cert-1.cer --from-file=./import-cert-2.pem --namespace solution-manager

Upon creating the secret, the TLS parameters of the manifest can be provided with the appropriate values as shown below:

tls.enabled: true

tls.secret: "pkcs12-tls-secret"

tls.type: "pkcs12"

tls.pkcs12Filename: "tls.p12"

tls.pkcs12Password: "<encrypted_keystore_password>"

tls.pkcs12PasswordEncrypted: "true"

tls.importCerts: ["import-cert-1.cer", "import-cert-2.pem"]

Note:

  • The keystore password (the value of tls.pkcs12Password) has to be encrypted using the script <DENODO_HOME>/bin/encrypt_password.sh.
  • tls.pkcs12Filename indicates the entry name of the PKCS12 file uploaded into the secret.

Solution Manager Architecture

The architecture below depicts the scenario that has been considered to deploy the Solution Manager inside the Kubernetes Cluster using Helm charts.

The diagram indicates that the Solution Manager components have been created in the Solution Manager namespace of a Kubernetes cluster. The License Manager and the Solution Manager have been deployed as independent pods, with the License Manager configured to run as two pods (i.e. two replicas). Their metadata is stored in an external database.

Both components are accessible externally through their respective Service configurations. As seen earlier, a few secrets have been created inside the Solution Manager namespace of the Kubernetes cluster.

Install the Solution Manager Helm Chart

The command helm install is used to install a Helm chart in a cluster, and there are multiple ways of using this command. The install argument can be a chart reference, a path to a packaged chart, a path to an unpacked chart directory, a URL, or an OCI registry reference.

Syntax:  helm install [NAME] [CHART] [flags]

In order to install the Solution Manager helm chart located in the OCI registry(harbor) in a namespace, the below command template has to be executed:

$ helm install <release name> oci://harbor.open.denodo.com/denodo-8.0/charts/solution-manager --version <version> --values <custom-values.yaml> --namespace <namespace>

where:

<release name>: arbitrary name for this release.

<version>: version constraint for the chart version to use.

<custom-values.yaml>: yaml file with customized/overridden values for the parameters that need to be considered while installing the chart.

<namespace>: namespace where the chart has to be installed.

Custom Values

In the above example, the “custom-values.yaml” file has been passed as an argument, which indicates that some of the parameters of the manifest have to be overridden to execute the use case.

NOTE: Sample “custom-values.yaml” specification file (solution-manager-custom-values.yaml) for this deployment scenario can be downloaded from the Appendix: Sample YAML files section of this document.

Details on the different parameters needed to build a <custom-values.yaml> file are included in the table below:

| Parameter | Sample Value | Description |
|---|---|---|
| image.pullSecrets | [harbor-registry-secret] | Specify the docker-registry secret names as an array. Use the name created in the section Image Configuration. |
| licenseManager.enabled | true | Enable it to use independent Pods for the License Manager Server. |
| licenseManager.replicaCount | 2 | Increase this value to launch more replicas of the License Manager. |
| database.enabled | true | To set up the external metadata database configuration. |
| database.provider | POSTGRE_SQL | Specify the database provider. Possible values: “EMBEDDED_DERBY”, “DERBY”, “ORACLE_11G”, “ORACLE_12C”, “SQL_SERVER”, “MYSQL”, “POSTGRE_SQL”. |
| database.uri | “jdbc:postgresql://postgres-service.default.svc.cluster.local:5432/solutionmanager” | Specify the database URI. |
| database.driver | "org.postgresql.Driver" | Specify the database driver. |
| database.user | <username> | Specify the database user name. |
| database.password | <encrypted password> | Specify the database password, encrypted with <DENODO_HOME>/bin/encrypt_password.sh <clear_password>. |
| database.passwordEncrypted | “true” | Indicates if the password is encrypted. |
| libExtensions.copy.jdbcDrivers | postgresql-12 | Specify the folder under <DENODO_HOME>/lib/extensions/jdbc-drivers to be copied into /opt/denodo/lib/solution-manager-extensions. Some valid values are: “oracle-21c”, “postgresql-12”, “mssql-jdbc-9.x”, ... |
| solutionManager.auth.username | <username> | To change the admin user and password, specify the username. |
| solutionManager.auth.password | <encrypted password> | Specify the Solution Manager password, encrypted with <DENODO_HOME>/bin/encrypt_password.sh <clear_password>. |
| solutionManager.auth.passwordEncrypted | true | Specify as “true” if the password for the custom admin user is encrypted. |
| tls.enabled | true | To enable TLS. |
| tls.secret | “pkcs12-tls-secret” | Specify the secret where the certificates are stored. Please refer to the SSL/TLS Configuration section. |
| tls.keystorePassword | <encrypted password> | Specify the encrypted keystore password. |
| tls.keystorePasswordEncrypted | true | Indicates that the keystore password is encrypted. |
| tls.type | “pkcs12” | Specify the type of the certificates: “pkcs12” to use a PKCS12 bundle, empty for the default crt and key in PEM format. |
| tls.certFilename | "tls.crt" | Specify the filename of the certificate. |
| tls.certKeyFilename | "tls.key" | Specify the filename of the key. |
| tls.certChainFilename | [] | Specify the filenames of the chain certificates. |
| tls.pkcs12Filename | "tls.p12" | Specify the filename of the PKCS12 keystore; tls.type should be “pkcs12”. |
| tls.pkcs12Password | <encrypted password> | Specify the encrypted password of the PKCS12 keystore; tls.type should be “pkcs12”. |
| tls.pkcs12PasswordEncrypted | true | Specify whether the PKCS12 password is encrypted or not. |
| tls.importCerts | [] | To import the certificates located by these filenames into the JVM cacerts. |
| service.type | ClusterIP, LoadBalancer or NodePort | Kubernetes Service type for the Solution Manager. The LoadBalancer type creates an external IP to access the Service. |
| licenseManager.service.type | ClusterIP, LoadBalancer or NodePort | Kubernetes Service type for the License Manager. The LoadBalancer type creates an external IP to access the Service. |

Note: The Ingress configuration is optional and depends on the final implementation class chosen for it (nginx, traefik, etc.). Access to the solution can also be done directly through the Services.
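Bringing the parameters above together, a solution-manager-custom-values.yaml for this use case might look like the following sketch. All usernames, passwords and the database URI are placeholders, and the nesting assumes the usual mapping of dotted parameter names to YAML keys:

```yaml
# Sketch of solution-manager-custom-values.yaml (TLS + external metadata database).
# Credentials and the database URI are placeholders.
image:
  pullSecrets:
    - harbor-registry-secret
licenseManager:
  enabled: true
  replicaCount: 2
  service:
    type: LoadBalancer
database:
  enabled: true
  provider: POSTGRE_SQL
  uri: "jdbc:postgresql://postgres-service.default.svc.cluster.local:5432/solutionmanager"
  driver: "org.postgresql.Driver"
  user: "<username>"
  password: "<encrypted password>"   # output of <DENODO_HOME>/bin/encrypt_password.sh
  passwordEncrypted: "true"
libExtensions:
  copy:
    jdbcDrivers: postgresql-12
tls:
  enabled: true
  secret: "pkcs12-tls-secret"
  type: "pkcs12"
  pkcs12Filename: "tls.p12"
  pkcs12Password: "<encrypted password>"
  pkcs12PasswordEncrypted: true
service:
  type: LoadBalancer
```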

Execute Solution Manager Helm Chart

Upon performing the above prerequisites and initial steps, the next step is to install the Helm chart to deploy the Solution Manager in the Kubernetes cluster, using the customized values (solution-manager-custom-values.yaml) for a few parameters pertaining to this use case.

Step 1: Execute the following helm command in the CLI:

$ helm install solutionmanager oci://harbor.open.denodo.com/denodo-8.0/charts/solution-manager --version 1.0.1 --values solution-manager-custom-values.yaml --namespace solution-manager

The above command deploys the Solution Manager Helm chart located in the Harbor (OCI) registry inside the namespace called “solution-manager”, using the overridden values provided in the “solution-manager-custom-values.yaml” file for the parameters related to TLS and the external metadata database, and names this release “solutionmanager”.

Step 2: Execute the following kubectl command to list the pods created inside the “solution-manager” namespace.

$ kubectl get pods --namespace solution-manager

Step 3: Execute the following command to list the services created in the “solution-manager” namespace.

$ kubectl get services --namespace solution-manager

Deployment of the Denodo Platform

Initial Steps

It is necessary to perform the below steps before installing the Denodo Platform Helm charts in the Kubernetes cluster. These steps are similar to those performed in the Initial Steps section of the Solution Manager deployment, except that they need to be performed in the Denodo Platform namespace:

  1. Create a namespace for the Denodo Platform infrastructure.
  2. Image configuration.
  • Configure the registry credentials.
  • Configure the specific version of the image.
  3. License configuration.
  • Upload the license file into a secret.
  • Configure a License Manager Service.
  4. SSL/TLS configuration.

Create the namespace

In order to create a namespace inside a Kubernetes cluster, the kubectl utility has to be executed using the following syntax:

$ kubectl create namespace <namespace>

For example, in order to create the namespace called “denodo-platform” to manage the Denodo Platform resources, the below command needs to be executed:

$ kubectl create namespace denodo-platform

Image Configuration

The default image of the Denodo Platform is available in the Harbor registry, a private registry accessible through the Denodo Account that requires a CLI secret for access. This secret was created in the Prerequisites section of this document; it needs to be copied and used to create a secret in the Kubernetes cluster under the namespace created for the Denodo Platform.

In order to create a secret inside the cluster with the Docker registry, kubectl utility has to be executed using the following syntax:

$ kubectl create secret docker-registry <secret name> \
    --docker-server=<server location for docker registry> \
    --docker-username=<username for docker registry authentication> \
    --docker-password=<password for docker registry authentication> \
    --namespace <namespace into which the secret has to be created>

For example, you can use the following command to create a secret named “harbor-registry-secret” from the harbor registry server, authenticated using the Denodo account user “xxxxx” and the CLI secret as password, inside the Denodo Platform namespace “denodo-platform”:

$ kubectl create secret docker-registry harbor-registry-secret \
    --docker-server=harbor.open.denodo.com \
    --docker-username=xxxxxx \
    --docker-password=xxxxxx \
    --namespace denodo-platform

After creating the secret “harbor-registry-secret”, its name will be used in the image pull section of the YAML specification (manifest) for the property image.pullSecrets, which will be explained later in this document.

On the other hand, if you would like to use a custom image instead of the default image provided by Denodo in the Harbor registry, the image can be configured using the properties image.repository and image.tag in the image section of the manifest. For a private registry, provide the image registry credentials by setting the secret name in the image.pullSecrets property. More information about pulling an image from a private registry can be found in the official Kubernetes documentation.

Note: In order to generate a custom image for Openshift(for example), the steps mentioned here can be followed.

License Configuration

In order to retrieve the license file, Denodo Platform can use one of the following options:

  • Secret stored in Kubernetes cluster.
  • License Manager Server configuration.

Secret

By default, the Denodo Platform uses a Secret called denodo-license to retrieve the license file. It is also possible to provide a different name for the secret, but in that case the name needs to be configured in the property license.secret of the Denodo Platform manifest.

In order to upload the Denodo license file as the generic secret called denodo-license into a namespace, the kubectl utility has to be executed using the below syntax:

$ kubectl create secret generic <secret name> --from-file=denodo.lic=<path-to-denodo-license-file> --namespace <namespace into which the secret has to be created>

For example, the below command can be used to create the secret called “denodo-license” from the license file located at /mnt/c/denodo/denodo.lic in the Denodo Platform namespace:

$ kubectl create secret generic denodo-license --from-file=denodo.lic=/mnt/c/denodo/denodo.lic --namespace denodo-platform

License Manager Server Configuration

In order to use a License Manager Server to retrieve the license information, it is necessary to configure its connection details using the following values in the license section of the YAML specification (manifest):

| Parameter | Sample Value | Description |
|---|---|---|
| license.proto | http/https | License Manager configuration protocol. With https, it is required to import the correct certificates using the `tls.importCerts` parameter. |
| license.host | <hostname> | License Manager configuration host. |
| license.port | 10091 (by default) | License Manager configuration port. |

Note: In order to retrieve the license from the License Manager server, the server has to be registered previously in the Solution Manager. This process can be automated by following the steps provided in the Solution Manager Auto Registration section.

How to retrieve the License Manager host information

The Denodo Platform uses a License Manager to retrieve the license, and therefore requires connectivity to it. If there is a reachable external hostname to access the License Manager, it can be used.

If the License Manager Server runs in the same Kubernetes cluster but in a different namespace, it can be accessed through its Service. To achieve this, it is necessary to compose the host value with the following nomenclature:

<name of the license manager service>.<name of the solution manager namespace>.svc.cluster.local

The below command can be executed to retrieve the service information to find the License Manager service name that resides in the Solution Manager namespace “solution-manager”.

 $ kubectl get services -n solution-manager

For example, if you need to access the License Manager service “license-manager-service” that runs in the Solution Manager namespace “solution-manager”, the hostname can be composed like below:

license-manager-service.solution-manager.svc.cluster.local

A similar approach can be followed to retrieve the complete service name of the Solution Manager as well.
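For example, the license section of the Denodo Platform values file could point at that License Manager service. A sketch assuming the default License Manager port and the usual mapping of dotted parameters to nested YAML keys:

```yaml
# Retrieve the license from a License Manager running in the
# "solution-manager" namespace of the same cluster.
license:
  proto: http   # with https, import the certificates via tls.importCerts
  host: license-manager-service.solution-manager.svc.cluster.local
  port: 10091   # default License Manager port
```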

Solution Manager Auto Registration

For auto registration during startup, the first step is to configure the following values to point to the Solution Manager:

| Parameter | Sample Value | Description |
|---|---|---|
| solution.proto | http/https | Solution Manager configuration protocol. With https, it is required to import the correct certificates using the `tls.importCerts` parameter. |
| solution.host | <hostname> | Solution Manager configuration host. |
| solution.port | 10090 (by default) | Solution Manager configuration port. |
| securityToken.proto | http/https | Denodo Security Token protocol ("http" or "https"). With https, import the correct certificates using `tls.importCerts`. |
| securityToken.host | <hostname> | Denodo Security Token host. |
| securityToken.port | <port number> | Denodo Security Token port. |

The next step is to create the environment and cluster in the Solution Manager and configure the following values so that the servers are registered into them:

| Parameter | Sample Value | Description |
|---|---|---|
| solution.register.environment | <environment name> | Name of the environment created in the Solution Manager where the servers will be registered. |
| solution.register.cluster | <cluster name> | Name of the cluster created in the Solution Manager where the servers will be registered. |
| solution.register.solutionManagerUser | <username> | Username to authenticate the registration request against the Solution Manager. The default username is used if it is not present. |
| solution.register.solutionManagerPass | <password> | Password to authenticate the registration request against the Solution Manager. The default password is used if it is not present. |
| solution.register.solutionManagerPassEncrypted | true/false | Indicates whether solution.register.solutionManagerPass is encrypted or not. It is false by default. |
| solution.register.fqdn | true/false | To use the FQDN obtained with `hostname --fqdn` as the host in the server registration process. |

Considerations to register the server properly

This depends on where the Solution Manager is located, internally or externally.

  • External:
  • Deployment.
  • Use a fixed hostname to override the host with the external hostname used to access the platform. For example the Service or an external name.
  • Internal in the same Kubernetes cluster:
  • StatefulSet.
  • solution.register.fqdn to true to obtain the FQDN that will be accessible from inside the Kubernetes cluster (In case of Solution Manager being launched in it).
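As a sketch, the auto-registration values for a Solution Manager running inside the same cluster might look like the following; the environment, cluster, credentials and the service hostname are all placeholders:

```yaml
# Sketch of the auto-registration section of the Denodo Platform values file.
# All names and credentials below are placeholders.
solution:
  proto: http
  host: <solution manager service>.solution-manager.svc.cluster.local
  port: 10090
  register:
    environment: <environment name>
    cluster: <cluster name>
    solutionManagerUser: <username>
    solutionManagerPass: "<encrypted password>"
    solutionManagerPassEncrypted: true
    fqdn: true   # internal Solution Manager, StatefulSet scenario
```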

Denodo Platform Architecture

The architecture below depicts the scenario that has been considered to deploy the Denodo Platform components inside the Kubernetes Cluster using Helm charts.

The diagram indicates that the Denodo Platform components, Virtual DataPort, Scheduler, Data Catalog and Design Studio, are created in the Denodo Platform namespace of a Kubernetes cluster. All components are deployed as independent pods with two replicas each, and the metadata is stored in an external database. Although the metadata can be shared in the same database, each component can use a different schema to store its own metadata.

All components are accessible externally through their respective Services. As seen earlier, a few secrets have been created inside the Denodo Platform namespace of the cluster.

Install the Denodo Platform Helm Chart

As seen in the Solution Manager installation section, the command helm install is used to install a Helm chart in a cluster, and there are multiple ways of using this command. The install argument can be a chart reference, a path to a packaged chart, a path to an unpacked chart directory, a URL, or an OCI registry reference.

Syntax:  helm install [NAME] [CHART] [flags]

For instance, in order to install the Denodo Platform helm chart located in the OCI registry(harbor) in a namespace, the below command template has to be executed:

$ helm install <release-name> oci://harbor.open.denodo.com/denodo-8.0/charts/denodo-platform --version <version> --values <custom-values.yaml> --namespace <namespace>

<release name>: arbitrary name for this release.

<version> : version constraint for the chart version to use.

<custom-values.yaml> : yaml file with customized/overridden values for parameters that need to be considered while installing the chart.

<namespace> : namespace where the chart has to be installed.

Custom Values

In the above example, the “<custom-values.yaml>” file has been passed as an argument which indicates that some of the parameters of the manifest have to be overridden to execute the use case.  

NOTE:  A complete sample custom values yaml specifications file (denodo-platform-custom-values.yaml) is available to download in the Appendix: Sample YAML files section of this document.

The table below describes the different parameters that can be configured when creating a <custom-values.yaml> file:

Parameter

Sample Value

Description

Image

image.pullSecrets

[harbor-registry-secret]

Specify docker-registry secret names as an array. Use the name created in the section Image Configuration

Virtual DataPort Configuration for an independent deployment

denodo.auth.username

<admin username>

Name of the custom admin user to be created

denodo.auth.password

<password>

Password for the admin user

denodo.auth.passwordEncrypted

true/false

Specifies if the password for the custom admin user is encrypted

denodo.replicaCount

2

Increase this value to launch more replicas

denodo.resources

resources:

  limits:

    cpu: 8000m

    memory: 16000Mi

  requests:

    cpu: 8000m

    memory: 16000Mi

Specifies resource limits and requests. Mandatory for license restrictions and better resources allocation. Values can be adjusted as appropriately, but the limits must be equal to the requests to allocate the Pod as Guaranteed quality of service (QoS) class.

denodo.database.enabled

true

To set up external database metadata configuration

denodo.database.provider

“postgresql”

Specify the database provider. Possible values are derby, mysql, oracle, postgresql, sqlserver, azure

denodo.database.providerVersion

“12”

Specify the database provider version(optional)

denodo.database.uri

"jdbc:postgresql://postgres-service.default.svc.cluster.local:5432/vdp"

Specify the database uri

denodo.database.driver

"org.postgresql.Driver"

Specify the database driver

denodo.database.classpath

“postgresql-12”

Specify the path for the folder that contains the JAR files with the implementation classes needed by the JDBC adapter

denodo.database.user

<username>

Specify the database user name

denodo.database.password

<encrypted password>

Specify the database password encrypted with <DENODO_HOME>bin/encrypt_password.sh  <clear_password>

denodo.database.passwordEncrypted

“true”

Indicates if the password is encrypted

denodo.database.catalog

Database custom catalog.

denodo.database.schema

“vdp”

Database custome schema

service.type

ClusterIP

Kubernetes Service Type (ClusterIP, LoadBalancer, NodePort) for Denodo

Design Studio Configuration for an independent deployment

designStudio.enabled

true

To deploy an independent Design Studio workload

designStudio.replicaCount

2

Increase this value to launch more replicas

designStudio.service.type

ClusterIP

Kubernetes Service Type (ClusterIP, LoadBalancer, NodePort) for Denodo

designStudio.auth.password

<password>

To change the password for the local user in Design Studio. I.e web local login (http://localhost:9090/denodo-design-studio/#/web-local-login)

designStudio.auth.passwordEncrypted

true/false

Boolean value that specifies if the password for the local user in Design Studio is encrypted using the script:

`<DENODO_HOME>/bin/encrypt_password.sh.  Default is false

Data Catalog Configuration for an independent deployment

datacatalog.enabled

true

Enable to use independent workload for the Data Catalog

datacatalog.replicaCount

2

Increase the value to launch more replicas of Data Catalog

datacatalog.resources

<Similar to Virtual DataPort configuration defined above>

Specifies resource limits and requests. Mandatory for license restrictions and better resources allocation.  Values can be adjusted as appropriately, but the limits must be equal to the requests to allocate the Pod as Guaranteed quality of service (QoS) class.

dataCatalog.service.type

ClusterIP

Kubernetes Service Type (ClusterIP, LoadBalancer, NodePort) for Scheduler

dataCatalog.auth.password

<password>

To change the password for the local user in Data Catalog. I.e web local login (http://localhost:9090/denodo-data-catalog/#/web-local-login)

dataCatalog.auth.passwordEncrypted

true/false

Boolean value that specifies if the password for the local user in the Data Catalog is encrypted using the script <DENODO_HOME>/bin/encrypt_password.sh. Default is false.

dataCatalog.server.name

<server name>

Specifies the default server name

dataCatalog.server.host

hostname of the service that points to the Virtual DataPort

Specifies the default server host.

dataCatalog.server.port

9999(default)

Specifies the default server port.

dataCatalog.database.*

 

* indicates all the parameters under database.

Specifies the database values used to configure the external metadata database. For example, dataCatalog.database.provider is described in the next row; the other database-related settings need to be configured in the same way.

dataCatalog.database.provider

“postgresql”

Specifies the database provider. For example: derby, mysql, oracle, postgresql, sqlserver, azure

libExtensions.copy.jdbcDrivers

“postgresql-12”

Specifies the folder inside <DENODO_HOME>/lib/extensions/jdbc-drivers to be copied into /opt/denodo/lib/data-catalog-extensions. Some valid values are: “oracle-21c”, “postgresql-12”, “mssql-jdbc-9.x”, ...
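The Data Catalog parameters above can be sketched as a values fragment like the one below. Note that the table mixes the spellings datacatalog and dataCatalog; this sketch assumes the camelCase form, so check the chart's values.yaml for the authoritative key names. The hostname and driver folder are illustrative placeholders.

```yaml
# Illustrative sketch only; key names and values must match the chart's values.yaml.
dataCatalog:
  enabled: true
  replicaCount: 2
  service:
    type: ClusterIP
  server:
    host: denodo-service     # hypothetical Service pointing to Virtual DataPort
    port: 9999
  database:
    provider: postgresql     # external metadata database
libExtensions:
  copy:
    jdbcDrivers: "postgresql-12"   # copied into /opt/denodo/lib/data-catalog-extensions
```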

Scheduler Configuration for an independent deployment

scheduler.enabled

true

Enables an independent workload for the Scheduler

scheduler.replicaCount

2

Increase the value to launch more replicas of Scheduler

scheduler.resources

<Similar to Virtual DataPort configuration defined above>

Specifies resource limits and requests. Mandatory for license restrictions and better resource allocation. Values can be adjusted as appropriate, but the limits must be equal to the requests for the Pod to be assigned the Guaranteed quality of service (QoS) class.

scheduler.service.type

ClusterIP

Kubernetes Service Type (ClusterIP, LoadBalancer, NodePort) for Scheduler

scheduler.auth.password

<password>

To change the password for the local user in the Scheduler, i.e. the web local login (http://localhost:9090/webadmin/denodo-scheduler-admin/?auth=login/#/local-login)

scheduler.auth.passwordEncrypted

true/false

Boolean value that specifies if the password for the local user in the Scheduler is encrypted using the script <DENODO_HOME>/bin/encrypt_password.sh. Default is false.

scheduler.authIndex.password

<password>

To change the password for the local user in the Scheduler Index.

scheduler.authIndex.passwordEncrypted

true/false

Boolean value that specifies if the password for the local user in the Scheduler Index is encrypted using the script <DENODO_HOME>/bin/encrypt_password.sh. Default is false.

scheduler.authWeb.password

<password>

To change the password for the local user in the Scheduler web admin. For example, to log in to the Web Administration tool:

http://localhost:9090/webadmin/denodo-scheduler-admin/#/web-local-login

scheduler.authWeb.passwordEncrypted

true/false

Boolean value that specifies if the password for the local user in the Scheduler web admin is encrypted using the script <DENODO_HOME>/bin/encrypt_password.sh. Default is false.

scheduler.server.name

<server name>

Specifies the default server name

scheduler.server.host

hostname of the service that points to the Virtual DataPort

Specifies the default server host.

scheduler.server.port

9999(default)

Specifies the default server port.

scheduler.database.*

 

* indicates all the parameters under database.

Specifies the database values used to configure the external metadata database for the Scheduler. For example, scheduler.database.provider is described in the next row; the other database-related settings need to be configured in the same way.

scheduler.database.provider

“postgresql”

Specifies the database provider, one of the supported engines: derby, mysql, oracle, postgresql, sqlserver, azure, aurora mysql or aurora postgresql

libExtensions.copy.jdbcDrivers

“postgresql-12”

Specifies the folder inside <DENODO_HOME>/lib/extensions/jdbc-drivers to be copied into /opt/denodo/lib/scheduler-extensions. Some valid values are: “oracle-21c”, “postgresql-12”, “mssql-jdbc-9.x”, ...
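Likewise, the Scheduler parameters above can be combined into a values fragment along these lines. This is an illustrative sketch: passwords are placeholders, the Service hostname is hypothetical, and the key names should be verified against the chart's values.yaml.

```yaml
# Illustrative sketch only; verify key names against the chart's values.yaml.
scheduler:
  enabled: true
  replicaCount: 2
  service:
    type: ClusterIP
  auth:
    password: "<password>"        # Scheduler local user (placeholder)
  authIndex:
    password: "<password>"        # Scheduler Index local user (placeholder)
  authWeb:
    password: "<password>"        # Scheduler web admin local user (placeholder)
  server:
    host: denodo-service          # hypothetical Service pointing to Virtual DataPort
    port: 9999
  database:
    provider: postgresql
libExtensions:
  copy:
    jdbcDrivers: "postgresql-12"  # copied into /opt/denodo/lib/scheduler-extensions
```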

Note: the Ingress configuration is optional and depends on the ingress implementation class (nginx, traefik, etc.). Access to the solution can also be done directly through the services.

Execute Denodo Platform Helm Chart

After completing the prerequisites and initial steps above, the next step is to install the Helm chart that deploys the Denodo components (VDP, Scheduler, Design Studio and Data Catalog) in the Kubernetes cluster, using the customized values (denodo-platform-custom-values.yaml) for the few parameters pertaining to this use case.

Step 1: Execute the following command in the CLI

$ helm install denodoplatform oci://harbor.open.denodo.com/denodo-8.0/charts/denodo-platform --version 1.0.2 --values denodo-platform-custom-values.yaml --namespace denodo-platform

The above command deploys the Denodo Platform Helm chart located in the Harbor registry (OCI) into the namespace “denodo-platform”, using the values overridden in “denodo-platform-custom-values.yaml” for the parameters related to TLS and the external metadata database configuration, and names the release “denodoplatform”.

Step 2: Execute the following command to list the pods created in the Denodo Platform namespace.

$ kubectl get pods --namespace denodo-platform

Step 3: Execute the following command to list the services created in the Denodo Platform namespace.

$ kubectl get services --namespace denodo-platform

Additional Information

AutoScaling

Auto-scaling can be achieved by means of the HorizontalPodAutoscaler resource, and it can be configured for Denodo (VDP), Design Studio, Data Catalog and Scheduler. In order to launch a HorizontalPodAutoscaler resource, it is mandatory, as a prerequisite, to specify the CPU limits and requests in the parameters (*.resources.limits and *.resources.requests) of the respective components.

For example, in the case of an independent deployment of Data Catalog, the parameters dataCatalog.resources.limits and dataCatalog.resources.requests have to be filled in.
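As a concrete illustration of the limits-equal-requests rule, a resources fragment might look like the sketch below. The CPU and memory figures are placeholders, not sizing recommendations.

```yaml
# Illustrative sketch; adjust figures to your workload.
dataCatalog:
  resources:
    requests:
      cpu: "1"
      memory: 4Gi
    limits:        # equal to the requests, so the Pod is placed in the Guaranteed QoS class
      cpu: "1"
      memory: 4Gi
```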

To launch the HorizontalPodAutoscaler resource, it is necessary to fill the values for the autoscaling related parameters for each component. For example, Denodo auto scaling parameters are explained below:

Parameter

Sample Value

Description

denodo.autoscaling.enabled

true

If it is enabled, then it overrides the denodo.replicaCount

denodo.autoscaling.minReplicas

1

Minimum number of replicas

denodo.autoscaling.maxReplicas

4

Maximum number of replicas

denodo.autoscaling.version

autoscaling/v2

API version of HorizontalPodAutoscaler

denodo.autoscaling.metrics

- type: Resource

  resource:

    name: cpu

    target:

      type: Utilization

      averageUtilization: 80

Metrics definition (in YAML) on which scaling decisions are based

denodo.autoscaling.behavior

behavior:

  scaleDown:

    stabilizationWindowSeconds: 300

    policies:

      - type: Pods

        value: 1

        periodSeconds: 300

Configures separate scale-up and scale-down behaviors. These behaviors are specified by setting scaleUp and/or scaleDown under the behavior field.
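Assembled from the parameters above, a complete denodo.autoscaling fragment might look like this sketch; the indentation and key names should be double-checked against the chart's values.yaml.

```yaml
# Illustrative sketch combining the auto-scaling parameters described above.
denodo:
  autoscaling:
    enabled: true                  # overrides denodo.replicaCount
    minReplicas: 1
    maxReplicas: 4
    version: autoscaling/v2        # HorizontalPodAutoscaler API version
    metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 80 # scale out when average CPU utilization exceeds 80%
    behavior:
      scaleDown:
        stabilizationWindowSeconds: 300
        policies:
          - type: Pods
            value: 1               # remove at most one Pod per periodSeconds
            periodSeconds: 300
```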

Appendix: Sample YAML files

Sample custom values files to deploy the Helm charts of the Denodo Platform and the Solution Manager are attached here:

Solution Manager Custom Values YAML

solution-manager-custom-values.yaml

Denodo Platform Custom Values YAML

denodo-platform-custom-values.yaml

References

Installing Helm

Using Helm

Kubernetes Documentation

Denodo Docker Container Configuration