
Introduction

The Denodo Platform can be containerized and run on a container platform such as Docker. Containers ease the adoption of modern architectures, for instance, microservices, which can be orchestrated with Kubernetes.

This document explains how to deploy Denodo containers using Kubernetes.

Deployment

Kubernetes is an orchestration system for containers: it allows IT teams not only to manage the deployment of containerized applications, but also to scale a deployment by increasing its number of replicas. Kubernetes also supports other operations, for instance, updating running containers to a newly released version.

This document makes use of the Kubernetes command-line tool (kubectl), which communicates with the Kubernetes cluster via the Kubernetes API. It assumes that Kubernetes and Docker are already installed and working. It is also recommended to follow the Denodo Platform Container QuickStart Guide as a prerequisite for this article, as it verifies that the Denodo container runs successfully in the Docker installation. Denodo users who are new to Kubernetes can also check the Kubernetes Basics tutorial to learn the basic concepts and usage of Kubernetes.

NOTE: For users with Helm experience, note that Denodo also provides Helm charts which can be used to deploy the Denodo Platform following best practices; more information about this deployment method can be found in the Denodo Helm Charts Quick Start Guide page from the Knowledge Base.

Registering a server in the Solution Manager

For the Denodo Platform servers to start and to receive deployed changes, they must obtain a valid license and be configured to use Denodo Security Tokens. To register the server in the Solution Manager, a Virtual DataPort server element must be created:

Each Virtual DataPort instance in the Kubernetes cluster will reference this single element and request CPUs against the pool of CPUs available to the specific environment. Note that if the Data Catalog and Scheduler services are started, individual elements for those services should be created in the Solution Manager as well.

To configure deployments in Denodo Platform clusters, click on the Deployments tab of the associated environment. Make sure that the following options are set to “Yes”:

  • Enable deployments.
  • VDP Servers use a shared metadata database.

The Denodo Solution Manager can also be deployed in Kubernetes; for more information about this configuration, please refer to how to deploy the Denodo Solution Manager in Kubernetes.

Configuring Kubernetes to pull Denodo images from the Harbor Repository

In order to pull Denodo containers directly from the Denodo Harbor repository, registry secrets must be configured so that the Kubernetes cluster can authenticate to the repository. To obtain the required credentials, navigate to the Denodo Harbor Repository and log in with your Denodo Support Site credentials.

To create a CLI secret, click the username in the top right corner of the screen and select User Profile.

In the User Profile popup dialog box (shown below), click on the “GENERATE SECRET” button to create a CLI secret.

Note: A user can have only one CLI secret. If a new secret is generated, the old secret will be invalidated.

The registry secret that will be used later on in this tutorial can then be created with the following command:

$ kubectl create secret docker-registry harbor-registry-secret --docker-server=harbor.open.denodo.com --docker-username=xxxxxx --docker-password=xxxxx

Where “--docker-username” is the user’s Support Site account username, and the “--docker-password” is the CLI secret generated in Harbor.
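If you want to review the manifest that this command generates before applying it to the cluster, kubectl's client-side dry run can print it without creating anything. This is a sketch; the username and password shown are placeholders:

```shell
# Render the registry secret locally without creating it in the cluster.
# Replace the placeholder credentials with real values before applying.
kubectl create secret docker-registry harbor-registry-secret \
  --docker-server=harbor.open.denodo.com \
  --docker-username=xxxxxx \
  --docker-password=xxxxx \
  --dry-run=client -o yaml
```

The output is a standard `kubernetes.io/dockerconfigjson` Secret manifest that could also be applied later with `kubectl apply -f`, keeping in mind that it contains the credentials in base64 form.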

Denodo Kubernetes service

The Denodo Platform versions 8.0 and higher introduce a new RMI implementation which simplifies the network configuration of the Virtual DataPort servers. The new RMI implementation requires only one port in order to establish a connection with the Virtual DataPort Administration Tool and JDBC clients; however, for JMX connections, the two standard RMI Registry and RMI Factory ports are still used.

The following YAML configuration file defines a Kubernetes Service and Deployment that will deploy the Denodo Platform 8.0 and higher containers generated by Denodo in a Kubernetes cluster:

apiVersion: v1
kind: Service
metadata:
  name: denodo-service
spec:
  selector:
    app: denodo-app
  ports:
  - name: svc-denodo
    protocol: "TCP"
    port: 9999
    targetPort: denodo-port
  - name: svc-rmi-r
    protocol: "TCP"
    port: 9997
    targetPort: jdbc-rmi-rgstry
  - name: svc-rmi-f
    protocol: "TCP"
    port: 9995
    targetPort: jdbc-rmi-fctory
  - name: svc-odbc
    protocol: "TCP"
    port: 9996
    targetPort: odbc
  - name: svc-web
    protocol: "TCP"
    port: 9090
    targetPort: web-container
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: denodo-deployment
spec:
  selector:
    matchLabels:
      app: denodo-app
  replicas: 1
  template:
    metadata:
      labels:
        app: denodo-app
    spec:
      hostname: denodo-hostname
      imagePullSecrets:
        - name: harbor-registry-secret
      containers:
        - name: denodo-container
          image: harbor.open.denodo.com/denodo-<version>/images/denodo-platform:latest
          imagePullPolicy: IfNotPresent
          command:
            - /opt/denodo/tools/container/entrypoint.sh
          args:
            - --vqlserver
          ports:
            - name: denodo-port
              containerPort: 9999
            - name: jdbc-rmi-rgstry
              containerPort: 9997
            - name: jdbc-rmi-fctory
              containerPort: 9995
            - name: odbc
              containerPort: 9996
            - name: web-container
              containerPort: 9090
          env:
            - name: DENODO_LM_PROTO
              value: "http"
            - name: DENODO_LM_HOST
              value: "solution-manager-service"
            - name: DENODO_LM_PORT
              value: "10091"
            - name: DENODO_SSO_PROTO
              value: "http"
            - name: DENODO_SSO_HOST
              value: "solution-manager-service"
            - name: DENODO_SSO_PORT
              value: "19090"
            - name: DENODO_SSO_LOGIN_ENABLED
              value: "true"
          startupProbe:
            failureThreshold: 20
            initialDelaySeconds: 30
            periodSeconds: 20
            successThreshold: 1
            timeoutSeconds: 10
            tcpSocket:
              port: denodo-port
          readinessProbe:
            failureThreshold: 1
            initialDelaySeconds: 30
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 5
            exec:
              command:
                - /opt/denodo/bin/ping.sh
                - -t
                - "5000"
                - -r
                - "//localhost:9999"
          livenessProbe:
            failureThreshold: 6
            initialDelaySeconds: 50
            periodSeconds: 20
            successThreshold: 1
            timeoutSeconds: 10
            exec:
              command:
                - /opt/denodo/bin/ping.sh
                - -t
                - "10000"
                - "//localhost:9999"

denodo-service.yaml for Denodo 8.0

NOTE: The “<version>” tag should be replaced with the image version to use; for example, “8.0”. Additionally, note that if the JDBC port of the Denodo Platform (9999) is modified, the liveness and readiness probes should be updated to point to the new port.

Regarding the environment variables declared in this file:

  • DENODO_LM_PROTO: This variable defines whether HTTP or HTTPS should be used in communication with the License Manager server. The values can be “https” or “http”, with “http” being the default value.
  • DENODO_LM_HOST: Hostname of the License Manager.
  • DENODO_LM_PORT: Port of the License Manager. Uses 10091 as default.
  • DENODO_SSO_PROTO: Same as “DENODO_LM_PROTO” but for accessing the Solution Manager’s web server container.
  • DENODO_SSO_HOST: Hostname of the Denodo Security Token server. This should be the hostname used to access the web container of the Solution Manager.
  • DENODO_SSO_PORT: Port of the Denodo Security Token. Empty by default. Note that the default port of the Solution Manager is 19090 with TLS disabled, and 19443 with TLS enabled. No port implies that the communication will be established over the default HTTP or HTTPS port (defined by “DENODO_SSO_PROTO”) which is 80 or 443 respectively.
  • DENODO_SSO_LOGIN_ENABLED: Defines whether users can use Solution Manager SSO credentials to access the Virtual DataPort server. Takes values of “true” or “false”, with “false” being the default.

More information about the environment variables available in the entrypoint script of the Denodo containers can be found in the Denodo Docker container configuration page.

To create both elements, the Service and the Deployment, in the Kubernetes environment, save the definition to a file named something like denodo-service.yaml. Then, execute the following Kubernetes command in a console:

> kubectl create -f denodo-service.yaml

Execution of the denodo-service.yaml
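After creating the resources, it can be useful to confirm that the deployment has rolled out and that the pod is ready before continuing. A possible check, assuming the resource names from the YAML above:

```shell
# Wait until the deployment reports all replicas available
kubectl rollout status deployment/denodo-deployment

# List the pods created by the deployment, using its label selector
kubectl get pods -l app=denodo-app
```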

Although this article does not try to explain how Kubernetes works or how YAML files are written, it is worth highlighting the following points from the YAML definition:

  • The Denodo Platform exposes five ports: 9999, 9997, 9996, 9995 and 9090. The ports for JDBC (9999), ODBC (9996), and web services (9090) can be changed on the service to any port desired by the user. However, the RMI registry and factory ports (9997 and 9995 in the example) must match between the service and the containers, because RMI connections on the registry port are redirected to the factory port (see The RMI Protocol page in the Knowledge Base for more information). Note also that JMX connections should not be made through the load balancer, since it is important to know which server is being monitored. Instead, use a sidecar container running the Denodo Monitor or another client that can consume JMX.
  • All the Denodo pods will share the hostname denodo-hostname, as this is the selected way to configure RMI appropriately with the standard Denodo container. The RMI configuration is applied automatically by the startup script included in the container, using this hostname. Notice that the hostname field cannot include dots; if needed, Kubernetes also provides a subdomain field so that a FQDN can be assigned to the pod.
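For a quick connectivity test that does not depend on the LoadBalancer being provisioned, kubectl can forward service ports to the local machine. A sketch, using the port numbers from the service definition above; this is only suitable for ad hoc testing, not production access:

```shell
# Forward the JDBC and web ports of the service to localhost.
# While this runs, the web container is reachable at http://localhost:9090
# and JDBC clients can connect to localhost:9999.
kubectl port-forward service/denodo-service 9999:9999 9090:9090
```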

Testing

After deploying the Denodo service and deployment in Kubernetes, connections can be made to the Denodo Platform. In order to do this, the service must be inspected to determine how external access is allowed to the platform.

The following command will provide all the information required regarding the Kubernetes Service that is deployed:

> kubectl describe service denodo-service

Name:                     denodo-service
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=denodo-app
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.43.173.79
IPs:                      10.43.173.79
LoadBalancer Ingress:     172.31.110.158
Port:                     svc-denodo  9999/TCP
TargetPort:               denodo-port/TCP
NodePort:                 svc-denodo  31366/TCP
Endpoints:                10.42.0.205:9999
Port:                     svc-rmi-r  9997/TCP
TargetPort:               jdbc-rmi-rgstry/TCP
NodePort:                 svc-rmi-r  30076/TCP
Endpoints:                10.42.0.205:9997
Port:                     svc-rmi-f  9995/TCP
TargetPort:               jdbc-rmi-fctory/TCP
NodePort:                 svc-rmi-f  31515/TCP
Endpoints:                10.42.0.205:9995
Port:                     svc-odbc  9996/TCP
TargetPort:               odbc/TCP
NodePort:                 svc-odbc  32182/TCP
Endpoints:                10.42.0.205:9996
Port:                     svc-web  9090/TCP
TargetPort:               web-container/TCP
NodePort:                 svc-web  30611/TCP
Endpoints:                10.42.0.205:9090
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  EnsuringLoadBalancer  14m   service-controller  Ensuring load balancer
  Normal  AppliedDaemonSet      14m   service-controller  Applied LoadBalancer DaemonSet kube-system/svclb-denodo-service-1a228fb2
  Normal  UpdatedLoadBalancer   14m   service-controller  Updated LoadBalancer with new IPs: [] -> [172.31.110.158]

kubectl describe denodo-service output

In this example, the LoadBalancer service type provided by Kubernetes is used, which publishes an external endpoint that allows connections to the service. The public IP address is given by the “LoadBalancer Ingress:” attribute; how the endpoint is exposed may vary depending on the service type, the cloud provider, and the Kubernetes cluster. Please ensure that this endpoint is reachable from the Solution Manager.

After understanding the available endpoint, the Solution Manager can be configured to connect to the Denodo Platform. This is done by opening the cluster configuration and updating the “VDP Server Load Balancer URL” to point to the public “denodo-service” endpoint:

Configuring a VDP Server Load Balancer URL in a cluster of the Solution Manager

Where “denodo-service” should resolve to the public IP address shown in the “kubectl describe service denodo-service” command.
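The external address can also be retrieved non-interactively, which is convenient when scripting the Solution Manager configuration. A sketch; note that on some providers the ingress exposes a hostname instead of an IP, in which case the `hostname` field should be queried:

```shell
# Print the external IP assigned to the LoadBalancer service
kubectl get service denodo-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```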

Once this is configured, the Solution Manager’s home menu can be opened and the “Design Studio” button can be selected, which will connect to the Denodo Platform cluster:

The Design Studio menu after connecting to the Denodo Platform running in Kubernetes

Persisting Metadata

It is recommended to configure an external metadata database (see Storing the Metadata on an External Database) to store and share the metadata of the Virtual DataPort server between instances. This simplifies the management of the deployment and persists the metadata between containers automatically. This can be configured using the following environment variables:

  • DENODO_DATABASE_PROVIDER: Adapter name (For example: derby, mysql, oracle, postgresql, sqlserver, azure).
  • DENODO_DATABASE_PROVIDER_VERSION: The version of the above database.
  • DENODO_DATABASE_URI: Database access URI.
  • DENODO_DATABASE_DRIVER: Name of the Java class of the JDBC adapter to be used.
  • DENODO_DATABASE_CLASSPATH: Path for the folder that contains the JAR files with the implementation classes needed by the JDBC adapter.
  • DENODO_DATABASE_USER: Database user name.
  • DENODO_DATABASE_PASSWORD: Database password. If it is not already encrypted, it can be encrypted using the script <DENODO_HOME>/bin/encrypt_password.sh.
  • DENODO_DATABASE_PASSWORD_ENCRYPTED: Whether the provided password is already encrypted. Uses false as default.
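As an illustration, these variables could be added to the `env` section of the container in denodo-service.yaml. The values below are hypothetical: they assume a PostgreSQL database named `denodo_metadata` reachable inside the cluster at `postgresql-service`, and a driver folder path that should be taken from the “VQL” tab of a working data source as described below. Adjust every value to your own database:

```yaml
            - name: DENODO_DATABASE_PROVIDER
              value: "postgresql"
            - name: DENODO_DATABASE_PROVIDER_VERSION
              value: "12"
            - name: DENODO_DATABASE_URI
              value: "jdbc:postgresql://postgresql-service:5432/denodo_metadata"
            - name: DENODO_DATABASE_DRIVER
              value: "org.postgresql.Driver"
            # Hypothetical path; use the CLASSPATH shown for your data source
            - name: DENODO_DATABASE_CLASSPATH
              value: "/opt/denodo/lib/extensions/jdbc-drivers/postgresql-12"
            - name: DENODO_DATABASE_USER
              value: "denodo_user"
            - name: DENODO_DATABASE_PASSWORD
              value: "<password>"
            - name: DENODO_DATABASE_PASSWORD_ENCRYPTED
              value: "false"
```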

Note that more information about the environment variables available in the entrypoint script of the Denodo containers can be found in the Denodo Docker container configuration page.

It is generally easiest to configure these parameters with a working JDBC database connection in Denodo. If the Denodo Platform is started in Kubernetes first without an external metadata database, a connection can be configured to the backend database by selecting “File > New > Data Source” and choosing the correct database type.

After configuring the database with the correct connection parameters (so that “Test Connection” returns a successful result), the parameters to add in the above environment variables will correspond to the properties in the “VQL” tab of the data source (this is necessary since every database will have different connection parameters):

The “VQL” tab of a working data source in the Design Studio

For example, the “DENODO_DATABASE_DRIVER” environment variable value can be taken from the “DRIVERCLASSNAME” attribute above. Note that the encrypted password cannot be copied directly.

Deployments

Deployments are possible from the Solution Manager in Kubernetes clusters without downtime using an external metadata database (for more information about this configuration, see Storing the Metadata on an External Database).

The process that the Solution Manager performs to do this is the following:

  1. The Solution Manager connects to a Virtual DataPort server in the cluster and executes “ENTER PROMOTION MODE”. This is a special mode for the Solution Manager similar to SINGLE USER MODE but for revisions; this locks the server so that only the Solution Manager is making changes and allows for a rollback in case of a failure in the promotions process.
  2. At this point, the Virtual DataPort server will start to fail readiness checks in the cluster and will be removed from the service pool. However, since the liveness check has not failed, it will continue running.
  3. Any remaining queries will complete in the locked server, with new queries being routed to other nodes in the cluster.
  4. Once all the other queries complete, the Solution Manager will execute the revision and open a transaction in the backend database to make metadata changes.
  5. If an error is thrown during the promotion, the transaction will be rolled back, PROMOTION MODE will end, and the server will be unlocked and returned to the cluster.
  6. If the promotion succeeds, the transaction will be committed and all other servers will immediately start to operate with the new metadata, and the locked server will be unlocked and returned to the load balancing pool.

Clean Up

To shut down the Denodo Platform cluster, the following command can be executed:

> kubectl delete -f denodo-service.yaml

Finally, if the Kubernetes resources need to be cleaned up, the following command can be executed to delete the secret that holds the registry credentials:

> kubectl delete secret harbor-registry-secret

Deploying with an Evaluation or Standalone License

License configmap

The Denodo Platform requires a valid license in order to start, so a ConfigMap can be used to load the license file into the container. The following statement creates the ConfigMap with the contents of the license file; it will be referenced later from the denodo-service.yaml file:

$ kubectl create configmap denodo-license --from-file=denodo.lic=<pathToLicenseFile>
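Before wiring the ConfigMap into the deployment, its contents can be verified; the key “denodo.lic” should appear in the data section:

```shell
# Show the keys and sizes stored in the ConfigMap
kubectl describe configmap denodo-license
```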

In order to have the containers reference the created ConfigMap, the YAML file must also be modified.

To disable use of the Solution Manager, the following environment variables should be removed:

            - name: DENODO_LM_PROTO
              value: "http"
            - name: DENODO_LM_HOST
              value: "solution-manager-service"
            - name: DENODO_LM_PORT
              value: "10091"
            - name: DENODO_SSO_PROTO
              value: "http"
            - name: DENODO_SSO_HOST
              value: "solution-manager-service"
            - name: DENODO_SSO_PORT
              value: "19090"
            - name: DENODO_SSO_LOGIN_ENABLED
              value: "true"

And a volume mount should be added at the bottom of the YAML file to mount the previously configured ConfigMap into the container.

          volumeMounts:
            - name: config-volume
              mountPath: /opt/denodo/conf/denodo.lic
              subPath: denodo.lic
      volumes:
        - name: config-volume
          configMap:
            name: denodo-license

If the license ConfigMap is no longer needed, it can be removed with the following command:

> kubectl delete configmap denodo-license

Troubleshooting

Issues can sometimes arise when launching the pods; for instance, the Virtual DataPort server may fail to start because of a license misconfiguration.

The following list includes some common errors that are faced when working with Denodo and Kubernetes:

  • CrashLoopBackOff: this status appears when the pod starts and then crashes repeatedly; this can be caused, for example, by a missing license or a missing ConfigMap. To check what is happening, the simplest approach is to inspect the container logs and to use the parameter terminationMessagePath explained below to get more information about the issue.
  • ErrImagePull: in this case Kubernetes cannot pull the image, due to network issues or an incorrect configuration of the registry credentials. Check that the image name and the pull policy defined in the YAML are correct.
  • ImagePullBackOff: another problem related to the pull operation of the images; it appears when Kubernetes cannot find the image in the local or remote registry. This often happens because the Denodo Docker image has not been loaded locally with a docker load, so Kubernetes cannot run the container.
  • RunContainerError: usually obtained when deploying a YAML file with configuration errors; executing kubectl describe pod will help diagnose the issue.
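When any of these statuses appears, a typical first step is to inspect the affected pod and the recent cluster events. For example (the pod name is a placeholder):

```shell
# Overall status of the pods in the current namespace
kubectl get pods

# Detailed state, container statuses and recent events for one pod
kubectl describe pod <denodo-deployment-pod>

# Cluster events sorted by time; pull and scheduling errors show up here
kubectl get events --sort-by=.metadata.creationTimestamp
```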

To debug runtime errors, the entrypoint script configures all of the Denodo log files to be routed to standard output. This means that the logging generated by the container can be viewed by running the following command:

> kubectl logs -f <denodo-deployment-pod>

Note that the “-f” option above follows the output of the container and continues returning logs to the terminal.

References

Denodo Platform Container QuickStart Guide

Kubernetes Documentation

Learn Kubernetes Basics

Kubernetes on Docker

Disclaimer
The information provided in the Denodo Knowledge Base is intended to assist our users in advanced uses of Denodo. Please note that the results from the application of processes and configurations detailed in these documents may vary depending on your specific environment. Use them at your own discretion.
For an official guide of supported features, please refer to the User Manuals. For questions on critical systems or complex environments we recommend you to contact your Denodo Customer Success Manager.
