Deploying Denodo in Kubernetes

Applies to: Denodo 7.0
Last modified on: 02 May 2019
Tags: Kubernetes Cloud Administration Cluster configuration Docker


Goal

The Denodo Platform can be containerized and run on a container platform such as Docker. Containers ease the adoption of modern architectures such as microservices, which can be orchestrated with Kubernetes.

This document explains how to deploy Denodo containers using Kubernetes.

Content

Kubernetes is an orchestration system for containers. It allows IT teams not only to manage the deployment of containerized applications but also to scale a deployment by increasing its number of replicas. Kubernetes can also perform other actions, such as rolling out a new version of the containers in a deployment.

This document will make use of the Kubernetes command line interface (kubectl), which interacts with the Kubernetes cluster through the Kubernetes API. The article assumes that Docker and Kubernetes are already installed and working, with Kubernetes enabled in Docker so that a single-node cluster starts together with Docker. It is also recommended to follow the Denodo Platform Container QuickStart Guide first, as it verifies that the Denodo container runs successfully in the Docker installation. Denodo users who are new to Kubernetes can also check the Kubernetes Basics tutorial to learn the basic concepts and usage of Kubernetes.

The following YAML configuration file defines a Kubernetes Service and a Deployment that will deploy the Denodo Platform container in the Kubernetes cluster:

apiVersion: v1
kind: Service
metadata:
  name: denodo-service
spec:
  selector:
    app: denodo-app
  ports:
  - name: svc-rmi-r
    protocol: "TCP"
    port: 8999
    targetPort: jdbc-rmi-rgstry
  - name: svc-rmi-f
    protocol: "TCP"
    port: 8997
    targetPort: jdbc-rmi-fctory
  - name: svc-odbc
    protocol: "TCP"
    port: 8996
    targetPort: odbc
  - name: svc-web
    protocol: "TCP"
    port: 8090
    targetPort: web-container
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: denodo-deployment
spec:
  selector:
    matchLabels:
      app: denodo-app
  replicas: 1
  template:
    metadata:
      labels:
        app: denodo-app
    spec:
      hostname: denodo-hostname
      containers:
      - name: denodo-container
        image: denodo-platform:7.0-latest
        command: ["./denodo-container-start.sh"]
        args: ["--vqlserver"]
        env:
        - name: FACTORY_PORT
          value: "8997"
        ports:
        - name: jdbc-rmi-rgstry
          containerPort: 9999
        - name: jdbc-rmi-fctory
          containerPort: 8997
        - name: odbc
          containerPort: 9996
        - name: web-container
          containerPort: 9090
        volumeMounts:
        - name: standalone-license
          mountPath: /opt/denodo/conf/denodo.lic
          readOnly: true
      volumes:
      - name: standalone-license
        hostPath:
          path: /C/Denodo/Licenses/denodo_7_0.lic

denodo-service.yaml

To create both elements in the Kubernetes environment, save the script to a file named denodo-service.yaml. Then, in a console, execute the following Kubernetes command:

> kubectl create -f denodo-service.yaml

Execution of denodo-service.yaml

Although this article does not try to explain how Kubernetes works or how YAML files are created, it is worth outlining the following points from the YAML definition:

  • The service exposes four ports: 8999, 8997, 8996 and 8090. These are the ports published by the service, and each is mapped to a port in the container. This mapping allows, for example, the use of port 9090 in the container while publishing it as 8090 in the service. The same applies to the other ports, with one exception: the RMI factory port (8997 in the example) has to be the same in both the Pod and the Service, because the internal workings of RMI connections do not allow this port to be mapped. If the RMI factory port has to be changed, it must be changed at the Pod level (with the appropriate container options).
  • The YAML file assumes that a standalone license (such as an evaluation or Denodo Express license) is used, so the host license file C:/Denodo/Licenses/denodo_7_0.lic is mapped to the file /opt/denodo/conf/denodo.lic in the container, which is used as the Pod license file. Please update the host path to point to your actual license file, or read the following sections for instructions on using licenses managed by the Denodo Solution Manager.
  • All the Denodo pods will share the hostname denodo-hostname, as this is the chosen way to configure RMI appropriately with the standard Denodo container. The RMI configuration is performed automatically, based on this hostname, by the startup script included in the container.
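To illustrate the point about the RMI factory port, the following sketch (using a hypothetical new factory port, 9997) shows the three places that must stay in sync when the port is changed: the Service port, the FACTORY_PORT environment variable, and the containerPort of the Pod:

```yaml
# Sketch only: fragments of the Service and Deployment above, with the
# RMI factory port changed to the hypothetical value 9997. RMI does not
# allow this port to be mapped, so the same number must appear in all
# three places.
#
# Service fragment:
  - name: svc-rmi-f
    protocol: "TCP"
    port: 9997                   # must equal the containerPort below
    targetPort: jdbc-rmi-fctory
#
# Deployment container fragment:
        env:
        - name: FACTORY_PORT
          value: "9997"          # makes the Denodo server use 9997 as factory port
        ports:
        - name: jdbc-rmi-fctory
          containerPort: 9997    # must equal the Service port above
```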

After the Service deployment is completed, it will be possible to connect to the new Denodo instance from a Virtual DataPort client such as the Virtual DataPort Administration Tool with the following URL:

//denodo-hostname:8999/admin

In addition, notice that in order to use denodo-hostname from your client computers, each client must be able to resolve that hostname to the IP address of denodo-service, either by defining denodo-hostname in the DNS server or in the hosts file of the client computer. Also, ensure that there is network connectivity from the client computer to the Kubernetes Service by configuring the network routes appropriately.

Connecting to the Virtual DataPort Server within the Kubernetes cluster

The following command will provide all the information required regarding the Kubernetes Service that is deployed:

> kubectl describe service denodo-service

Name:                     denodo-service
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=denodo-app
Type:                     LoadBalancer
IP:                       10.107.241.122
LoadBalancer Ingress:     localhost
Port:                     svc-rmi-r  8999/TCP
TargetPort:               jdbc-rmi-rgstry/TCP
NodePort:                 svc-rmi-r  31604/TCP
Endpoints:                10.1.0.34:9999
Port:                     svc-rmi-f  8997/TCP
TargetPort:               jdbc-rmi-fctory/TCP
NodePort:                 svc-rmi-f  30033/TCP
Endpoints:                10.1.0.34:8997
Port:                     svc-odbc  8996/TCP
TargetPort:               odbc/TCP
NodePort:                 svc-odbc  30049/TCP
Endpoints:                10.1.0.34:9996
Port:                     svc-web  8090/TCP
TargetPort:               web-container/TCP
NodePort:                 svc-web  31911/TCP
Endpoints:                10.1.0.34:9090
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Output of kubectl describe service denodo-service

Finally, if the environment needs to be cleaned up, the following command deletes the Denodo resources deployed in Kubernetes:

> kubectl delete -f denodo-service.yaml

Licenses Managed by a Denodo Solution Manager

The Denodo Solution Manager helps large organizations manage Denodo deployments in an easy way. The script provided above to create the Kubernetes deployment makes use of a standalone license. However, it is possible to modify it so that the license information is retrieved from a License Server instead of pointing to a license file. To do this, make the following replacement in the script file:

Replace the last part of the file:

        volumeMounts:
        - name: standalone-license
          mountPath: /opt/denodo/conf/denodo.lic
          readOnly: true
      volumes:
      - name: standalone-license
        hostPath:
          path: /C/Denodo/Licenses/denodo_7_0.lic

with

        volumeMounts:
        - name: solution-manager-conf
          mountPath: /opt/denodo/conf/SolutionManager.properties
          readOnly: true
      volumes:
      - name: solution-manager-conf
        hostPath:
          path: /C/Denodo/SolutionManager.properties

The Solution Manager configuration file SolutionManager.properties should point to the Solution Manager server, with a configuration like the one shown here:

# License Manager Configuration
com.denodo.license.host=solutionmanager-hostname
com.denodo.license.port=10091

The hostname solutionmanager-hostname points to the Solution Manager server and must be resolvable from the pods. In addition, on the Solution Manager side, it will be necessary to create the Kubernetes environment with the Virtual DataPort servers, setting the Host parameter to denodo-hostname.
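The hostPath volume used above requires the SolutionManager.properties file to be present on the Kubernetes node itself. As an alternative sketch (not part of the original procedure), the file can be distributed with a standard Kubernetes ConfigMap, which makes it available to the pods regardless of the node on which they run:

```yaml
# Sketch: a ConfigMap holding the Solution Manager configuration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: solution-manager-conf
data:
  SolutionManager.properties: |
    # License Manager Configuration
    com.denodo.license.host=solutionmanager-hostname
    com.denodo.license.port=10091
```

The Deployment would then declare the solution-manager-conf volume with a configMap source (referencing the ConfigMap name) instead of hostPath, and mount the single file by adding subPath: SolutionManager.properties to the volumeMount.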

References

Denodo Platform Container QuickStart Guide

Kubernetes Documentation

Learn Kubernetes Basics

Kubernetes on Docker
