
Failed setting up Kubernetes Persistent Volume with Denodo Yaml configuration

Dear Denodo Community,

I found that the standard mount volumes described above are not persistent for us. When we add a jar library (e.g. JCO, the SAP Java Connector) to Denodo Design Studio, a restart is required, which terminates the Kubernetes pods and restarts them. Even though I configured a volume mounted to /opt/denodo/metadata as described above, all configuration is lost. Therefore I tried to set up a Kubernetes PVC (persistent volume claim) to replace the standard volume.

**pvc.yaml**
```
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: denodo-pvc
  namespace: ns-denodo-int
  labels:
    app: denodo-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

**denodo-deployment.yaml**
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: denodo-deployment
[...]
      initContainers:
        - name: init-volume
          image: registry.app.corpintra.net/denodo/denodo-platform:8.0-latest
          command: ["/bin/sh", "-c", 'if [ -z "$(ls -A /tmp)" ]; then cp -R /opt/denodo/metadata/* /tmp/; fi']
          volumeMounts:
            - name: metadata-volume-pvc
              mountPath: /tmp
      containers:
        - name: denodo-container
          image: registry.app.corpintra.net/denodo/denodo-platform:8.0-latest
          command: ["/opt/denodo/tools/container/entrypoint.sh"]
          args: ["--vqlserver", "--vdpserver", "--designstudio", "--datacatalog"]
          [...]
          volumeMounts:
            - name: config-volume
              mountPath: /opt/denodo/conf/denodo.lic
              subPath: denodo.lic
            - name: metadata-volume-pvc
              mountPath: /opt/denodo/metadata/
      volumes:
        - name: config-volume
          configMap:
            name: denodo-license
        - name: metadata-volume-pvc
          persistentVolumeClaim:
            claimName: denodo-pvc
```

Unlike the standard volume from the configuration in the article above, using this Kubernetes PVC configuration results in an empty folder /opt/denodo/metadata, i.e. the init container is not able to copy the data to the persistent volume.
Also, the folder permissions of the mounted "/opt/denodo/metadata/" folder are only "**drwxr-xr-x**" (owned by root) when using the Kubernetes persistent volume. When using the standard volume as described in the article above, the permissions of "/opt/denodo/metadata/" are "drwxrwxrwx" (full access rights), and at least Denodo starts and runs fine (but there is no persistence after killing and restarting the pod).

In the end I am getting log errors such as:
```
Caused by: java.sql.SQLException: Failed to create database '/opt/denodo/metadata/data-catalog-database', see the next exception for details.
Could not get Connection for extracting meta-data; nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is java.sql.SQLException: Failed to create database '/opt/denodo/metadata/data-catalog-database', see the next exception for details.
    at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.onRefresh(ServletWebServerApplicationContext.java:163) ~[spring-boot-2.5.12.jar:2.5.12]
org.springframework.jdbc.support.MetaDataAccessException: Could not get Connection for extracting meta-data; nested exception is org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is java.sql.SQLException: Failed to create database '/opt/denodo/metadata/data-catalog-database', see the next exception for details.
```

Maybe the problem is related to the folder permissions of the mounted persistent volume, which differ from the standard Docker volume approach above?
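Could the fix be a pod-level `securityContext`? As a sketch of what I have in mind (the UID/GID 1000 here is an assumption on my side; the actual user the denodo-platform image runs as may differ):

```yaml
# Sketch only: pod-level securityContext in the Deployment's pod template.
# fsGroup tells Kubernetes to chown the mounted PVC to this group and make
# it group-writable, so a non-root process can write to the mount.
# UID/GID 1000 is an assumption; verify the user of the Denodo image.
spec:
  template:
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      containers:
        - name: denodo-container
          [...]
```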
user
10-05-2023 10:24:01 -0400

1 Answer

Hi,

The "Failed to create database" errors that you face are due to the empty metadata folder '/opt/denodo/metadata'. I recommend you store your metadata in an external database and share it across your pods. You should follow the steps in the [Storing the Metadata on an External Database](https://community.denodo.com/docs/html/browse/8.0/en/vdp/administration/server_configuration/storing_catalog_on_external_database/storing_catalog_on_external_database) document in the Virtual DataPort Administration Guide.

Additionally, I recommend you ensure that your Virtual DataPort server is not only responsive but ready to execute queries, meaning it is not in single-user/promotion mode. In such a scenario I would use the ping script with the "-r" option in the readinessProbe and without it in the livenessProbe, for example:

```
readinessProbe:
  exec:
    command:
      - /opt/denodo/bin/ping.sh
      - -t 20000
      - -r //localhost:9999
livenessProbe:
  exec:
    command:
      - /opt/denodo/bin/ping.sh
      - -t 20000
      - //localhost:9999
```

In case you need further assistance, and if you have a valid Support Account, you can reach out to Denodo Support by raising a support case on the [Denodo Support Site](https://support.denodo.com/) and the Support Team will assist you.

Hope this helps!
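For context, these probes go under the container entry of your Deployment's pod template; a minimal sketch reusing the container name and image from your manifest (the `initialDelaySeconds`/`periodSeconds` values are assumptions to tune to your startup time):

```yaml
# Sketch: probe placement inside the Deployment's pod template.
containers:
  - name: denodo-container
    image: registry.app.corpintra.net/denodo/denodo-platform:8.0-latest
    readinessProbe:
      exec:
        command:
          - /opt/denodo/bin/ping.sh
          - -t 20000
          - -r //localhost:9999   # -r: only succeed when ready for queries
      initialDelaySeconds: 120    # assumption: adjust to your startup time
      periodSeconds: 30
    livenessProbe:
      exec:
        command:
          - /opt/denodo/bin/ping.sh
          - -t 20000              # no -r: a responsive server is enough
          - //localhost:9999
      initialDelaySeconds: 120
      periodSeconds: 30
```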
Denodo Team
23-05-2023 11:59:16 -0400