Troubleshooting Deployment

This section provides information on how to resolve the most common problems during the deployment of the Denodo Embedded MPP.

To identify and troubleshoot common errors during the deployment of the Denodo Embedded MPP and during its use from Denodo Virtual DataPort, see Denodo Embedded MPP Troubleshooting.


Cannot list resource “pods” in API group “” in the namespace “kube-system”

Cause

Helm v2 is being used.

Solution

Upgrade to Helm v3.
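
To check which Helm version your client is using before upgrading, you can run the following (the output format varies slightly between Helm builds):

  helm version --short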

kubectl get pods shows a pod with status ErrImagePull or ImagePullBackOff

Cause

This error occurs when Kubernetes cannot retrieve the image from the container image registry.

Solution

Run the kubectl describe pod <pod name> command and check the Events section for errors related to image pulling.
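
For example, replacing the placeholders with the values of your deployment:

  kubectl describe pod <pod name> -n <namespace>
  # recent events of the namespace, most recent last
  kubectl get events -n <namespace> --sort-by=.lastTimestamp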

One of the most common errors is Authorization failed. It means that Kubernetes does not have the credentials to access the image. To resolve this, add the credentials of your container image registry by setting pullSecret or pullCredentials in the values.yaml. If you use pullCredentials, make sure enabled is set to true.
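
A minimal sketch of both options in the values.yaml; field names other than pullSecret, pullCredentials and enabled are placeholders, and the exact structure may differ, so check the Container Image Registry Credentials section for the authoritative schema:

  # Option 1: reference an existing Kubernetes image pull secret
  pullSecret: <name of existing secret>

  # Option 2: provide the registry credentials so the chart creates the secret
  pullCredentials:
    enabled: true
    registry: <registry host>
    username: <registry user>
    password: <registry password>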

If the credentials are correct and the container image registry is Denodo Harbor, whose credentials expire every 6 months, log in to Denodo Harbor with the same user configured in the image.repository section of the values.yaml to refresh those credentials.
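
One way to verify whether the Harbor credentials are still valid is to attempt a manual login against the registry host used in image.repository (the host and user below are placeholders):

  docker login <harbor registry host> -u <user configured in image.repository>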

For more information on how to configure the container image registry credentials, see the Container Image Registry Credentials section.

kubectl get pods shows the PostgreSQL pod in Pending status

Cause

This error occurs when the pod could not be scheduled onto a node.

Solution

Run the kubectl describe pod <pod name> command and check the Events section for errors related to pod scheduling.
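
Because the PostgreSQL pod requests persistent storage, a Pending status is often caused by a PersistentVolumeClaim that cannot be bound; you can check both the pod and its claim (placeholders as before):

  kubectl describe pod <postgresql pod name> -n <namespace>
  # a claim stuck in Pending usually means the volume could not be provisioned
  kubectl get pvc -n <namespace>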

One common error is that the cluster is deployed in Amazon EKS with Kubernetes version >= 1.23 and the Amazon EBS CSI driver add-on is not installed. To resolve this, install the Amazon EBS CSI driver EKS add-on and attach the AmazonEBSCSIDriverPolicy policy (see the example commands after this list) to:

  • the worker node roles of the cluster

  • the cluster’s ServiceRole.
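
A minimal sketch using the AWS CLI, with placeholder names for the cluster and the IAM roles (eksctl or the AWS console can be used instead):

  # install the Amazon EBS CSI driver add-on
  aws eks create-addon --cluster-name <cluster name> --addon-name aws-ebs-csi-driver

  # attach the managed policy to the worker node role and to the cluster ServiceRole
  aws iam attach-role-policy --role-name <worker node role> \
      --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy
  aws iam attach-role-policy --role-name <cluster service role> \
      --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy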

kubectl get pods shows the Hive metastore pod in CrashLoopBackOff status

Cause

This error occurs when a pod fails to start for some reason and repeatedly crashes.

Solution

Run the kubectl logs <hive-metastore pod> --previous command and check whether the log contains this error:

hive-metastore crash in FIPS mode
 Caused by: java.security.KeyStoreException: jceks not found
 ...
 java.security.NoSuchAlgorithmException: jceks KeyStore not available

This means the Kubernetes cluster is running in FIPS mode (Federal Information Processing Standard). Please contact the Denodo Support Team for assistance with this issue.
