Troubleshooting Deployment¶
This section provides information on how to resolve the most common problems during the deployment of the Denodo Embedded MPP.
To identify and troubleshoot common errors during the deployment of the Denodo Embedded MPP and its use from Denodo Virtual DataPort, see Denodo Embedded MPP Troubleshooting.
Cannot list resource “pods” in API group “” in the namespace “kube-system”
- Cause
Helm v2 is being used.
- Solution
Upgrade to Helm v3.
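Before retrying the deployment, you can confirm which Helm version is installed. A minimal check (the version strings shown are examples only):

```shell
# Check the Helm client version. Helm v2 relied on the Tiller server-side
# component, which triggers this RBAC error on modern clusters; Helm v3 does not.
HELM_VERSION=$(helm version --short 2>/dev/null)
case "$HELM_VERSION" in
  v3*) echo "Helm v3 detected: $HELM_VERSION" ;;
  *)   echo "Helm v3 is required, found: ${HELM_VERSION:-not installed}" ;;
esac
```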
kubectl get pods shows a pod with status ErrImagePull or ImagePullBackOff
- Cause
This error occurs when it has not been possible to retrieve the image from the container image registry.
- Solution
Run the kubectl describe pod <pod name> command and check the Events section for errors related to image pulling. Here are some of the most frequent errors you might encounter and how to fix them:

- Authorization failed: It indicates that Kubernetes does not have credentials to access the container image. To resolve this, provide the credentials of your container image registry by setting pullSecret or pullCredentials in values.yaml. If you are using pullCredentials, make sure enabled is set to true. For detailed instructions on how to configure these credentials, check the Container Image Registry Credentials section.
- Denodo Harbor Credentials expired: If the credentials are right and the container image registry is Denodo Harbor, remember the credentials expire every 6 months. Log in to Denodo Harbor using the same user configured in the image.repository section of the values.yaml to refresh them, then try pulling the image again.
- read tcp hostA:portA -> hostB:portB: read: connection reset by peer: If you get this error and the container image registry is Denodo Harbor, it means the firewall is blocking the connection. You need to enable both the Denodo Harbor Registry and its associated S3 endpoints on the firewall. This is because the Denodo Harbor Registry uses S3 as its storage backend.
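As an illustration of the Authorization failed case, a registry pull secret can be created with kubectl and then referenced via pullSecret in values.yaml. The secret name, registry server, and credentials below are placeholders; substitute your own:

```shell
# Inspect the failing pod's events to confirm the image-pull error
# (replace <pod name> with the actual pod name).
kubectl describe pod <pod name>

# Create a docker-registry pull secret in the deployment namespace.
# The secret name, server, user, and password are placeholders.
kubectl create secret docker-registry my-pull-secret \
  --docker-server=<registry server> \
  --docker-username=<user> \
  --docker-password=<password>
```

The secret name chosen here (my-pull-secret) would then be the value set for pullSecret in values.yaml.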
kubectl get pods shows PostgreSQL pod in Pending status
- Cause
This error occurs when it has not been possible to schedule the pod onto a node.
- Solution
Run the kubectl describe pod <pod name> command and check the Events section for errors related to pod scheduling.

One common error is that the cluster is deployed in Amazon EKS with Kubernetes version >= 1.23 and the Amazon EBS CSI driver add-on is not installed. To resolve this, install the Amazon EBS CSI driver EKS add-on and attach the AmazonEBSCSIDriverPolicy policy to:

- the worker nodes' roles for the cluster
- the cluster's ServiceRole
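The steps above can be sketched with the AWS CLI. The cluster name, region, and role names are placeholders for your environment:

```shell
# Install the Amazon EBS CSI driver add-on on the EKS cluster
# (cluster name and region are placeholders).
aws eks create-addon \
  --cluster-name my-eks-cluster \
  --addon-name aws-ebs-csi-driver \
  --region us-east-1

# Attach the AWS-managed policy to the worker node role and to the
# cluster's service role (role names are placeholders).
aws iam attach-role-policy \
  --role-name my-eks-node-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy

aws iam attach-role-policy \
  --role-name my-eks-cluster-service-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy
```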
kubectl get pods shows Hive metastore pod in CrashLoopBackOff status
- Cause
This error occurs when a pod fails to start for some reason and repeatedly crashes.
- Solution
Run the kubectl logs <hive-metastore pod> --previous command and check if the log contains this error:

hive-metastore crash in FIPS mode¶

Caused by: java.security.KeyStoreException: jceks not found
...
java.security.NoSuchAlgorithmException: jceks KeyStore not available
This means the Kubernetes cluster is running in FIPS mode (Federal Information Processing Standard). Please contact the Denodo Support Team for assistance with this issue.
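A quick way to confirm this case is to search the previous container's log for the keystore error (the pod name is a placeholder):

```shell
# Fetch the logs of the previously crashed container and look for the
# FIPS-related jceks keystore error (replace the pod name with your own).
kubectl logs hive-metastore-0 --previous | grep -i "jceks"
```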
