The presto.autoscaling section of the values.yaml file configures the horizontal autoscaling of the Denodo Embedded MPP. Horizontal scaling means that the response to an increase in load is to deploy more pod replicas. If the load decreases and the number of pods is above the configured minimum, the HorizontalPodAutoscaler (HPA) instructs the workload resource to scale down. It is disabled by default:

horizontal autoscaling
    autoscaling:
        enabled: false
        maxReplicas: 20
        targetCPUUtilizationPercentage: 80
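
For example, to turn on the autoscaler you could override this section in your values.yaml and upgrade the release. This is only a minimal sketch: the maxReplicas value is illustrative, and the Helm release name and chart directory are placeholders for your own deployment:

    presto:
        autoscaling:
            enabled: true                        # enable the HorizontalPodAutoscaler for the Workers
            maxReplicas: 10                      # upper bound on the number of Worker replicas
            targetCPUUtilizationPercentage: 80   # average CPU utilization the HPA tries to maintain

    helm upgrade <release-name> <chart-directory> -f values.yaml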

There are two requirements to ensure the HorizontalPodAutoscaler (HPA) works:

  1. A monitoring tool that provides metrics to Kubernetes is installed: Metrics Server, Prometheus, etc.

  2. The Worker has CPU resource requests, presto.worker.resources, defined in the values.yaml file. If the resource requests are not configured, the autoscaler (HPA) will not take any action.

    Note that resources are not configured by default, as we leave this configuration as a choice for the Kubernetes cluster administrator. Consider, for example, presto.worker.resources.requests.cpu. This setting represents the amount of CPU reserved for the pod. We recommend setting the value to 80% of the total number of CPUs of a Worker node as a starting point. So for 32-core nodes you can set the value to 25.6 or 25600m, both of which represent 25.6 cores:

MPP Worker resources
    resources:
        requests:
            cpu: 25600m

This can then be adjusted as needed. The autoscaler will use this value along with the targetCPUUtilizationPercentage to determine if a new Worker is needed.
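
For reference, the scaling decision follows the standard Kubernetes HPA formula, desiredReplicas = ceil[currentReplicas x (currentCPUUtilization / targetCPUUtilization)]. As an illustrative example, with targetCPUUtilizationPercentage: 80 and 2 Workers averaging 120% CPU utilization relative to their requests, the HPA computes ceil(2 x 120 / 80) = 3 and adds a third Worker.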

You can check the current status of the HorizontalPodAutoscaler by running:

kubectl get hpa

NAME            REFERENCE                   TARGETS   MINPODS   MAXPODS   REPLICAS
presto-worker   Deployment/presto-worker   22%/80%   2         6         2
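
If the TARGETS column shows <unknown> instead of a percentage, the HPA is not receiving CPU metrics, which usually means one of the two requirements above is not met. Assuming the HPA is named presto-worker, as in the output above, you can inspect its conditions and recent scaling events with:

kubectl describe hpa presto-worker

and, if Metrics Server is the metrics source, confirm that pod metrics are available with:

kubectl top pods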

It is strongly recommended to use pod-based autoscaling together with node-based autoscaling, so that the scaling of the pods is coordinated with the behavior of the nodes in the cluster, since our recommendation is to run a single node for each MPP Worker. This way, when you need to scale up, the cluster autoscaler can add new nodes, and when scaling down, it can shut down nodes that are no longer needed.
