Configuring Autoscaling of Denodo in AWS

Applies to: Denodo 7.0
Last modified on: 10 Sep 2019
Tags: Cloud Solution Manager AWS Cluster configuration Administration


Introduction

Solution Manager provides, among other features, a flexible way to license the different Denodo Platform servers.

In cloud environments (like AWS), a typical use case is to rely on auto scaling capabilities to increase or decrease server capacity according to a set of rules (time intervals, CPU or memory usage, etc.). When a new instance is launched, the servers started automatically by its startup scripts need to be registered in the Solution Manager in order to get a working license. This document shows, step by step, how to automatically register and deregister nodes in the Solution Manager when using auto scaling in AWS.

Overview

The following image illustrates the auto scaling lifecycle in AWS when instances are dynamically launched or stopped:

It is possible to add a lifecycle hook when an instance is launched or terminated, in order to execute custom code, for example with a lambda function.

Scale-out process needs

When a new instance is launched it is necessary to register the new server in the Solution Manager catalog, so the Virtual DataPort server can get a working license and start correctly.

Ideally, the servers could register themselves at startup time to avoid manual registration. This could be done:

  1. in a startup script; or
  2. in a lambda function executed during EC2_INSTANCE_LAUNCHING lifecycle hook.

In both cases, the script / code executed will need to:

  • Log in against the Solution Manager server (/login endpoint).
  • Invoke the /servers endpoint with a POST operation containing the server data. The server name will be the instance id of the instance.
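The two calls above can be sketched in Python. This is a minimal, hypothetical outline, assuming a JSON API and a cookie-based session; the payload field names and authentication details are assumptions that must be checked against the Solution Manager REST API (only the endpoint paths and the instance-id-as-name rule come from this document):

```python
import json
import urllib.request

# Assumed Solution Manager location; take these from your configuration.
SM_BASE_URL = "http://localhost:10090"

def build_register_payload(instance_id, private_ip):
    """Build the body for the POST to /servers.

    The field names here are illustrative assumptions; only the rule
    that the server name must be the instance id comes from the text.
    """
    return {
        "name": instance_id,  # server name = EC2 instance id
        "host": private_ip,
        "port": 9999,         # Virtual DataPort default port
    }

def register_server(session_cookie, instance_id, private_ip):
    """POST the new server to the Solution Manager catalog (sketch)."""
    body = json.dumps(build_register_payload(instance_id, private_ip)).encode()
    req = urllib.request.Request(
        SM_BASE_URL + "/servers",
        data=body,
        headers={"Content-Type": "application/json", "Cookie": session_cookie},
        method="POST",
    )
    return urllib.request.urlopen(req)
```

The login against /login (which would produce the session used in `session_cookie`) is omitted for brevity.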

Scale-in process needs

When an auto scaling group terminates an instance, the instance is killed abruptly (it is not a normal shutdown), so the server may not stop cleanly and its license usage may not be released. It is therefore necessary to free the license usage for the server and delete the server from the Solution Manager catalog. These operations can be executed in a terminate instance lifecycle hook.

In order to perform the desired operations, a lambda function will execute the following operations:

  • Log in against the Solution Manager server (/login endpoint).
  • Invoke the /servers/deleteCloudServer endpoint with the instance id. This operation tries to find a server whose name corresponds to the given instance id. If the server exists, it is deleted; in addition, if the server has an active license usage, that license usage is released.
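A minimal handler for this lambda could look like the sketch below. The event shape (`detail.EC2InstanceId`) matches the test event shown later in this document; the HTTP verb and URL layout of the deleteCloudServer call are assumptions to be verified against the Solution Manager REST API:

```python
import urllib.request

# Assumed Solution Manager address; take it from your configuration.
SM_BASE_URL = "http://localhost:10090"

def extract_instance_id(event):
    """Pull the instance id out of the auto scaling CloudWatch event."""
    return event["detail"]["EC2InstanceId"]

def lambda_handler(event, context):
    """Release the license usage and delete the server whose name
    matches the terminated instance id (sketch; login omitted)."""
    instance_id = extract_instance_id(event)
    # Hypothetical call shape; verify the real verb and parameters
    # in the Solution Manager REST API documentation.
    req = urllib.request.Request(
        SM_BASE_URL + "/servers/deleteCloudServer/" + instance_id,
        method="DELETE",
    )
    return urllib.request.urlopen(req).status
```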

Configure the scale-in process

In this section we describe how to configure the scale-in process. The sequence of steps we will follow is:

  • Download and configure the script that will be executed when an instance is terminated.
  • Create / Import a lambda function that will release the license for the terminated instance and will delete the instance from the Solution Manager catalog.
  • Create a CloudWatch event that will redirect termination events for the auto scaling group to the lambda created in the previous step.
  • Add an EC2_INSTANCE_TERMINATING lifecycle hook to the auto scaling group to trigger the event.

Terminate instances script and configuration

A deployment package is a ZIP archive that contains function code and dependencies. The deployment package of the lambda function contains:

  • terminate_instance.py script: deletes a server from the Solution Manager catalog and, if the server was up and running, releases the corresponding license usage entry.
  • TerminateInstanceConfiguration.properties file: configuration file.
  • Other libraries / dependencies needed to run the script.

You can download the deployment package of this lambda function here: Terminate Instance Package

Unzip the deployment package in a folder.

Edit the TerminateInstanceConfiguration.properties file to configure the necessary data to access the Solution Manager. For instance:

com.denodo.sm.host=localhost
com.denodo.sm.port=10090
com.denodo.sm.user=username
com.denodo.sm.password=clearOrEncryptedPassword
com.denodo.sm.sslEnabled=false

Where:

  • com.denodo.sm.host: IP address of the Solution Manager server.
  • com.denodo.sm.port: port of the Solution Manager server (not the License Manager server port).
  • com.denodo.sm.user: user to authenticate against the Solution Manager server.
  • com.denodo.sm.password: password to authenticate against the Solution Manager server. The password value can be encrypted using the “encrypt_password” script available in the bin folder of a Denodo Platform installation.
  • com.denodo.sm.sslEnabled: set this property to “true” if you have SSL/TLS enabled in the Solution Manager server.
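The lambda has to read these properties at runtime. Since the file is a plain key=value list, a small parser such as the following sketch (the helper name is ours, not part of the package) is enough to turn it into the base URL used for the REST calls:

```python
def load_properties(text):
    """Parse simple key=value .properties content into a dict,
    ignoring blank lines and '#' comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

sample = """
com.denodo.sm.host=localhost
com.denodo.sm.port=10090
com.denodo.sm.sslEnabled=false
"""
config = load_properties(sample)
scheme = "https" if config["com.denodo.sm.sslEnabled"] == "true" else "http"
base_url = "%s://%s:%s" % (scheme, config["com.denodo.sm.host"], config["com.denodo.sm.port"])
# base_url is "http://localhost:10090" for the sample above
```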

When the configuration is ready, zip all the files inside the deployment package folder.

Now we need to create the lambda function in AWS and import the deployment package.

Create Lambda function

The documentation regarding lambda functions is available here.

There are several approaches to work with lambda functions. To test simple scripts, the easiest way is to use the lambda console:

  1. Open the lambda console and click the “Create a function” option (this example link uses the eu-central-1 region; make sure to select your region).

  2. Select the “Author from scratch” option to create a lambda function and select the Python 3.6 runtime.

  3. In the “Permissions” section, leave the default option. AWS will create and assign a role with the basic permissions to execute the lambda function.

Import Lambda function

Once the basic lambda function is created, we will update it with the zip file that we created in the previous step.

In order to import the deployment package of the lambda function, select the “Upload a .zip file” option and then choose the zip file on your machine.

Update the “Handler” field to terminate_instance.lambda_handler.

Press the “Save” button to upload the function code contained in the zip file.

Test the lambda function

The script defines a “lambda_handler” function to handle termination events for instances.

  1. It is possible to configure test events to execute the lambda with a specific event input. You can test that the lambda function works correctly and has access to the Solution Manager server. Paste the following JSON in the event content:

{
    "detail": {
        "EC2InstanceId": "i-xxxxxxxxxxxxx"
    }
}

  2. To execute the test, select the event to simulate and press the “Test” button. You can check the execution result of the function in the execution result output and in the logs stored in CloudWatch by clicking the “logs” option. With the example JSON, the execution should fail if the instance “i-xxxxxxxxxxxxx” is not registered in the Solution Manager server.

Create CloudWatch Event

We will use CloudWatch events to invoke the lambda function every time the auto-scaling group terminates an instance.

You can read more about the lifecycle hooks and notifications possibilities here.

In order to create and configure a CloudWatch event to invoke the desired lambda function during instance termination, you can follow the next steps:

  1. Open the CloudWatch console.
  2. Create a new rule in “Events” - “Rules”.
  3. In “Event Source”, select “Event Pattern”.
  4. Select the “Events by service” option.

  5. Select the “Auto Scaling” service and the “Instance Launch and Terminate” event type. Select the 3 terminate instance events and the desired auto scaling group as shown below.

        Alternatively, we could also create the event by editing the “Event Pattern” textarea with the following JSON (changing the AutoScalingGroupName attribute to the name of the corresponding auto scaling group):

{
  "source": ["aws.autoscaling"],
  "detail-type": [
    "EC2 Instance Terminate Successful",
    "EC2 Instance Terminate Unsuccessful",
    "EC2 Instance-terminate Lifecycle Action"
  ],
  "detail": {
    "AutoScalingGroupName": ["ASG-LambdaTest"]
  }
}

  6. Add the Lambda function as a target of the event rule: click the “Add target” option and select the lambda function.

  7. Click the “Configure details” option at the bottom and fill in the name and description of the rule. Leave the state checkbox “Enabled” and click the “Create rule” option.
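To see why the rule fires only for the selected events, the matching that CloudWatch applies can be approximated in a few lines of Python: every key in the pattern must exist in the event, a list means “any of these values”, and nested objects are matched recursively. This is a simplified illustration, not the full CloudWatch event-pattern semantics:

```python
# Event pattern from this section (terminate events for one auto scaling group).
PATTERN = {
    "source": ["aws.autoscaling"],
    "detail-type": [
        "EC2 Instance Terminate Successful",
        "EC2 Instance Terminate Unsuccessful",
        "EC2 Instance-terminate Lifecycle Action",
    ],
    "detail": {"AutoScalingGroupName": ["ASG-LambdaTest"]},
}

def matches(pattern, event):
    """Simplified CloudWatch pattern matching: lists mean 'any of',
    nested dicts recurse, a missing key means no match."""
    for key, expected in pattern.items():
        if key not in event:
            return False
        if isinstance(expected, dict):
            if not isinstance(event[key], dict) or not matches(expected, event[key]):
                return False
        elif event[key] not in expected:
            return False
    return True
```

An event from a different auto scaling group, or a launch event, fails the match and never reaches the lambda.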

Add Lifecycle hook to auto scaling group

Once we have the lambda function and the CloudWatch event created, we need to add the lifecycle hook to the auto scaling group for the terminating instance phase.

In order to create the Lifecycle Hook, go to the “Lifecycle Hooks” tab inside the auto scaling group and click “Create Lifecycle Hook”.

Fill in the configuration you want:

The heartbeat timeout in a termination hook is the time that the instance remains in the Terminating:Wait state of the cycle. We recommend lowering it from the default of 3600 seconds to about 60 seconds, so the termination process can proceed sooner.

Configure the scale-out process

In this section we describe how to configure the scale-out process. The sequence of steps we will follow is similar to the previous section:

  • Download and configure the script that will be executed when a new instance is launched.
  • Create / Import a lambda function that will register the new instance in the Solution Manager catalog.
  • Create a CloudWatch event that will redirect launch events of the auto scaling group to the lambda created in the previous step.
  • Add an EC2_INSTANCE_LAUNCHING lifecycle hook to the auto scaling group to trigger the event.

Start instances script and configuration

In the same way as the terminate instance package, there is a deployment package for the register lambda function. This package contains:

  • register_autoscaling_server.py script: obtains the private IP of the launched instance with the DescribeInstances operation and registers the server in the Solution Manager.
  • ServerData.json file: configuration file. In this configuration file you can define the Solution Manager connection properties and other values to register the server in the Solution Manager.
  • Other libraries / dependencies to run the script.
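The DescribeInstances response is a nested structure, and the script only needs the private IP out of it. The extraction can be sketched as below (the boto3 call itself is omitted; the trimmed response shape follows the EC2 API):

```python
def private_ip_from_response(response):
    """Extract the private IP of the first instance in a
    DescribeInstances response."""
    instance = response["Reservations"][0]["Instances"][0]
    return instance["PrivateIpAddress"]

# Trimmed example of the structure returned by ec2.describe_instances(...)
sample_response = {
    "Reservations": [
        {"Instances": [{"InstanceId": "i-0123456789abcdef0",
                        "PrivateIpAddress": "10.0.1.25"}]}
    ]
}
```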

You can download the deployment package of this lambda function here:

Register Server Package

Unzip the deployment package in a folder.

Edit the “ServerData.json” file to configure the data needed to access the Solution Manager, plus the server default values.

The “register_autoscaling_server.py” script registers a server in a cluster of the Solution Manager catalog. The script receives a configuration file as an argument. This is an example configuration file:

{
  "com.denodo.sm.user": "admin",
  "com.denodo.sm.password": "clearOrEncryptedPassword",
  "com.denodo.sm.host": "localhost",
  "com.denodo.sm.port": 10090,
  "clusterId": 2,
  "defaultDatabase": "admin",
  "username": "admin",
  "password": "encryptedPassword",
  "port": 9999,
  "useKerberos": false,
  "usePassThrough": false,
  "solutionManagerUsesSSL": false
}

Where:

  • com.denodo.sm.user: user to authenticate against the Solution Manager server.
  • com.denodo.sm.password: password to authenticate against the Solution Manager server. The password value can be encrypted using the “encrypt_password” script available in the bin folder of a Denodo Platform installation. You can also use the clear password, but it is not recommended.
  • com.denodo.sm.host: IP address of the Solution Manager server.
  • com.denodo.sm.port: port where the Solution Manager server is running.
  • clusterId: identifier of the cluster where you want to register the servers. You can obtain it by exporting the Solution Manager catalog and finding the id of the desired cluster, or by listing the clusters of the desired environment through the REST API.
  • defaultDatabase: server default database.
  • username: user used to connect to the server.
  • password: password used to connect to the server. Provide the password encrypted using the “encrypt_password” script available in the bin folder of a Denodo Platform installation.
  • port: port of the server.
  • useKerberos: flag to specify if Kerberos is used.
  • usePassThrough: create the revisions using the credentials of the user logged in to the Solution Manager.
  • solutionManagerUsesSSL: flag to specify if the Solution Manager server is configured with SSL, so the script invokes an https or http endpoint accordingly.
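Before zipping the package, it can be worth sanity-checking that ServerData.json contains every property listed above. A small hypothetical helper (ours, not part of the package) could do it:

```python
import json

# Properties documented for ServerData.json in this section.
REQUIRED_KEYS = [
    "com.denodo.sm.user", "com.denodo.sm.password",
    "com.denodo.sm.host", "com.denodo.sm.port",
    "clusterId", "defaultDatabase", "username", "password",
    "port", "useKerberos", "usePassThrough", "solutionManagerUsesSSL",
]

def missing_keys(config):
    """Return the required properties absent from the parsed config."""
    return [k for k in REQUIRED_KEYS if k not in config]

def check_server_data(path):
    """Load ServerData.json and raise if any required key is missing."""
    with open(path) as f:
        config = json.load(f)
    missing = missing_keys(config)
    if missing:
        raise ValueError("ServerData.json is missing: %s" % ", ".join(missing))
    return config
```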

The script registers the server in the Solution Manager with:

  • Host / IP: the private IP of the AWS instance.
  • Name: the instance id of the AWS virtual machine. The name is used to identify the servers, so it must not be changed afterwards.

The example assumes a scenario where the instances run in a private subnet (without public IPs and unreachable from the Internet). The Solution Manager server can be located in:

  • The same VPC. In this case, configure the “com.denodo.sm.host” property with the private IP address of the instance where it is running.
  • On premises, with a VPN to access the VPC in AWS. In this case, you can also configure “com.denodo.sm.host” with the private IP of the Solution Manager server.

When the configuration is ready, save the changes and zip all the files inside the deployment package folder.

Now we need to create the lambda function in AWS and import the deployment package.

Create Lambda function

Perform the same steps as described in the terminate instances section.

Once the lambda function is created correctly, it is necessary to edit the role automatically created for the lambda function, in order to give the lambda function permissions to execute the DescribeInstances API operation invoked during the script execution.

Open the IAM console (you can access directly clicking on the role in the lambda function).

There are two options to add the new permissions:

  1. You can create a new policy with the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:DescribeInstances"],
        "Resource": "*"
    }]
}

 Then, attach the created policy to the role using the “Attach policies” option.

2. Or you can edit the existing policy using the “Edit policy” option and add the following statement in the JSON tab to allow the DescribeInstances operation:

{
    "Effect": "Allow",
    "Action": ["ec2:DescribeInstances"],
    "Resource": "*"
}

Click the “Review policy” option and save the changes.

Import Lambda function

Import the lambda function as described before for terminate instances lambda function.

In this case, update the “Handler” field to register_autoscaling_server.lambda_handler and save.

Test the lambda function

You can test the lambda function by defining a test event and executing it, as in the terminate instances scenario.

Create CloudWatch Event

We will use CloudWatch events to invoke the lambda function every time the auto scaling group starts a new instance.

You can read more about the lifecycle hooks and notifications possibilities here.

In order to create and configure a CloudWatch event to invoke the desired lambda function during instance launch, you can follow the next steps:

  1. Open the CloudWatch console.
  2. Create a new rule in “Events” - “Rules”.
  3. In “Event Source”, select “Event Pattern”.
  4. Select the “Events by service” option.

  5. Select the “Auto Scaling” service and the “Instance Launch and Terminate” event type. Select the “EC2 Instance-launch Lifecycle Action” event and the specific auto scaling group.

        In this case, we could also create the event by editing the “Event Pattern” textarea with the following JSON (changing the AutoScalingGroupName attribute to the name of the corresponding auto scaling group):

{
  "source": ["aws.autoscaling"],
  "detail-type": ["EC2 Instance-launch Lifecycle Action"],
  "detail": {
    "AutoScalingGroupName": ["ASG-LambdaTest"]
  }
}

  6. Add the Lambda function as a target of the event rule: click the “Add target” option and select the lambda function.

  7. Click the “Configure details” option at the bottom and fill in the name and description of the rule. Leave the state checkbox “Enabled” and click the “Create rule” option.

Add Lifecycle hook to auto scaling group

Once we have the lambda function and the CloudWatch event created, we need to add the lifecycle hook to the auto scaling group for the launching instance phase.

In order to create the Lifecycle Hook, go to “Lifecycle Hooks” tab inside the auto scaling group and click “Create Lifecycle Hook”.

Fill in the configuration you want:

The heartbeat timeout in a launch hook is the time that the instance remains in the Pending:Wait state of the cycle. Change it to a lower value, about 30 seconds (the default is 3600 seconds).
