This article is the second part of the series discussing the Auto Scaling service in AWS.

In this article we will discuss the following:

  • Scaling policies
  • Load balancing the scaling group
  • Deleting the scaling group

In the first article we discussed scaling plans, one of which was scaling on demand. When scaling on demand is used, you have to define how scaling should happen in response to changing conditions. The Auto Scaling group can scale up or scale down when these conditions are met.


Auto Scaling groups use alarms to determine when the conditions for launching or terminating instances are met.

An alarm monitors a single metric over a user-definable time period, and the user sets thresholds for that metric. If a threshold is crossed, the alarm performs an action; in the context of Auto Scaling, that action is sending a message to the Auto Scaling group.

Auto Scaling also uses policies. A policy dictates how Auto Scaling should react when an alarm message is received.

So, if scaling on demand is used, an alarm and a policy must be created and associated with the Auto Scaling group. When the alarm message is received, the associated policy is executed and the group is scaled out (new instances are launched) or scaled in (instances are terminated).

Auto Scaling uses CloudWatch for metrics and alarms.

Whenever a scaling policy is executed, the size of the Auto Scaling group changes. The change can be an absolute value, a percentage of the current size, or an increment. The recommendation is to use two scaling policies, one for scaling out and one for scaling in. For instance, if you are monitoring CPU usage and a spike triggers an alarm that causes Auto Scaling to create another EC2 instance, it is wise to scale back in once the spike has passed.
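
If you prefer scripting to the console, the two-policy approach can be sketched with boto3 roughly as below; the group name "my-asg" and the policy names are hypothetical, and the console workflow described next achieves the same result:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Scale-out policy: add two instances when its alarm fires.
    scale_out = autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-asg",        # hypothetical group name
        PolicyName="scale-out-on-high-cpu",
        AdjustmentType="ChangeInCapacity",    # ExactCapacity / PercentChangeInCapacity are the other options
        ScalingAdjustment=2,
    )

    # Scale-in policy: remove two instances when the load drops.
    scale_in = autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-asg",
        PolicyName="scale-in-on-low-cpu",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=-2,
    )

    # The returned ARNs are used later as the alarm actions.
    print(scale_out["PolicyARN"], scale_in["PolicyARN"])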

How do you set up scaling on demand? The process is similar to what we did in the first part of the series, so I will show only the steps where you enable CloudWatch monitoring and set the policies.

I will create an Auto Scaling launch configuration, start the group with two instances and enable CloudWatch detailed monitoring:
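
For reference, a roughly equivalent launch configuration could be created with boto3 as sketched below; the AMI, key pair and security group names are placeholders, and note that the number of instances is actually set on the Auto Scaling group, not on the launch configuration:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Launch configuration with CloudWatch detailed (1-minute) monitoring enabled.
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="my-launch-config",   # hypothetical name
        ImageId="ami-xxxxxxxx",                       # placeholder AMI
        InstanceType="t2.micro",
        KeyName="my-key",                             # placeholder key pair
        SecurityGroups=["my-security-group"],         # placeholder security group
        InstanceMonitoring={"Enabled": True},
    )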

Next, everything is the same as in the first part of the series, up to the point where you configure the scaling policies for the Auto Scaling group:

As you might remember from the first part, we need to choose the second option in order to use scaling policies. If you select it, you will see two policies; this is one of them:

However, as you can see, we don’t have any alarms configured. Click “Add new alarm” to create one. In this case, the alarm sends a message when the average CPU usage goes above 80 percent for at least five minutes. I also chose to receive an email when this happens. Note the name of the alarm; it will be used later in the policy configuration:
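
The same alarm could be sketched with boto3 as below; the alarm name, the SNS topic used for the email and the scale-out policy ARN are all placeholders:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # ARN returned by put_scaling_policy for the scale-out policy (placeholder value).
    scale_out_policy_arn = "arn:aws:autoscaling:region:account-id:scalingPolicy:policy-id:autoScalingGroupName/my-asg:policyName/scale-out-on-high-cpu"

    # Alarm: average CPU usage of the group at or above 80 percent for five minutes.
    cloudwatch.put_metric_alarm(
        AlarmName="my-asg-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "my-asg"}],
        Statistic="Average",
        Period=300,                # five minutes
        EvaluationPeriods=1,
        Threshold=80.0,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        AlarmActions=[
            scale_out_policy_arn,                            # execute the scale-out policy
            "arn:aws:sns:region:account-id:my-email-topic",  # placeholder SNS topic for the email
        ],
    )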

Once you click “Create Alarm”, you will see that the policy will be executed when the threshold is breached. Also, the action is to add another two instances:

As I said, we need another policy that will terminate instances once the CPU usage decreases. I configured the second alarm to trigger when the average CPU usage over five minutes drops below 10 percent. The action is to terminate two instances:
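
The matching low-CPU alarm, again only as a boto3 sketch, wired to the scale-in policy ARN (a placeholder):

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # ARN returned by put_scaling_policy for the scale-in policy (placeholder value).
    scale_in_policy_arn = "arn:aws:autoscaling:region:account-id:scalingPolicy:policy-id:autoScalingGroupName/my-asg:policyName/scale-in-on-low-cpu"

    # Alarm: average CPU usage of the group below 10 percent for five minutes.
    cloudwatch.put_metric_alarm(
        AlarmName="my-asg-low-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "my-asg"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=1,
        Threshold=10.0,
        ComparisonOperator="LessThanThreshold",
        AlarmActions=[scale_in_policy_arn],
    )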

The next steps, notifications and tagging, are the same. Once you are done with the Auto Scaling group configuration, you should see two instances being created:

To test the policies, I will run a script on the two EC2 instances to push the CPU usage above 80 percent for more than five minutes.
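
The exact script is not important; any CPU burner will do. A minimal Python example that keeps every core busy for about seven minutes, comfortably longer than the five-minute alarm period, could look like this:

    # cpu_burn.py - keep every CPU core busy long enough to breach the 80% / 5-minute alarm
    import multiprocessing
    import time

    def burn(seconds):
        """Busy-loop for the given number of seconds."""
        end = time.time() + seconds
        while time.time() < end:
            pass

    if __name__ == "__main__":
        duration = 7 * 60  # seven minutes
        workers = [multiprocessing.Process(target=burn, args=(duration,))
                   for _ in range(multiprocessing.cpu_count())]
        for w in workers:
            w.start()
        for w in workers:
            w.join()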

You can check the CPU usage for an instance by selecting it in the “Instances” menu of the EC2 console, then opening the “Monitoring” tab and clicking “CPU Utilization”:
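
You can pull the same metric through the CloudWatch API as well; a small boto3 sketch, with a placeholder instance ID:

    import boto3
    from datetime import datetime, timedelta

    cloudwatch = boto3.client("cloudwatch")

    # Average CPU utilization of one instance over the last hour, in 5-minute buckets.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance ID
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )

    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], round(point["Average"], 1))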

The script was launched and this is the CPU usage for one of the EC2 instances. As you can see, we are at about 20 percent:

Let’s check for the last five minutes. We are at almost 70 percent:

After the average CPU usage was higher than 80 percent for more than five minutes, I received the notification through email. This is a small part of the email:

And you can see that another two instances were launched and are now initializing:

After the average CPU load dropped below 10 percent, I received another notification, and now I expect two instances to be terminated:

And this is how you can do dynamic scaling using an Auto Scaling group together with alarms and policies.

Next we will discuss load balancing for Auto Scaling groups.

Auto Scaling is often used in conjunction with ELB. You can read more about ELB in the article How to deploy high availability and load balancing in Amazon AWS.

When you have many instances in one Auto Scaling group, it’s possible that not all of them carry the same load. ELB helps you route traffic optimally to the EC2 instances so that no single instance is overwhelmed.

How can you do that?

You should create a load balancer as described in the article above. You can skip the step where you are asked which EC2 instances you want to register.

In my case, I have one ELB created:
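
If you would rather script it, a comparable classic load balancer could be created with boto3 along these lines; the name, ports and availability zones are assumptions:

    import boto3

    elb = boto3.client("elb")

    # Classic load balancer listening on HTTP port 80.
    response = elb.create_load_balancer(
        LoadBalancerName="my-elb",                       # hypothetical name
        Listeners=[{
            "Protocol": "HTTP",
            "LoadBalancerPort": 80,
            "InstanceProtocol": "HTTP",
            "InstancePort": 80,
        }],
        AvailabilityZones=["us-east-1a", "us-east-1b"],  # placeholder zones
    )
    print(response["DNSName"])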

After this, you need to create a new launch configuration, with all the steps identical to those shown in the first part of the article. Then create the Auto Scaling group; after the first step, there are a few changes you need to make. Expand “Advanced Details”, check “Receive traffic from Elastic Load Balancer(s)” and select the ELB that you created. Set the “Health Check Type” to “ELB” and proceed with the rest of the steps:
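
Scripted, the same attachment could be expressed with boto3 roughly as below; the group, launch configuration, ELB name and zones are the hypothetical ones used earlier:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Auto Scaling group that receives traffic from the ELB and uses its health checks.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="my-asg",                   # hypothetical name
        LaunchConfigurationName="my-launch-config",
        MinSize=2,
        MaxSize=6,
        DesiredCapacity=2,
        AvailabilityZones=["us-east-1a", "us-east-1b"],  # placeholder zones
        LoadBalancerNames=["my-elb"],                    # "Receive traffic from Elastic Load Balancer(s)"
        HealthCheckType="ELB",                           # use the ELB health check instead of EC2 status checks
        HealthCheckGracePeriod=300,
    )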

Make sure that the security group rules defined in the launch configuration allow the service that you want to run on your instances, for instance HTTP.
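
For example, allowing inbound HTTP on the instances' security group could be done with boto3 like this; the security group ID is a placeholder:

    import boto3

    ec2 = boto3.client("ec2")

    # Allow inbound HTTP (port 80) from anywhere on the instances' security group.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",   # placeholder security group ID
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )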

To check that the Auto Scaling group was launched with the ELB, select the Auto Scaling group and, under “Details”, you should see the name of your ELB in the “Load Balancers” field:
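
The same check can be done from the API; a quick boto3 sketch, assuming the hypothetical group name used above:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Print the load balancers attached to the group.
    groups = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=["my-asg"]   # hypothetical group name
    )
    for group in groups["AutoScalingGroups"]:
        print(group["AutoScalingGroupName"], group["LoadBalancerNames"])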

If you run the same service on both EC2 instances and the ELB shows both instances as “InService”, the traffic will be distributed across the two instances of your Auto Scaling group.

To delete an Auto Scaling group, go to the “Auto Scaling Groups” menu, select the group and, from the “Actions” menu, select “Delete”:
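
The deletion can also be done with boto3; ForceDelete terminates any instances still running in the group (a sketch, assuming the hypothetical group name from above):

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Delete the group and terminate any instances still running in it.
    autoscaling.delete_auto_scaling_group(
        AutoScalingGroupName="my-asg",   # hypothetical group name
        ForceDelete=True,
    )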

And we have reached the end of the series on Auto Scaling.

We found out what Auto Scaling is and what you would use it for. We launched an Auto Scaling group with a fixed size and also one that scales on demand. We saw how you can use Elastic Load Balancing together with Auto Scaling.

I hope you will find this two-part article useful when the time comes to deploy Auto Scaling in AWS.
