This article covers the high availability and load balancing options in Amazon AWS.

Let’s suppose that you have a website that you are hosting on a physical server.

Time goes by and you get more and more visitors. Each of them uses only a small share of the server resources (memory, CPU), but together they add up: the website becomes slower and, eventually, content can no longer be served.

What do you do? You either replace that server with a more powerful one, or buy another one and use it in parallel with the old one.

If you go with the first option, you still have a single point of failure: the service stops if the server fails.

If you go with the second option, you no longer have a single point of failure. If you lose one server, you will see some degradation of the service, but it will still work.

To go with the second option, you need to configure the two servers so that both can serve users and, in the event of a failure, either one can take the entire load.

This implies configuring some sort of load balancing on the servers, perhaps by adding another server that acts as a proxy in front of the two.
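To make the idea concrete, here is a minimal sketch of the round-robin selection such a proxy performs. The server names are hypothetical and the loop simply simulates four incoming requests:

```shell
#!/bin/sh
# Hypothetical backend pool: two web servers behind the proxy.
servers="server1 server2"
count=2

i=0
for request in 1 2 3 4; do
  # Pick the next backend in round-robin order.
  n=$(( i % count + 1 ))
  backend=$(echo $servers | cut -d' ' -f$n)
  echo "request $request -> $backend"
  i=$(( i + 1 ))
done
```

Each request goes to the next server in the pool, so consecutive requests alternate between server1 and server2. A real proxy does the same thing while also forwarding the HTTP traffic.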

This means you need the technical skills to configure all of this and, more importantly, the skills to troubleshoot the solution.

The scope of this article is not to show how to deploy such a configuration on Linux or Windows servers, but how to achieve something similar on Amazon AWS.

The assumption is that you have a server running Linux. It can be any kind of server, physical or virtual.

For the purpose of this article, I will assume that you just finished practicing what you’ve learned in this article, Translating your Windows/Linux server skills for the cloud: How to deploy a server in Amazon AWS.

Right now I have two Linux servers on AWS running WordPress. The content on the two servers is almost identical: WordPress was installed manually on both, and an identical post was created on each.

This is a screenshot of the WordPress instance from the first server:

And this is from the second server:

At first sight, everything seems identical.

But if you look closer, you will see that the tagline for the first server says “WordPress Blog – Server 1” and for the second server “WordPress Blog – Server 2”. This small difference will be used later to demonstrate the load balancing.

Of course, in production you will need identical content on all the servers. It would be embarrassing if two users got different information depending on which server their requests land on.

Amazon AWS allows you to configure an Elastic Load Balancer (ELB). In short, you create a frontend resource that users access and that acts like a proxy: it forwards each request to one of the backend servers and returns the response to the user. The next user is served from the next server, and so on. You can register multiple servers with an ELB; you are not limited to two as in this article.

From the AWS console, choose EC2:

From the left column, choose Instances under INSTANCES section:

You can see the two servers that will provide the content:

From the same left column, choose ‘Load Balancers’ under NETWORK & SECURITY section to begin the process of ELB creation:

Configure a name of your choice for the ELB and continue:

Configure the health check options and continue. In this specific case, I configured the ELB to monitor the presence and reachability of /wp-blog/. I also lowered some default timers to speed up failure detection and the return to service. The drawback of faster failure detection is that the more frequent checks put extra load on the system.
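The effect of these settings can be sketched with a small simulation. The threshold value below is hypothetical rather than the exact one I used, and the probe results are hard-coded to show the moment an instance gets marked unhealthy:

```shell
#!/bin/sh
# Hypothetical setting: mark the instance OutOfService after
# 2 consecutive failed health-check probes.
unhealthy_threshold=2
# Simulated probe results for /wp-blog/: 1 = reachable, 0 = failed.
probes="1 1 0 0 0"

fails=0
state="InService"
for p in $probes; do
  if [ "$p" -eq 0 ]; then
    fails=$(( fails + 1 ))   # count consecutive failures
  else
    fails=0                  # any success resets the counter
  fi
  if [ "$fails" -ge "$unhealthy_threshold" ]; then
    state="OutOfService"
  fi
done
echo "$state"
```

With a shorter check interval and a lower unhealthy threshold, the counter reaches the threshold sooner, which is exactly the faster failure detection mentioned above, at the cost of probing more often.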

Assign the security group and continue:

Add the instances that you want to be part of this ELB and continue:

Review what you did and create the ELB. You will be presented with the list of available ELBs:
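The same console steps can be performed with the AWS CLI. The sketch below assumes a classic ELB; the load balancer name, availability zone, and instance IDs are hypothetical placeholders, and the health check mirrors the /wp-blog/ target configured earlier:

```shell
# Create the load balancer with an HTTP listener (names are placeholders).
aws elb create-load-balancer \
  --load-balancer-name wp-elb \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
  --availability-zones us-east-1a

# Health check against /wp-blog/, with shortened timers.
aws elb configure-health-check \
  --load-balancer-name wp-elb \
  --health-check Target=HTTP:80/wp-blog/,Interval=10,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2

# Register the two WordPress instances (IDs are placeholders).
aws elb register-instances-with-load-balancer \
  --load-balancer-name wp-elb \
  --instances i-0123456789abcdef0 i-0fedcba9876543210
```

These commands require configured AWS credentials and will create billable resources, so treat them as a reference for scripting the setup rather than something to paste blindly.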

Right now, the ELB is not functional. This is visible in the Status column, where I have “0 of 2 instances in service.” A little later, the status will change to “2 of 2 instances in service.” If you click on it, you will be directed to the Instances tab:

Now both instances are in the InService state, and the ELB can be accessed using the value from the “DNS Name” column. As a matter of fact, the WordPress blog can be accessed using:

Some monitoring is done automatically and can be accessed from the Monitoring tab. As you can see, we have two healthy hosts in the ELB:

Let’s access this link twice and confirm that we get to two different servers:

We landed on Server 2, and the next time we access the same link, we land on Server 1:

As you can see, the HTTP requests are balanced between the two instances that are part of the ELB.
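The same check can be scripted instead of refreshing the browser. The ELB hostname below is a hypothetical placeholder for the value from the “DNS Name” column:

```shell
# Fetch the blog twice through the ELB and print which server answered,
# using the tagline difference between the two instances.
# Replace the hostname with your own ELB's DNS name.
for i in 1 2; do
  curl -s http://wp-elb-1234567890.us-east-1.elb.amazonaws.com/wp-blog/ \
    | grep -o "WordPress Blog – Server [12]"
done
```

If the balancing works, the two fetches should report different servers, matching what we just saw in the browser.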

Using an ELB saves you the headache of configuring load balancers yourself. With this AWS feature, you can do it very quickly.

As a side note, AWS has a template for a highly available WordPress deployment. It’s almost identical to the template you can use to deploy a single instance of WordPress.

This can be deployed by choosing CloudFormation from the AWS console:

Create a new stack and choose the highly available WordPress template:

Continue through the steps, review what you did, and create the stack. You will see that the deployment is still in progress:

After quite some time (in my case, around 25 minutes), the process finishes. To access WordPress from a browser, I have to go to the Outputs tab to get the URL:
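For reference, the stack creation and the Outputs lookup can also be done from the CLI. The stack name and template URL below are hypothetical placeholders:

```shell
# Launch the stack from a template (the URL is a placeholder).
aws cloudformation create-stack \
  --stack-name wp-ha \
  --template-url https://s3.amazonaws.com/example-bucket/wordpress-ha.template

# Check the status and read the website URL from the stack Outputs
# once the status reaches CREATE_COMPLETE.
aws cloudformation describe-stacks \
  --stack-name wp-ha \
  --query "Stacks[0].[StackStatus,Outputs]"
```

As with the console, expect the stack to take a while to reach CREATE_COMPLETE; the describe-stacks call is where you retrieve the same URL the Outputs tab shows.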

When I access the URL, I am redirected to the WordPress initial installation process:

From this point on, everything is the same as with the manual installation of WordPress.

When you choose this template, AWS automatically deploys the EC2 instances, the ELB, and the RDS database.

This is easier than deploying the EC2 instances, then the ELB, and so on, yourself. The problem is that templates are available only for specific web applications.

If you have an in-house application, you will have to set things up manually, as we did in the first part of the article.

As you can see, using AWS ELB is a fast way to improve the service provided to users by adding high availability and load balancing.