This is the third part of the series discussing the AWS Route 53 service. In the first part we saw how you can register a domain with Route 53 and how you can create a resource record. In the second part we saw what DNS failover is and how it can be configured.

In the third part of the series we will discuss latency and weighted DNS routing.

When a resource record set is created, a routing policy can be configured to determine how Route 53 responds to DNS queries.


There are multiple routing policies that can be configured; we will discuss only two of them: the latency and weighted policies.

  • Latency Routing Policy: Used when multiple resources provide the same function and Route 53 should answer with the resource that offers the lowest latency. Basically, this is about the fastest response. When a DNS request is received, Route 53 selects the resource from the region that gives the lowest latency.
  • Weighted Routing Policy: Used when multiple resources can serve the same function and you want to route traffic to them in specific proportions. This can be used for load balancing or for testing a new version of an application. You could split the traffic into, let's say, 10% to one server and 90% to the other. When a DNS query is received, Route 53 searches for a group of resource record sets that have the same name and type. Each record set has an assigned weight that determines its probability of being selected, given by the ratio of that record set's weight to the sum of the weights of all record sets in the group. For instance, in the example above, one server would have a weight of 10 and the other a weight of 90, as illustrated by the short sketch after this list.
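To make the selection probability concrete, here is a small shell sketch (an illustration of the 10/90 split described above, not how Route 53 is implemented) that picks one of two hypothetical servers in proportion to their weights:

#!/bin/bash
# Illustration only: pick "server-A" (weight 10) or "server-B" (weight 90)
# with probability weight / sum-of-weights, the same ratio used for weighted records.
WEIGHT_A=10
WEIGHT_B=90
TOTAL=$((WEIGHT_A + WEIGHT_B))

for i in $(seq 1 1000); do
  # RANDOM is a pseudo-random integer; reduce it to the range 0..TOTAL-1
  if [ $((RANDOM % TOTAL)) -lt "$WEIGHT_A" ]; then
    echo "server-A"
  else
    echo "server-B"
  fi
done | sort | uniq -c

Running it prints roughly 100 selections of server-A and 900 of server-B, matching the 10/90 weights.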

So we will start by looking at latency routing. I have one EC2 instance running in the EU CENTRAL (Frankfurt) region and one EC2 instance in US EAST (N. Virginia). Each EC2 instance is running a web server, and each one returns a customized output that identifies the region in which the EC2 instance is running.

What we will demonstrate first is that we can route users based on latency. For instance, we will see that users from Europe are served the webpage from the EC2 instance running in the EU CENTRAL region, while users from the US are served the webpage from the EC2 instance running in the US EAST region.

Of course, in real life, the same webpage should be served regardless of where it is accessed from. But for testing purposes, to confirm that we are reaching different servers, we created different content for each EC2 instance.
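The article does not show how the web servers were set up; one simple way to produce such region-specific content (a sketch, assuming an Amazon Linux instance running Apache and using the EC2 instance metadata service) is a user-data script like this:

#!/bin/bash
# Install Apache and publish a page that identifies the availability zone
# (and therefore the region) of the instance, read from instance metadata.
yum install -y httpd
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
echo "<h1>Served from ${AZ}</h1>" > /var/www/html/index.html
chkconfig httpd on
service httpd start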

This is the EC2 instance running in the EU CENTRAL region. Note the public IP:

And this is the EC2 instance running in the US EAST region:

You can see the specific content returned when we access the HTTP server in the EU CENTRAL region:

And the content returned by the server in the US EAST region:

As we saw in Part I, we have the domain vtep.net that we can use for testing purposes. We will create two resource record sets with the same name, www.vtep.net, which will point to two different IP addresses.

So let's connect to Route 53 in the AWS Management Console; you will see that we have one domain and one hosted zone.

Select “Hosted Zones” and then “Go to Record Sets”:
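If you prefer the command line, the same information can be listed with the AWS CLI (a sketch; Z1EXAMPLE is a placeholder for the hosted zone ID shown in your console):

# List the hosted zones in the account, then the record sets in the vtep.net zone.
aws route53 list-hosted-zones
aws route53 list-resource-record-sets --hosted-zone-id Z1EXAMPLE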

Click on “Create Record Set” to create the first record set. The record set will be for www.vtep.net:

In the "Value" field, put the IP address of the EC2 instance from the EU CENTRAL region. Select "Latency" as the "Routing Policy." The "Region" field will be filled in automatically based on the IP address; in this case, it will be "eu-central-1." As the "Set ID," choose something that is meaningful to you. Then click on "Create":

Create another record set with the same name, www.vtep.net. In the "Value" field, put the IP address of the EC2 instance from the US EAST region. Select "Latency" as the "Routing Policy." The "Region" field will be filled in automatically based on the IP address; in this case, it will be "us-east-1." As the "Set ID," choose something that is meaningful to you. Then click on "Create":
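For reference, the same two latency record sets could also be created with the AWS CLI. This is only a sketch: the hosted zone ID is a placeholder, and the Set IDs and the TTL of 60 seconds are assumptions, not values taken from the console screenshots.

# Create both latency records for www.vtep.net in one change batch.
aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch '{
  "Changes": [
    {"Action": "CREATE", "ResourceRecordSet": {
      "Name": "www.vtep.net", "Type": "A", "SetIdentifier": "eu-central-web",
      "Region": "eu-central-1", "TTL": 60,
      "ResourceRecords": [{"Value": "54.93.72.190"}]}},
    {"Action": "CREATE", "ResourceRecordSet": {
      "Name": "www.vtep.net", "Type": "A", "SetIdentifier": "us-east-web",
      "Region": "us-east-1", "TTL": 60,
      "ResourceRecords": [{"Value": "54.174.204.200"}]}}
  ]
}'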

As you can see, the two new record sets were created:

So now, depending on the location from which we access www.vtep.net, we will be directed to either the US EAST region or the EU CENTRAL region, whichever provides the lower latency.
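Before testing through the browser, a quick sanity check can be done from the command line with dig (the answer you receive depends on the location of the resolver you query from):

dig +short www.vtep.net A

From a client in Europe this should return the IP of the Frankfurt instance; from a client in the US, the IP of the N. Virginia instance.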

Let's test it. I configured a proxy in my browser with an IP address from Switzerland. Obviously, Switzerland is closer to Frankfurt than to Northern Virginia:

As expected, I’m being directed to the web server from the EU CENTRAL region because of the lower latency:

Let's test the same website by accessing it from a location in the US. Another proxy was configured, and now I appear to be accessing the website from the US:

As you can see, I’m directed to the EC2 instance from US EAST region:

And that's all for latency routing in Route 53. For each DNS query, Route 53 determines which of your resources provides the lowest latency from the requester's location and answers with that resource.

Let's continue with weighted routing. The purpose of this test is to demonstrate that DNS queries return the resources in the ratio that we specify. More exactly, we want to distribute access to the website evenly: half of the requests will be served by the EC2 instance in the EU CENTRAL region and half by the EC2 instance in US EAST.

Everything is the same as before (we will create www.vtep.net) up to the point where the resource record sets have to be configured. The IP addresses used are the same. The difference is the "Routing Policy": in this case, we will select "Weighted." Then set the value for "Weight." Because we want 50% of the traffic to go through the EU CENTRAL region, we will use 50. For "Set ID," use something meaningful that lets you identify the record set:

The same goes for the second record set, except that you will use the IP address of the EC2 instance from the US EAST region:
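Here, too, a sketch of how the same two weighted record sets could be created with the AWS CLI instead of the console (the hosted zone ID, Set IDs, and TTL are placeholders/assumptions; it also assumes the latency record sets from the previous test were removed first):

aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch '{
  "Changes": [
    {"Action": "CREATE", "ResourceRecordSet": {
      "Name": "www.vtep.net", "Type": "A", "SetIdentifier": "eu-central-50",
      "Weight": 50, "TTL": 60,
      "ResourceRecords": [{"Value": "54.93.72.190"}]}},
    {"Action": "CREATE", "ResourceRecordSet": {
      "Name": "www.vtep.net", "Type": "A", "SetIdentifier": "us-east-50",
      "Weight": 50, "TTL": 60,
      "ResourceRecords": [{"Value": "54.174.204.200"}]}}
  ]
}'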

Because it's hard to see the distribution in the Route 53 responses, we will use a script that checks 1000 times (one check per second) what the resource record for www.vtep.net is. The script uses the Linux command "dig" to get this information, and only the relevant content is saved: the IP address returned and the time when the query was issued.
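The script itself is not included in the article; a minimal sketch that produces this kind of output (assuming the results are appended to a file called "output") could look like this:

#!/bin/bash
# Query www.vtep.net 1000 times, one query per second, and keep only the
# answer line (which carries the IP address) and the timestamp of each query.
for i in $(seq 1 1000); do
  dig www.vtep.net A | grep -E '^www|^;; WHEN' >> output
  sleep 1
done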

The saved output looks something like this:

www.vtep.net. 34 IN A 54.93.72.190

;; WHEN: Tue Jan 13 12:43:35 2015

www.vtep.net. 34 IN A 54.174.204.200

;; WHEN: Tue Jan 13 12:43:37 2015

www.vtep.net. 14 IN A 54.174.204.200

;; WHEN: Tue Jan 13 12:43:38 2015

www.vtep.net. 29 IN A 54.93.72.190

;; WHEN: Tue Jan 13 12:43:39 2015

Let's check how many entries we have for 54.93.72.190, which is the IP from EU CENTRAL, and how many for 54.174.204.200, which is the IP from US EAST:

[UBUNTU:/] access% cat output | grep 190 | wc -l

512

[UBUNTU:/] access% cat output | grep 200 | wc -l

488

[UBUNTU:/] access%
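If you want the percentages computed directly, a small awk one-liner over the same file does it (a sketch; it assumes the answer lines look exactly like the dig output shown above):

awk '$3 == "IN" && $4 == "A" {count[$5]++; total++} END {for (ip in count) printf "%s: %d (%.1f%%)\n", ip, count[ip], 100*count[ip]/total}' output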

As you can see, it's almost a perfectly even distribution: 51.2% and 48.8%. Over a larger number of queries, this will tend even closer to 50/50.

And our weighted routing policy worked.

And that’s all with latency and weighted routing policies in Route 53.

We have reached the end of the article and the Route 53 series.

In this article you found out what latency and weighted routing policies are and how you can configure them.

By going through all three parts of the Route 53 series, you should now be familiar with the service. With that foundation in place, you can go and explore the more advanced features that Route 53 provides.
