Welcome to the final article in this DMVPN Redundancy series. In previous articles, we have considered different scenarios, such as single hub with dual ISP links, dual hubs, and even spokes with dual ISP links.

This article is more of a proof of concept that, with some fine-tuning, I think can become a working solution. We will configure spokes to register with the NHS using an FQDN instead of an IP address, and we will also see how we may achieve redundancy with this type of configuration.

DMVPN Configuration using FQDN

Normally, for next-hop clients (NHCs or spokes) to register with the next-hop server (NHS or the hubs), we need to statically map the logical IP address (tunnel IP address) of the NHS to its NBMA IP address (physical IP address). For example, look at the configuration snippet below:

ip nhrp map 10.1.123.1 41.1.10.2
ip nhrp nhs 10.1.123.1

As you can see, the logical IP address of the NHS (10.1.123.1) is statically mapped to the physical IP address of that NHS (41.1.10.2). However, there are situations where it is not ideal to use the physical IP address of the NHS, for example, if that address changes frequently. In such cases, it may be better to use a fully qualified domain name (FQDN) and let the NHCs query a DNS server for the physical IP address associated with that FQDN.

Note: If the NHS’s IP address is constantly changing, then the DNS entry must also change to match the current IP address. Dynamic DNS (DDNS) takes care of situations like this.
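As an aside, IOS itself can send DDNS updates over HTTP. The sketch below shows the general shape of such a configuration on the hub; the method name, credentials, and update URL are placeholders for whatever your DDNS provider expects, not something taken from this lab:

! Hypothetical DDNS sketch -- method name, credentials, and URL are placeholders
ip ddns update method SAMPLE-DDNS
 HTTP
  add http://user:pass@ddns.example.com/nic/update?hostname=<h>&myip=<a>
!
interface Ethernet0/0
 ip ddns update SAMPLE-DDNS

The <h> and <a> escapes are replaced by IOS with the router's hostname and the interface's current IP address when the update is sent.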

There are two options for configuring NHCs to register with an NHS using FQDN:

  • Specifying the logical IP address along with the FQDN, e.g., ip nhrp nhs 10.1.123.1 nbma hub1.example.com multicast
  • Letting the NHC dynamically learn the logical IP address of the NHS from the NHRP registration reply from the hub, e.g., ip nhrp nhs dynamic nbma hub1.example.com multicast

You can read more about DMVPN configuration using FQDN here.

Dual Hub, Dual ISP Links, DMVPN using FQDN

Having introduced the DMVPN configuration using FQDN, I want to use this feature to configure a scenario we have considered before, as shown in the diagram below:

One of the options we configured when we looked at this scenario was to place both hubs in one DMVPN cloud and configure two next-hop servers on the spokes. You can find that article here. Let’s reconfigure this scenario, but this time we will use an FQDN for our NHS configuration.

Side note on DNS

Ideally, you want to have a smart DNS solution that can be configured in several ways. One way is “round-robin,” where the DNS server will give out the IP address of HUB1 for the first DNS request, then the IP address of HUB2 for the next DNS request, and so on.

Another option is to configure the DNS server to respond based on geographical location, e.g., give out the IP address of HUB1 to a spoke closer to that hub and the IP address of HUB2 to another spoke closer to HUB2.

Whatever the case, the DNS server should be able to track availability of the IP addresses tied to an FQDN so that it does not give out an IP address that is unavailable in response to a DNS request.

For this article, I will be using a local BIND9 DNS server installed on a Linux machine. This DNS server is configured in round-robin fashion but is not smart enough to know when an IP address is down; it will continue handing out that IP address in DNS replies.

One trick I have used to work around this is to set a short TTL (cache lifetime) so that the spokes query the DNS server again once the cached entry expires. Because of the round-robin deployment, the DNS server will eventually give out the available IP address.

To learn more about deploying BIND9 on an Ubuntu system, refer to this link. Below is my BIND9 zone file:

;
; BIND data file for local loopback interface
;
$TTL	60
@	IN	SOA	ns.example.com. root.example.com. (
			 10		; Serial
			 60		; Refresh
			 60		; Retry
			 60		; Expire
			 60 )		; Negative Cache TTL
;
@	IN	NS	ns.example.com.
ns	IN	A	192.0.2.10
@	IN	AAAA	::1
;
DMVPN-HUBS	IN	A	41.1.1.2
			IN	A	192.0.2.2

You may be able to get it to work by configuring a Cisco router as a DNS server and using the ip host command to define hostname/IP address mappings.
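For instance, a minimal sketch of that approach, using the same names and addresses as this lab, might look like the following (I have not verified whether the IOS DNS server rotates the address order the way BIND's round-robin does):

! Hypothetical sketch: Cisco router acting as the DNS server for the spokes
ip dns server
ip host DMVPN-HUBS.example.com 41.1.1.2 192.0.2.2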

The configuration on the hubs is the same as in this article (although you can leave out the delay configuration on the tunnel interfaces) so you can just copy it from there. The configuration on the spokes is also similar but because of the DNS configuration, I will post the entire configuration of one of the spokes here again:

hostname SPOKE1
!
ip domain name example.com
ip name-server 192.0.2.10
!
crypto isakmp policy 10
 hash md5
 authentication pre-share
!
crypto isakmp key cisco address 0.0.0.0  
crypto isakmp keepalive 10 periodic    
!
crypto ipsec transform-set TRANS_SET esp-3des esp-md5-hmac 
 mode tunnel
!
crypto ipsec profile IPSEC_PROF
 set transform-set TRANS_SET 
!
interface Loopback0
 description ***LAN interface***
 ip address 10.10.20.2 255.255.255.0
!
interface Tunnel1
 description ***DMVPN CLOUD***
 ip address 172.16.0.10 255.255.255.0
 no ip redirects
 ip nhrp authentication cisco
 ip nhrp network-id 1
 ip nhrp nhs dynamic nbma DMVPN-HUBS.example.com multicast
 tunnel source Ethernet0/0
 tunnel mode gre multipoint
 tunnel key 1
 tunnel protection ipsec profile IPSEC_PROF
!
interface Ethernet0/0
 description ***INTERNET LINK***
 ip address 192.0.2.6 255.255.255.252
!
router eigrp 10
 network 172.16.0.0 0.0.0.255
 network 10.10.20.0 0.0.0.255
!
! Default route to ISP
ip route 0.0.0.0 0.0.0.0 192.0.2.5

Looking at the configuration above, I have defined a DNS server at 192.0.2.10. We defined the NHS as “DMVPN-HUBS.example.com” and, since we didn’t specify the logical IP address of the NHS, the spoke will learn it dynamically. This is important because I want the spoke to learn the correct logical IP address of whichever NHS the DNS request resolves to.

With this configuration, the spoke will try to resolve the IP address of “DMVPN-HUBS.example.com” using its DNS server. We can view the result of the resolution by using the show hosts command:

As you can see, the first address for that FQDN is 41.1.1.2, which is the physical IP address of HUB2. Therefore, SPOKE1 will try to use 41.1.1.2 as the NHS:

We can confirm that the tunnel was built with the hub and that routes are being advertised via EIGRP:

Now let’s check SPOKE2. The output of the show hosts command is below:

SPOKE2 got 192.0.2.2 (HUB1) as the first IP address so it will use that as the NHS NBMA address:

We can also check the routing table of a router on the LAN of the hubs to see the spoke networks:

Now, to test redundancy, I will shut down HUB1’s ISP interface. The DMVPN tunnel on SPOKE2 will fail and it will attempt to resolve the FQDN. The debug output (debug dmvpn all all) below shows the resolution process:

...
*Oct  7 16:38:52.124: NHRP: Resolving FQDN DMVPN-HUBS.example.com (IPv4)
*Oct  7 16:38:52.124: NHRP: Requesting for IPv4 type DNS record
*Oct  7 16:38:52.124: NHRP: FQDN response delayed because DNS server needs to be contacted.
*Oct  7 16:38:52.126: NHRP: DNS Resolver Callback returned NBMA: 41.1.1.2
*Oct  7 16:38:52.126: NHRP: Resolved FQDN DMVPN-HUBS.example.com to 41.1.1.2
...

Because of the unpredictable behavior of the DNS server, you may not immediately get the other hub’s IP address and it may still be returning the IP address of the failed hub, which is why you need a smart DNS solution. In my case, it took a full 3 minutes before it finally got the IP address of HUB2!
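If failover time matters, one knob worth experimenting with is the NHRP timers on the spoke's tunnel interface, so that a dead NHS is declared (and the FQDN re-resolved) sooner. A hedged sketch with illustrative values is below; tune these to your environment:

! Illustrative values only -- the registration timeout should be
! shorter than the holdtime
interface Tunnel1
 ip nhrp holdtime 60
 ip nhrp registration timeout 15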

With this resolution, SPOKE2 will now also be connected to HUB2 and the routes will be advertised via EIGRP:

Keep in mind that as long as the NHS to which a spoke is connected is up, the spoke does not try to resolve the FQDN again. Therefore, even if I bring HUB1 back up, SPOKE2 will not attempt to reconnect to that hub. There may be ways to achieve this, but I’m not sure it’s worth the effort.

Summary

This brings us to the end of this article, where we have seen how to configure DMVPN by specifying the NHS on spokes using an FQDN instead of an IP address. We also saw that by manipulating the DNS records, we can achieve some form of redundancy between dual hubs.

I hope you have found this article and the entire series insightful.

References and Further reading

  • DMVPN Redundancy: Dual Hub, Dual ISP Links: http://resources.intenseschool.com/dmvpn-redundancy-dual-hub-dual-isp-links/
  • DMVPN Configuration Using FQDN: http://www.cisco.com/en/US/docs/ios-xml/ios/sec_conn_dmvpn/configuration/15-2mt/sec-conn-dmvpn-conf-using-fqdn.html
  • DNS Configuration on Ubuntu:
    https://help.ubuntu.com/lts/serverguide/dns-configuration.html