Welcome back to this series on DMVPN Redundancy. In the last article, we considered a scenario where there were two DMVPN hubs, as shown below:

In that article, we said there were two options we could use to configure the design above:

  1. We can have a single DMVPN cloud with both hubs in the same cloud. This means that spoke routers will have only one tunnel interface.
  2. We can have dual DMVPN clouds with each hub controlling its own cloud. This means that spoke routers will have two tunnel interfaces, one for each DMVPN cloud.

In that article, we configured the first option and examined its benefits and drawbacks. In this article, we will configure the second option and discuss one major advantage it has over the first: the ability to load-balance spokes between the hubs.

For now, we will assume a situation where the path through HUB1 should be preferred (primary) while the path through HUB2 should only be used when HUB1 is unavailable. The configuration on HUB1 is as follows:

hostname HUB1
!
crypto isakmp policy 10
 hash md5
 authentication pre-share
!
crypto isakmp key cisco address 0.0.0.0 0.0.0.0
crypto isakmp keepalive 10 periodic
!
crypto ipsec transform-set TRANS_SET esp-3des esp-md5-hmac
 mode tunnel
!
crypto ipsec profile IPSEC_PROF
 set transform-set TRANS_SET
!
interface Tunnel1
 description ***PRIMARY DMVPN CLOUD***
 ip address 172.16.1.1 255.255.255.0
 no ip redirects
 no ip next-hop-self eigrp 10
 no ip split-horizon eigrp 10
 ip nhrp authentication cisco
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 delay 1000
 tunnel source Ethernet0/0
 tunnel mode gre multipoint
 tunnel key 1
 tunnel protection ipsec profile IPSEC_PROF
!
interface Ethernet0/0
 description ***LINK TO ISP1***
 ip address 192.0.2.2 255.255.255.252
!
interface Ethernet0/1
 description ***LAN***
 ip address 10.10.10.1 255.255.255.0
!
router eigrp 10
 network 172.16.1.0 0.0.0.255
 network 10.10.10.0 0.0.0.255
!
ip route 0.0.0.0 0.0.0.0 192.0.2.1

The configuration on HUB2 is as follows:

hostname HUB2
!
crypto isakmp policy 10
 hash md5
 authentication pre-share
!
crypto isakmp key cisco address 0.0.0.0 0.0.0.0
crypto isakmp keepalive 10 periodic
!
crypto ipsec transform-set TRANS_SET esp-3des esp-md5-hmac
 mode tunnel
!
crypto ipsec profile IPSEC_PROF
 set transform-set TRANS_SET
!
interface Tunnel1
 description ***BACKUP DMVPN CLOUD***
 ip address 172.16.2.2 255.255.255.0
 no ip redirects
 no ip next-hop-self eigrp 10
 no ip split-horizon eigrp 10
 ip nhrp authentication cisco
 ip nhrp map multicast dynamic
 ip nhrp network-id 2
 delay 1099
 tunnel source Ethernet0/0
 tunnel mode gre multipoint
 tunnel key 2
 tunnel protection ipsec profile IPSEC_PROF
!
interface Ethernet0/0
 description ***LINK TO ISP1***
 ip address 41.1.1.2 255.255.255.252
!
interface Ethernet0/1
 description ***LAN***
 ip address 10.10.10.2 255.255.255.0
!
router eigrp 10
 network 172.16.2.0 0.0.0.255
 network 10.10.10.0 0.0.0.255
!
ip route 0.0.0.0 0.0.0.0 41.1.1.1

Looking at the configuration on the hubs above, you will notice some differences, as follows:

  • There are two DMVPN clouds – 172.16.1.0/24 and 172.16.2.0/24.
  • The NHRP network IDs and tunnel keys on the hubs are different.
  • We are using a lower delay value on the tunnel interface of HUB1 (1000 vs. 1099), so EIGRP prefers paths through the primary cloud.

Let’s now move on to the configuration on the spokes. Since there will be two DMVPN clouds, we will create two tunnel interfaces on each spoke. Also note that, because both tunnels will use the same tunnel source, mGRE tunnel interfaces on the spokes require the shared keyword on the tunnel protection command (refer to the first article in the series for more explanation). If you are using an IOS that does not support the shared keyword, you must use point-to-point GRE interfaces instead.

The configuration on SPOKE1 is as follows:

hostname SPOKE1
!
crypto isakmp policy 10
 hash md5
 authentication pre-share
!
crypto isakmp key cisco address 0.0.0.0 0.0.0.0
crypto isakmp keepalive 10 periodic
!
crypto ipsec transform-set TRANS_SET esp-3des esp-md5-hmac
 mode tunnel
!
crypto ipsec profile IPSEC_PROF
 set transform-set TRANS_SET
!
interface Loopback0
 description ***LAN interface***
 ip address 10.10.20.2 255.255.255.0
!
interface Tunnel1
 description ***PRIMARY DMVPN CLOUD***
 ip address 172.16.1.10 255.255.255.0
 no ip redirects
 ip nhrp authentication cisco
 ip nhrp map multicast 192.0.2.2
 ip nhrp map 172.16.1.1 192.0.2.2
 ip nhrp nhs 172.16.1.1
 ip nhrp network-id 1
 delay 1000
 tunnel source Ethernet0/0
 tunnel mode gre multipoint
 tunnel key 1
 tunnel protection ipsec profile IPSEC_PROF shared
!
interface Tunnel2
 description ***BACKUP DMVPN CLOUD***
 ip address 172.16.2.10 255.255.255.0
 no ip redirects
 ip nhrp authentication cisco
 ip nhrp map multicast 41.1.1.2
 ip nhrp map 172.16.2.2 41.1.1.2
 ip nhrp nhs 172.16.2.2
 ip nhrp network-id 2
 delay 1099
 tunnel source Ethernet0/0
 tunnel mode gre multipoint
 tunnel key 2
 tunnel protection ipsec profile IPSEC_PROF shared
!
interface Ethernet0/0
 description ***INTERNET LINK***
 ip address 192.0.2.6 255.255.255.252
!
router eigrp 10
 network 172.16.1.0 0.0.0.255
 network 172.16.2.0 0.0.0.255
 network 10.10.20.0 0.0.0.255
!
! Default route to ISP
ip route 0.0.0.0 0.0.0.0 192.0.2.5

Note: The configuration on SPOKE2 is similar to the one on SPOKE1 except for IP addressing; therefore, it is not shown here. SPOKE2 will have an IP address of 172.16.1.11 in the primary DMVPN cloud and 172.16.2.11 in the backup DMVPN cloud.
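Before checking routing, it is worth confirming that both tunnels are up: show dmvpn on a spoke should list one static NHRP entry per cloud. The snippet below is representative output only; up/down timers and exact formatting vary by IOS release:

SPOKE1#show dmvpn
Tunnel1, Type:Spoke, NHRP Peers:1,
 # Ent  Peer NBMA Addr Peer Tunnel Add State  UpDn Tm Attrb
     1 192.0.2.2            172.16.1.1    UP 00:05:21     S

Tunnel2, Type:Spoke, NHRP Peers:1,
 # Ent  Peer NBMA Addr Peer Tunnel Add State  UpDn Tm Attrb
     1 41.1.1.2             172.16.2.2    UP 00:05:18     S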

Let’s look at some verification commands. Because we configured a lower delay on the tunnel interface in the primary DMVPN cloud, a router behind the hubs (“INTERNAL”) should prefer the path through HUB1 to reach the spoke networks.
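The output below is a representative sketch only: we assume INTERNAL sits on the 10.10.10.0/24 LAN, runs EIGRP 10, and connects via an interface we will call Ethernet0/0 (its configuration is not shown in this article); the metrics and timers are elided/illustrative. Both spoke LANs point at HUB1’s LAN address, 10.10.10.1:

INTERNAL#show ip route eigrp
D    10.10.20.0/24 [90/...] via 10.10.10.1, 00:02:11, Ethernet0/0
D    10.10.30.0/24 [90/...] via 10.10.10.1, 00:02:11, Ethernet0/0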

Also, if we look at the routing table on the spoke routers, the path through HUB1 should be preferred for EIGRP-learned routes.
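Representative output on SPOKE1 would look like this (metrics elided). Notice that, because the hubs are configured with no ip next-hop-self eigrp 10, SPOKE2’s LAN is learned with SPOKE2’s own tunnel address (172.16.1.11) as the next hop:

SPOKE1#show ip route eigrp
D    10.10.10.0/24 [90/...] via 172.16.1.1, 00:03:40, Tunnel1
D    10.10.30.0/24 [90/...] via 172.16.1.11, 00:03:40, Tunnel1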

We can test failover by shutting down HUB1’s Internet-facing interface (Ethernet0/0) and checking the routing table on a spoke router again.
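For example, on HUB1:

HUB1(config)#interface Ethernet0/0
HUB1(config-if)#shutdown

Once the EIGRP neighborship over Tunnel1 times out, the spoke should fall back to the backup cloud, along these lines (illustrative output; metrics elided, and convergence time depends on the EIGRP hold timer):

SPOKE1#show ip route eigrp
D    10.10.10.0/24 [90/...] via 172.16.2.2, 00:00:09, Tunnel2
D    10.10.30.0/24 [90/...] via 172.16.2.11, 00:00:09, Tunnel2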

Load-Balancing Between Hubs

An interesting possibility with dual DMVPN clouds is that we can load-balance spokes between the hubs. Load balancing in this case means that some spokes use HUB1 as their primary path while other spokes use HUB2 as their primary path. In both cases, the other hub is used for failover.

Note: It is also possible to load-balance in a dual-hub, single DMVPN cloud deployment (see the previous article), but it is easier with this option.

To achieve this design, we need to account for asymmetric routing; i.e., we need to ensure that the tunnel used by a spoke to reach the hub network is the same tunnel that will be used by the hub network to reach the spoke and vice versa.

Let’s take it one step at a time. For the first step, let’s configure SPOKE1 to use HUB1 as its primary hub, while SPOKE2 will use HUB2 as its primary hub. This is easy to accomplish in EIGRP by adjusting the delay values on the tunnel interfaces. On SPOKE1, the tunnel roles stay as originally configured; we only raise the delay on the backup tunnel (Tunnel2) from 1099 to 2000, matching the values we will use on SPOKE2:

interface Tunnel1
 description ***PRIMARY DMVPN CLOUD***
 delay 1000
interface Tunnel2
 description ***BACKUP DMVPN CLOUD***
 delay 2000

The configuration on SPOKE2 will be:

interface Tunnel1
 description ***BACKUP DMVPN CLOUD***
 delay 2000
interface Tunnel2
 description ***PRIMARY DMVPN CLOUD***
 delay 1000

Note: Choose the delay value on the backup DMVPN cloud so that it is higher than the one on the primary DMVPN cloud, but with enough of a gap that EIGRP installs only one path at a time (i.e., the two tunnels never appear as equal-cost paths).

With this configuration, let’s take a look at the routing table on the spoke routers.
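SPOKE1 should still prefer Tunnel1, while SPOKE2 should now prefer Tunnel2. Representative output on SPOKE2 (metrics elided; note that SPOKE1’s LAN is learned with SPOKE1’s cloud-2 tunnel address, 172.16.2.10, as the next hop):

SPOKE2#show ip route eigrp
D    10.10.10.0/24 [90/...] via 172.16.2.2, 00:01:02, Tunnel2
D    10.10.20.0/24 [90/...] via 172.16.2.10, 00:01:02, Tunnel2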

However, even though this configuration takes care of the spoke side, asymmetric routing will occur in the current state of the network: because HUB1’s tunnel interface still has the lower delay (1000 vs. 1099), the INTERNAL router prefers HUB1 for all spoke networks. SPOKE2 will therefore use HUB2 to reach the network behind the hubs, but the return traffic will go through HUB1.
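On INTERNAL, illustrative output would show both spoke LANs still pointing at HUB1 (again assuming the hypothetical Ethernet0/0 LAN interface; metrics elided):

INTERNAL#show ip route eigrp
D    10.10.20.0/24 [90/...] via 10.10.10.1, 00:01:30, Ethernet0/0
D    10.10.30.0/24 [90/...] via 10.10.10.1, 00:01:30, Ethernet0/0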

To solve the asymmetric routing problem, we will first set the delay on the hubs’ tunnel interfaces to the same value, i.e., change the delay on HUB2’s Tunnel1 from 1099 to 1000 to match HUB1:

interface Tunnel1
 delay 1000

We can then use offset lists on the spokes to increase the metric of each spoke’s own LAN prefix as it is advertised out the backup tunnel interface. An offset list adds a fixed value to the EIGRP metric of matching routes; with default K-values, the value 5120 used below works out to 20 extra delay units (5120 / 256), enough to make the backup path less attractive to the hubs. For example, on SPOKE1, our configuration will be as follows:

ip access-list standard INCR_METRIC
 permit 10.10.20.0 0.0.0.255
!
router eigrp 10
 offset-list INCR_METRIC out 5120 Tunnel2

The configuration on SPOKE2 will be as follows:

ip access-list standard INCR_METRIC
 permit 10.10.30.0 0.0.0.255
!
router eigrp 10
 offset-list INCR_METRIC out 5120 Tunnel1

When we now check the routing table of the INTERNAL router behind the hub routers, we should see that traffic to SPOKE1’s network is sent through HUB1 while traffic to SPOKE2’s network is sent through HUB2.
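Illustrative output on INTERNAL after the offset lists are applied (same assumptions as before; metrics elided):

INTERNAL#show ip route eigrp
D    10.10.20.0/24 [90/...] via 10.10.10.1, 00:00:45, Ethernet0/0
D    10.10.30.0/24 [90/...] via 10.10.10.2, 00:00:45, Ethernet0/0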

Summary

This brings us to the end of this article, in which we configured a dual-hub DMVPN design with dual DMVPN clouds and saw how to load-balance spokes between the hubs.

I hope you have found this article insightful and I look forward to presenting the next scenario in the series.
