A few weeks ago, I was asked how to configure DMVPN when routers have multiple links (ISPs), and that question spurred this DMVPN Redundancy series.

In this series, we will consider several scenarios that involve redundancy in a DMVPN configuration. In this article, we will look at a case where there is a single DMVPN hub router that has multiple ISP connections.


For this scenario, we will use the following diagram:

In the diagram, the HUB router has two connections to the Internet via two different ISPs; the remote sites (spokes) each have only one connection to the Internet. Strictly speaking, the HUB router is still a single point of failure, so this design is not advisable; however, designs like this are still in use today.

Note: You can have variations of this diagram. For example, one link may be via a WAN (e.g. MPLS) while the other is over the Internet.

For this scenario, we will assume a primary/backup setup where 192.0.2.0/30 is the primary link at the hub and 41.1.1.0/30 is the backup link. The high-level solution is to build two DMVPN clouds and use a routing protocol to control which path is preferred (even though both paths will be up at the same time).

The configuration on the HUB router is as follows:

hostname HUB
!
track 1 ip sla 1 reachability
!
crypto isakmp policy 10
 hash md5
 authentication pre-share
crypto isakmp key cisco address 192.0.2.6
crypto isakmp key cisco address 41.1.1.6
!
!
crypto ipsec transform-set TRANS_SET esp-3des esp-md5-hmac
 mode tunnel
!
crypto ipsec profile IPSEC_PROF
 set transform-set TRANS_SET
!
interface Tunnel1
 description ***PRIMARY DMVPN CLOUD***
 ip address 10.1.123.1 255.255.255.0
 no ip redirects
 no ip next-hop-self eigrp 10
 no ip split-horizon eigrp 10
 ip nhrp authentication cisco
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 delay 1000
 tunnel source Ethernet0/0
 tunnel mode gre multipoint
 tunnel key 1
 tunnel protection ipsec profile IPSEC_PROF
!
interface Tunnel2
 description ***BACKUP DMVPN CLOUD***
 ip address 10.2.123.1 255.255.255.0
 no ip redirects
 no ip next-hop-self eigrp 10
 no ip split-horizon eigrp 10
 ip nhrp authentication cisco
 ip nhrp map multicast dynamic
 ip nhrp network-id 2
 tunnel source Ethernet0/1
 tunnel mode gre multipoint
 tunnel key 2
 tunnel protection ipsec profile IPSEC_PROF
!
interface Ethernet0/0
 description ***PRIMARY LINK***
 ip address 192.0.2.2 255.255.255.252
!
interface Ethernet0/1
 description ***BACKUP LINK***
 ip address 41.1.1.2 255.255.255.252
!
interface Ethernet0/2
 description ***LAN***
 ip address 10.10.10.1 255.255.255.0
!
router eigrp 10
 network 10.1.123.0 0.0.0.255
 network 10.2.123.0 0.0.0.255
 network 10.10.10.0 0.0.0.255
!
ip route 0.0.0.0 0.0.0.0 192.0.2.1 track 1
ip route 0.0.0.0 0.0.0.0 41.1.1.1 10
!
ip sla auto discovery
ip sla 1
 icmp-echo 192.0.2.1
 frequency 10
ip sla schedule 1 life forever start-time now

Let’s break down the configuration above. I have configured two pre-shared keys for the spokes, although we could have used a wildcard PSK. I have also created two mGRE (Multipoint GRE) tunnel interfaces on the hub router – Tunnel1 (10.1.123.0/24) and Tunnel2 (10.2.123.0/24). On Tunnel1, I have lowered the delay below the default value (50,000 microseconds) – this means that routes advertised via Tunnel1 will be preferred over those advertised via Tunnel2.
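If you want to confirm the delay values in effect, you can check the tunnel interfaces on the hub. The output below is illustrative – the MTU and bandwidth figures will vary by platform and IOS version. Keep in mind that the delay interface command is entered in tens of microseconds, so delay 1000 shows up as 10,000 microseconds:

```
HUB#show interfaces Tunnel1 | include DLY
  MTU 17916 bytes, BW 100 Kbit/sec, DLY 10000 usec,
HUB#show interfaces Tunnel2 | include DLY
  MTU 17916 bytes, BW 100 Kbit/sec, DLY 50000 usec,
```

Since EIGRP includes the cumulative delay in its metric calculation, the lower delay on Tunnel1 makes routes learned over that tunnel more attractive.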

I have also configured IP SLA so that the router uses the primary link when it is available and fails over to the backup link if the primary link fails.
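When the primary link is healthy, the tracked object is up and the primary static route stays in the routing table. You can check this with show track (illustrative output; the exact format varies slightly between IOS versions):

```
HUB#show track 1
Track 1
  IP SLA 1 reachability
  Reachability is Up
    1 change, last change 00:10:23
```

If the track goes down, the primary default route (which depends on track 1) is withdrawn and the floating static route with administrative distance 10 takes over.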

Now let’s move on to the configuration on SPOKE1:

hostname SPOKE1
!
crypto isakmp policy 10
 hash md5
 authentication pre-share
crypto isakmp key cisco address 192.0.2.2
crypto isakmp key cisco address 41.1.1.2
crypto isakmp key cisco address 41.1.1.6
!
crypto ipsec transform-set TRANS_SET esp-3des esp-md5-hmac
 mode tunnel
!
crypto ipsec profile IPSEC_PROF
 set transform-set TRANS_SET
!
interface Loopback0
 description ***LAN interface***
 ip address 10.10.20.2 255.255.255.0
!
interface Tunnel1
 description ***PRIMARY DMVPN CLOUD***
 ip address 10.1.123.2 255.255.255.0
 no ip redirects
 ip nhrp authentication cisco
 ip nhrp map multicast 192.0.2.2
 ip nhrp map 10.1.123.1 192.0.2.2
 ip nhrp network-id 1
 ip nhrp nhs 10.1.123.1
 delay 1000
 tunnel source Ethernet0/0
 tunnel mode gre multipoint
 tunnel key 1
 tunnel protection ipsec profile IPSEC_PROF shared
!
interface Tunnel2
 description ***BACKUP DMVPN CLOUD***
 ip address 10.2.123.2 255.255.255.0
 no ip redirects
 ip nhrp authentication cisco
 ip nhrp map 10.2.123.1 41.1.1.2
 ip nhrp map multicast 41.1.1.2
 ip nhrp network-id 2
 ip nhrp nhs 10.2.123.1
 tunnel source Ethernet0/0
 tunnel mode gre multipoint
 tunnel key 2
 tunnel protection ipsec profile IPSEC_PROF shared
!
interface Ethernet0/0
 description ***INTERNET LINK***
 ip address 192.0.2.6 255.255.255.252
!
router eigrp 10
 network 10.1.123.0 0.0.0.255
 network 10.2.123.0 0.0.0.255
 network 10.10.20.0 0.0.0.255
!
! Default route to ISP
ip route 0.0.0.0 0.0.0.0 192.0.2.5

In this configuration, we have configured three pre-shared keys: two for the addresses of the Hub router and a third for the IP address of the other spoke.

Note: When using pre-shared key authentication, you need a PSK for every spoke with which you want to form dynamic spoke-to-spoke tunnels, unless you use a wildcard PSK.
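For example, instead of listing a key per peer, a single wildcard entry such as the one below matches any peer address. This is convenient in labs, but a per-peer key or certificate-based authentication is safer in production because a wildcard key is shared with every possible peer:

```
crypto isakmp key cisco address 0.0.0.0 0.0.0.0
```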

We have also configured two tunnel interfaces, one for each DMVPN cloud. Notice that I have also changed the delay value on the primary tunnel interface so that routes advertised via that tunnel can be preferred.

Finally, notice the shared keyword on the tunnel protection ipsec profile command. This keyword is necessary because both tunnels configured on this router use the same source interface. Without it, packets may be matched to the wrong tunnel interface after decryption. Note that when the shared keyword is used, you must configure unique tunnel keys on the tunnels (which may cause performance issues on certain devices that don’t support hardware acceleration for tunnels configured with the tunnel key command).

The configuration on SPOKE2 is very similar to the one on SPOKE1 but, for completeness’ sake, here it is:

hostname SPOKE2
!
crypto isakmp policy 10
 hash md5
 authentication pre-share
crypto isakmp key cisco address 192.0.2.2
crypto isakmp key cisco address 41.1.1.2
crypto isakmp key cisco address 192.0.2.6
!
!
crypto ipsec transform-set TRANS_SET esp-3des esp-md5-hmac
 mode tunnel
!
crypto ipsec profile IPSEC_PROF
 set transform-set TRANS_SET
!
interface Loopback0
 description ***LAN***
 ip address 10.10.30.3 255.255.255.0
!
interface Tunnel1
 description ***PRIMARY DMVPN CLOUD***
 ip address 10.1.123.3 255.255.255.0
 no ip redirects
 ip nhrp authentication cisco
 ip nhrp map multicast 192.0.2.2
 ip nhrp map 10.1.123.1 192.0.2.2
 ip nhrp network-id 1
 ip nhrp nhs 10.1.123.1
 delay 1000
 tunnel source Ethernet0/0
 tunnel mode gre multipoint
 tunnel key 1
 tunnel protection ipsec profile IPSEC_PROF shared
!
interface Tunnel2
 description ***BACKUP DMVPN CLOUD***
 ip address 10.2.123.3 255.255.255.0
 no ip redirects
 ip nhrp authentication cisco
 ip nhrp map 10.2.123.1 41.1.1.2
 ip nhrp map multicast 41.1.1.2
 ip nhrp network-id 2
 ip nhrp nhs 10.2.123.1
 tunnel source Ethernet0/0
 tunnel mode gre multipoint
 tunnel key 2
 tunnel protection ipsec profile IPSEC_PROF shared
!
interface Ethernet0/0
 description ***INTERNET LINK***
 ip address 41.1.1.6 255.255.255.252
!
router eigrp 10
 network 10.1.123.0 0.0.0.255
 network 10.2.123.0 0.0.0.255
 network 10.10.30.0 0.0.0.255
!
! Default route to ISP
ip route 0.0.0.0 0.0.0.0 41.1.1.5

Let’s now look at some verification commands. When both the primary and backup links are up, both tunnels will also be up and EIGRP adjacencies will be formed across both tunnels:
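On the hub, show ip eigrp neighbors should list each spoke twice – once per tunnel. The output below is illustrative and trimmed; your hold times, uptimes, and sequence numbers will differ:

```
HUB#show ip eigrp neighbors
EIGRP-IPv4 Neighbors for AS(10)
H   Address                 Interface  Hold Uptime   SRTT   RTO  Q  Seq
3   10.2.123.3              Tu2          12 00:04:01   10   100  0  9
2   10.2.123.2              Tu2          13 00:04:05   11   100  0  8
1   10.1.123.3              Tu1          11 00:04:10    9   100  0  7
0   10.1.123.2              Tu1          14 00:04:15   10   100  0  6
```

You can also run show dmvpn on the hub to confirm that each tunnel interface has a dynamic NHRP entry for every spoke.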

However, because of the adjusted delay values, routes advertised over Tunnel1 will be preferred even though both paths will be in the EIGRP topology table:
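From a spoke, you can verify that the hub LAN is reached via the primary cloud. Again, this output is illustrative – the metric value depends on your bandwidth and delay settings:

```
SPOKE1#show ip route 10.10.10.0
Routing entry for 10.10.10.0/24
  Known via "eigrp 10", distance 90, metric 26885120, type internal
  * 10.1.123.1, from 10.1.123.1, 00:05:12 ago, via Tunnel1
```

The route via Tunnel2 remains in the EIGRP topology table as a feasible path, ready to be installed if the Tunnel1 path is lost.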

To test redundancy, we can ping continuously between the HQ LAN and LAN on one of the spoke routers. While running the ping, we will shut down the Hub router’s link to the primary ISP and see if the ping continues.

In the output, you can see the point at which the primary link on the hub router was shut down: the EIGRP adjacencies went down and the pings failed. After the EIGRP adjacencies were reestablished over the backup cloud, the pings succeeded again.
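After the failover, the tracked object on the hub goes down, the primary default route is withdrawn, and the floating static route takes over. This can be confirmed on the hub with the following commands (illustrative output):

```
HUB#show track 1
Track 1
  IP SLA 1 reachability
  Reachability is Down
    2 changes, last change 00:00:15
HUB#show ip route static
S*    0.0.0.0/0 [10/0] via 41.1.1.1
```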

This configuration may not be ideal in all cases and we will look at some of the issues with this configuration in the next article.

Summary

In this article, we have discussed how to configure DMVPN where the Hub router has dual ISP links but the remote sites have only one ISP link. We configured multiple mGRE tunnel interfaces on both the hub and spoke routers and also used IP SLA on the hub router to detect when to switch to the backup link. Finally, we adjusted the delay values on the primary DMVPN cloud so that EIGRP routes advertised via that cloud will be preferred over those advertised via the secondary DMVPN cloud.

In the next article, we will examine some issues with this design and consider an alternative solution. I hope you have found this article insightful.
