In the first article in this series, we started looking at a scenario for redundancy in DMVPN. In that article, we used the diagram shown below:

We configured two different DMVPN clouds and used multiple mGRE tunnels on both the Hub and the spokes.

Note: We could have used point-to-point GRE tunnels on the spokes, but this would not allow dynamic spoke-to-spoke tunnels.

We also configured IP SLA on the Hub router so that it could detect when to switch over to the backup ISP link. Finally, we used EIGRP to control which path should be preferred when both DMVPN clouds are up.


At the end of that article, I mentioned that there are some issues with the configuration, and we will now discuss those issues. The first has to do with our EIGRP configuration: because we are using the same EIGRP process to advertise both tunnel subnets, there is the possibility that a router will still learn, via EIGRP, about a tunnel subnet whose interface has gone down.

For example, when both links on the Hub router are operational, look at the EIGRP routes in the IP routing table of that router:
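The screenshot from the original article is not reproduced here, but the output would be along these lines (the spoke LAN subnets 10.20.20.0/24 and 10.30.30.0/24 are hypothetical placeholders for whatever the spokes actually advertise):

Hub#show ip route eigrp
D        10.20.20.0/24 [90/25881600] via 10.1.123.2, 00:10:12, Tunnel1
D        10.30.30.0/24 [90/25881600] via 10.1.123.3, 00:10:12, Tunnel1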

Also notice that the Tunnel1 subnet (10.1.123.0/24) is seen as a connected route:
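Again, a trimmed illustration based on the addressing used in this series:

Hub#show ip route connected
C        10.1.123.0/24 is directly connected, Tunnel1
C        10.2.123.0/24 is directly connected, Tunnel2
C        10.10.10.0/24 is directly connected, Ethernet0/2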

Now, if the primary interface on the Hub router goes down, the Tunnel1 interface will also go down, meaning it will no longer appear as a connected route in the routing table. Because of this, the Hub router will now install 10.1.123.0/24 as an EIGRP route advertised by the spokes.

There are several ways around this issue, such as filtering the routes exchanged with different EIGRP neighbors (a sketch of this follows the configuration below), or simply using a separate EIGRP process for each DMVPN cloud. For example, on the hub router, we could have a configuration like this:

no router eigrp 10
router eigrp 1
 network 10.1.123.0 0.0.0.255
 network 10.10.10.0 0.0.0.255
router eigrp 2
 network 10.2.123.0 0.0.0.255
 network 10.10.10.0 0.0.0.255
!
interface Tunnel1
 no ip next-hop-self eigrp 1
 no ip split-horizon eigrp 1
interface Tunnel2
 no ip next-hop-self eigrp 2
 no ip split-horizon eigrp 2

With this configuration, routes advertised via the primary DMVPN cloud will still be preferred over the ones advertised via the backup cloud. Also, the 10.1.123.0/24 route will not be learnt via Tunnel2 and the 10.2.123.0/24 route will not be learnt via Tunnel1.
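If you would rather keep a single process and use the filtering approach instead, a minimal sketch could look like this (it assumes the single EIGRP process 10 from the original article; NO_TUNNEL1 and NO_TUNNEL2 are hypothetical prefix-list names):

ip prefix-list NO_TUNNEL1 seq 5 deny 10.1.123.0/24
ip prefix-list NO_TUNNEL1 seq 10 permit 0.0.0.0/0 le 32
ip prefix-list NO_TUNNEL2 seq 5 deny 10.2.123.0/24
ip prefix-list NO_TUNNEL2 seq 10 permit 0.0.0.0/0 le 32
!
router eigrp 10
 ! Never learn the Tunnel1 subnet over Tunnel2, and vice versa
 distribute-list prefix NO_TUNNEL1 in Tunnel2
 distribute-list prefix NO_TUNNEL2 in Tunnel1

With these filters in place, the hub can never learn 10.1.123.0/24 via Tunnel2 (or 10.2.123.0/24 via Tunnel1), so the problem described above cannot occur.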

Another issue with our configuration from the previous article has to do with the default routes on the hub router. We are using IP SLA to track the primary ISP's IP address; if something happens inside the ISP's network that does not affect the tracked IP address, the hub will not fail over to the backup link and routing in the DMVPN clouds will fail.
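For reference, the tracking configuration from the previous article looked roughly like this (reconstructed from the "no" commands in the configuration below; the icmp-echo target, i.e. the primary ISP's next-hop address, is an assumption):

ip sla 1
 icmp-echo 192.0.2.1 source-interface Ethernet0/0
ip sla schedule 1 life forever start-time now
track 1 ip sla 1 reachability
!
! The primary default route is withdrawn when the probe fails;
! the floating static route (AD 10) then takes over
ip route 0.0.0.0 0.0.0.0 192.0.2.1 track 1
ip route 0.0.0.0 0.0.0.0 41.1.1.1 10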

To address this issue, we can configure VRF-Lite (Virtual Routing and Forwarding Lite) on the hub router such that we put the backup ISP link in a separate VRF from the primary ISP link. In that case, we can configure two default routes on the router: one via the primary ISP and the other via the backup ISP link (in its own VRF).

For this configuration, we have two options:

  1. We can create a separate VRF for each ISP (two VRFs in total).
  2. We can create only one VRF, for the backup ISP, and leave the primary ISP in the default (global) VRF.

Which option you choose will depend on what you want to achieve and on whether you are designing the network from scratch (two VRFs) or altering an existing design (one VRF).

The configuration on the spokes will not change, but the configuration on the hub router is as follows:

no track 1 ip sla 1 reachability
no ip sla schedule 1 life forever start-time now
no ip sla 1
!
ip vrf ISP2
 description ***VRF FOR ISP2***
 rd 1:2
!
interface Ethernet0/0
 description ***PRIMARY LINK***
 ip address 192.0.2.2 255.255.255.252
!
interface Ethernet0/1
 description ***BACKUP LINK***
 ip vrf forwarding ISP2
 ip address 41.1.1.2 255.255.255.252
!
interface Ethernet0/2
 description ***LAN***
 ip address 10.10.10.1 255.255.255.0
!
crypto isakmp policy 10
 hash md5
 authentication pre-share
! Enable DPD
crypto isakmp keepalive 10 periodic
! PSKs for Global VRF. You can use wildcard PSK (0.0.0.0)
crypto isakmp key cisco address 192.0.2.6
crypto isakmp key cisco address 41.1.1.6
!
! PSKs for ISP2 VRF. You can use wildcard PSK (0.0.0.0)
crypto keyring ISP2_PSK vrf ISP2
  pre-shared-key address 192.0.2.6 key cisco
  pre-shared-key address 41.1.1.6 key cisco
!
crypto ipsec transform-set TRANS_SET esp-3des esp-md5-hmac
 mode tunnel
!
crypto ipsec profile IPSEC_PROF
 set transform-set TRANS_SET
!
interface Tunnel1
 description ***PRIMARY DMVPN CLOUD***
 ip address 10.1.123.1 255.255.255.0
 no ip redirects
 no ip next-hop-self eigrp 1
 no ip split-horizon eigrp 1
 ip nhrp authentication cisco
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 delay 1000
 tunnel source Ethernet0/0
 tunnel mode gre multipoint
 tunnel key 1
 tunnel protection ipsec profile IPSEC_PROF
!
interface Tunnel2
 description ***BACKUP DMVPN CLOUD***
 ip address 10.2.123.1 255.255.255.0
 no ip redirects
 no ip next-hop-self eigrp 2
 no ip split-horizon eigrp 2
 ip nhrp authentication cisco
 ip nhrp map multicast dynamic
 ip nhrp network-id 2
 tunnel source Ethernet0/1
 tunnel mode gre multipoint
 tunnel key 2
 tunnel vrf ISP2
 tunnel protection ipsec profile IPSEC_PROF
!
router eigrp 1
 network 10.1.123.0 0.0.0.255
 network 10.10.10.0 0.0.0.255
router eigrp 2
 network 10.2.123.0 0.0.0.255
 network 10.10.10.0 0.0.0.255
!
no ip route 0.0.0.0 0.0.0.0 192.0.2.1 track 1
no ip route 0.0.0.0 0.0.0.0 41.1.1.1 10
ip route 0.0.0.0 0.0.0.0 192.0.2.1
ip route vrf ISP2 0.0.0.0 0.0.0.0 41.1.1.1
!

Let me quickly talk about the changes to the configuration. First, I removed the IP SLA configuration, since we no longer rely on tracking. I then created a VRF called ISP2 and added the backup ISP interface (Ethernet0/1) to that VRF. The crypto commands are largely unchanged, except that we created PSKs (in a keyring) for the spoke routers in the ISP2 VRF and also enabled Dead Peer Detection (DPD).

The configuration on Tunnel1 has not changed, but on Tunnel2 we have added the tunnel vrf ISP2 command, which tells the router to look up the tunnel destination for the encapsulated GRE packets in the ISP2 VRF. Finally, we removed the static routes we had before and configured one default route in the global VRF and another in the ISP2 VRF.
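Once this is in place, you can verify that both tunnels and their crypto sessions come up with the usual commands, for example:

Hub#show dmvpn
Hub#show crypto isakmp sa
Hub#show ip route vrf ISP2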

When both ISP links are up, the two DMVPN clouds will also be up, but routes via Tunnel1 will be preferred (because of the lower delay configured on Tunnel1):
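As before, the screenshot is omitted; an illustrative view (with the same hypothetical spoke LAN subnets) would be:

Hub#show ip route eigrp
D        10.20.20.0/24 [90/25881600] via 10.1.123.2, 00:02:11, Tunnel1
D        10.30.30.0/24 [90/25881600] via 10.1.123.3, 00:02:11, Tunnel1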

When the primary ISP link goes down, the routes advertised via the backup DMVPN cloud are installed into the global routing table:
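Illustratively, the same prefixes now point at Tunnel2 (with a higher metric, since Tunnel2 keeps the default tunnel delay):

Hub#show ip route eigrp
D        10.20.20.0/24 [90/26905600] via 10.2.123.2, 00:00:15, Tunnel2
D        10.30.30.0/24 [90/26905600] via 10.2.123.3, 00:00:15, Tunnel2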

When the primary link comes back up, the routes via Tunnel1 are preferred again:

Summary

This brings us to the end of this article, in which we have considered improvements to the DMVPN design of a single hub with dual ISP links. We configured a Front-door VRF (fVRF) so that we can have two default routes, one via each ISP.

In the next article, we will look at another DMVPN scenario. I hope you have found this article insightful.
