In this article, we will discuss Bidirectional Forwarding Detection (BFD) on IOS. Specifically, we will cover:

  • Introductory concepts on BFD
  • BFD configuration for static routes
  • BFD configuration for routing protocols (OSPF in particular)

Bidirectional Forwarding Detection provides a method of quickly detecting failures in the forwarding path between two neighbouring routers. BFD is configured at both the interface level and the routing protocol level, and it can also be configured for static routes.

Before a BFD session is created, BFD needs to be configured on both peers. Once BFD is enabled at the interface level and at the appropriate routing protocol level (or for the static route), the BFD session is created, the timers are negotiated, and the BFD peers send BFD control packets to each other at the negotiated interval.

BFD operates independently of the routing protocol, interface type, or encapsulation used, and it provides a mechanism for fast network convergence in case of failure. Failure detection can be sub-second.

Let’s look at a use case where an OSPF adjacency is established through a switch that is transparent to OSPF. If that switch fails, OSPF has to wait for its timers to expire before declaring the OSPF adjacency down.

With BFD, once the local BFD process notices that the BFD peer is down, it notifies the local OSPF process, which then tears down the adjacency, speeding up network convergence.

Essentially, routing protocols rely on BFD for fast failure detection and, in turn, faster convergence.

When setting the BFD parameters, you need to know a few things:

  • interval – the interval at which BFD packets will be sent to BFD peers
  • min_rx – the interval at which BFD packets are expected from BFD peers
  • multiplier – the number of consecutive BFD packets that must be lost before the BFD session is declared down
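As a rule of thumb, the worst-case detection time is roughly the negotiated interval multiplied by the multiplier. For example, a hypothetical configuration (values chosen purely for illustration) would look like this:

interface GigabitEthernet0/0
 bfd interval 300 min_rx 300 multiplier 3
! 300 ms interval x 3 missed packets = roughly 900 ms detection time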

Let’s see how BFD is configured for static routes. This is our topology:

We will configure a BFD session for the static route on R1 towards the Loopback IP address of R2 and vice versa. This is the configuration on R1. A similar configuration has to be done on R2 as well.

interface GigabitEthernet0/0
ip address 10.10.12.1 255.255.255.0
bfd interval 500 min_rx 500 multiplier 3
!
ip route static bfd GigabitEthernet0/0 10.10.12.2
ip route 1.1.1.2 255.255.255.255 GigabitEthernet0/0 10.10.12.2
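For completeness, the mirrored configuration on R2 would look like this (assuming, based on the ping test later in the article, that R2 uses 10.10.12.2 on Gi0/0 and that R1’s Loopback address is 1.1.1.1):

interface GigabitEthernet0/0
ip address 10.10.12.2 255.255.255.0
bfd interval 500 min_rx 500 multiplier 3
!
ip route static bfd GigabitEthernet0/0 10.10.12.1
ip route 1.1.1.1 255.255.255.255 GigabitEthernet0/0 10.10.12.1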

This is the verification that the BFD session is up on R1:

R1#show bfd neighbors 

IPv4 Sessions
NeighAddr                 LD/RD         RH/RS     State     Int
10.10.12.2                 2/7          Up        Up        Gi0/0
R1#

Because the BFD session is up, the static route shows up in the routing table (output cut to the relevant entry):

      1.0.0.0/32 is subnetted, 2 subnets
S        1.1.1.2 [1/0] via 10.10.12.2, GigabitEthernet0/0
R1#

So how do we test that this is working? When I removed the IP address from the interface of R2, the BFD session went down and the static route was removed from the routing table:

R1#
*May 21 02:39:17.652: %BFDFSM-6-BFD_SESS_DOWN: BFD-SYSLOG: BFD session ld:2 handle:1,is going Down Reason: ECHO FAILURE
R1#show bfd neighbors 

IPv4 Sessions
NeighAddr                     LD/RD         RH/RS     State     Int
10.10.12.2                     2/0          Down      Down      Gi0/0
R1#

Let’s see how BFD can speed up the convergence of a routing protocol. In this case, we will use OSPF.

This is our topology:

In this scenario, R1, R2 and R3 are all part of OSPF area 0. We will simulate a failure at L2_SW in which the interface on R1 does not go down; OSPF would then have to wait for the dead interval (40 seconds by default) to expire before switching over to the alternate path through R3.
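The baseline OSPF configuration is assumed to already be in place. On R1, it could look like this (the network statements are assumptions derived from the addressing used in this article):

router ospf 1
 network 1.1.1.1 0.0.0.0 area 0
 network 10.10.12.0 0.0.0.255 area 0
 network 10.10.13.0 0.0.0.255 area 0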

As before, the BFD parameters have to be first configured on the interface:

R1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R1(config)#int gi0/0
R1(config-if)#bfd interval 500 min_rx 500 multiplier 3
R1(config-if)#
*May 20 20:02:47.907: %BFD-6-BFD_IF_CONFIGURE: BFD-SYSLOG: bfd config apply, idb:GigabitEthernet0/0 
R1(config-if)#exit
R1(config)#

After that, we need to configure under OSPF which interfaces will use BFD. In our case, we will specify that all interfaces whose subnets are advertised in OSPF will use BFD:

R1(config)#
R1(config)#router ospf 1
R1(config-router)#bfd all-interfaces 
R1(config-router)#exit
R1(config)#
*May 20 20:03:06.523: %BFD-6-BFD_SESS_CREATED: BFD-SYSLOG: bfd_session_created, neigh 10.10.12.2 proc:OSPF, idb:GigabitEthernet0/0 handle:1 act
R1(config)#

Let’s check the status of BFD sessions:

R1#show bfd summary     

                    Session          Up          Down

Total                     1           0             1
R1#

We have one session and it’s down. To find more information about the session, you can use this command:

R1#show bfd neighbors         

IPv4 Sessions
NeighAddr                 LD/RD         RH/RS     State     Int
10.10.12.2                 1/1          AdminDown Down      Gi0/0
R1#

A similar configuration will be done on R2 and R3. Let’s see what is happening on R1 after BFD was configured on R2. We can see that the BFD session came up:

R1#
*May 20 20:07:43.930: %BFDFSM-6-BFD_SESS_UP: BFD-SYSLOG: BFD session ld:1 handle:1 is going UP
R1#

The show bfd neighbors output confirms this:

R1#show bfd neighbors 

IPv4 Sessions
NeighAddr                 LD/RD         RH/RS     State     Int
10.10.12.2                 1/1          Up        Up        Gi0/0
R1#

After all the routers from the topology were configured in a similar way, each one of them should now have two BFD sessions, one for each interface whose subnet was advertised in OSPF:

R1#show bfd neighbors         

IPv4 Sessions
NeighAddr                 LD/RD         RH/RS     State     Int
10.10.12.2                 1/1          Up        Up        Gi0/0
10.10.13.3                 2/2          Up        Up        Gi0/1
R1#

And the OSPF adjacencies:

R1#show ip ospf neighbor 

Neighbor ID     Pri   State        Dead Time   Address         Interface
10.10.23.3        1   FULL/BDR     00:00:32    10.10.13.3      GigabitEthernet0/1
1.1.1.2           1   FULL/BDR     00:00:35    10.10.12.2      GigabitEthernet0/0
R1#

For the next test, we will need to know how R1 is reaching the R2 Loopback IP address, 1.1.1.2:

R1#show ip route 1.1.1.2
Routing entry for 1.1.1.2/32
  Known via "ospf 1", distance 110, metric 11, type intra area
  Last update from 10.10.12.2 on GigabitEthernet0/0, 00:46:59 ago
  Routing Descriptor Blocks:
  * 10.10.12.2, from 1.1.1.2, 00:46:59 ago, via GigabitEthernet0/0
      Route metric is 11, traffic share count is 1
R1#

Now let’s simulate a failure at L2_SW that will not cause R1 to bring down its interface and see how fast OSPF will re-converge.

I started a ping from R1 to R2 just before the interface on L2_SW towards R1 came down:

R1#ping 1.1.1.2 source 1.1.1.1 repeat 1000
Type escape sequence to abort.
Sending 1000, 100-byte ICMP Echos to 1.1.1.2, timeout is 2 seconds:
Packet sent with a source address of 1.1.1.1 
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!...!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!
Success rate is 99 percent (997/1000), round-trip min/avg/max = 1/4/11 ms
R1#

The logs show what happened while the ICMP packets were lost. As you can see, the BFD session went down first, after the 1.5-second timeout (3 intervals of 500 ms), which then led to the OSPF adjacency being brought down:

R1#
*May 20 20:51:55.492: %BFDFSM-6-BFD_SESS_DOWN: BFD-SYSLOG: BFD session ld:1 handle:1,is going Down Reason: ECHO FAILURE
*May 20 20:51:55.493: %BFD-6-BFD_SESS_DESTROYED: BFD-SYSLOG: bfd_session_destroyed,  ld:1 neigh proc:OSPF, handle:1 act
R1#
*May 20 20:51:55.493: %OSPF-5-ADJCHG: Process 1, Nbr 1.1.1.2 on GigabitEthernet0/0 from FULL to DOWN, Neighbor Down: BFD node down
R1#

If BFD hadn’t been configured, you would see different results for ICMP traffic:

R1#ping 1.1.1.2 source 1.1.1.1 repeat 1000 
Type escape sequence to abort.
Sending 1000, 100-byte ICMP Echos to 1.1.1.2, timeout is 2 seconds:
Packet sent with a source address of 1.1.1.1 
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!......................................
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!
Success rate is 96 percent (961/1000), round-trip min/avg/max = 1/4/9 ms
R1#

This time, the packets were lost for about 40 seconds, which matches the default OSPF dead interval.

There are a few other BFD options you can configure. One of them is the BFD template: you define a set of BFD parameters once and then apply the template to specific interfaces, so that you don’t have to repeat the same parameters every time.

For our test, we will configure different timers and authentication for the BFD sessions between R1 and R2 and between R1 and R3.

First, let’s look at the current timers for both BFD sessions on R1. The output of show bfd neighbors details is cut so that only the interval and the multiplier are shown. This is the session with R2:

NeighAddr                    LD/RD         RH/RS     State     Int
10.10.12.2                    5/5          Up        Up        Gi0/0
Session state is UP and using echo function with 500 ms interval.
OurAddr: 10.10.12.1     
MinTxInt: 1000000, MinRxInt: 1000000, Multiplier: 3

For the session with R3:

NeighAddr                    LD/RD         RH/RS     State     Int
10.10.13.3                    2/2          Up        Up        Gi0/1
Session state is UP and using echo function with 500 ms interval.
OurAddr: 10.10.13.1     
MinTxInt: 1000000, MinRxInt: 1000000, Multiplier: 3

Let’s configure a BFD template for each of these two neighbours with different timers and then add authentication.

This would be the BFD template and key chain for R2 on R1. The same is configured on R2:

bfd-template single-hop BFD_R1-R2
 interval min-tx 500 min-rx 500 multiplier 3
 authentication md5 keychain R1-R2
 echo

key chain R1-R2
 key 10
  key-string R1-R2

The template is then applied under the interface through which R1 is connected to R2.

And this is the BFD template and key chain for R3, configured on R1. The same is configured on R3:

bfd-template single-hop BFD_R1-R3
 interval min-tx 700 min-rx 700 multiplier 4
 authentication md5 keychain R1-R3
 echo

key chain R1-R3
 key 10
  key-string R1-R3

The templates are next applied under the interfaces towards R2 and R3:

R1(config)#interface gi0/0
R1(config-if)#bfd template BFD_R1-R2
R1(config-if)#interface gi0/1
R1(config-if)#bfd template BFD_R1-R3
R1(config-if)#end
R1#

A similar procedure/configuration is applied on both R2 and R3 on their interfaces towards R1.
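On R3, for example, the configuration would mirror the template and key chain defined on R1 (the interface name on R3 is an assumption based on the topology):

bfd-template single-hop BFD_R1-R3
 interval min-tx 700 min-rx 700 multiplier 4
 authentication md5 keychain R1-R3
 echo
!
key chain R1-R3
 key 10
  key-string R1-R3
!
interface GigabitEthernet0/1
 bfd template BFD_R1-R3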

Now, let’s check the status of the BFD sessions and confirm that the timers are different and that we are using authentication. Again, the output is cut to reflect only the changes. This is the BFD session with R2:

NeighAddr                    LD/RD         RH/RS     State     Int
10.10.12.2                    1/6          Up        Up        Gi0/0
Session state is UP and using echo function with 500 ms interval.
Session Host: Software
OurAddr: 10.10.12.1     
MinTxInt: 1000000, MinRxInt: 1000000, Multiplier: 3
Received MinRxInt: 1000000, Received Multiplier: 3
Template: BFD_R1-R2
Authentication(Type/Keychain): md5/R1-R2
 last_tx_auth_seq: 12  last_rx_auth_seq 10

And this is the BFD session with R3:

NeighAddr                    LD/RD         RH/RS     State     Int
10.10.13.3                    6/3          Up        Up        Gi0/1
Session state is UP and using echo function with 700 ms interval.
Session Host: Software
OurAddr: 10.10.13.1     
MinTxInt: 1000000, MinRxInt: 1000000, Multiplier: 4
Received MinRxInt: 1000000, Received Multiplier: 4
Template: BFD_R1-R3
Authentication(Type/Keychain): md5/R1-R3
 last_tx_auth_seq: 12  last_rx_auth_seq 10

How can you detect whether authentication is preventing the BFD session from coming up? I configured a different key on R2, and the BFD session between R1 and R2 went down:

R1#show bfd neighbors 

IPv4 Sessions
NeighAddr                    LD/RD         RH/RS     State     Int
10.10.12.2                    1/0          Down      Down      Gi0/0
10.10.13.3                    6/3          Up        Up        Gi0/1
R1#

There is a counter that increments to show that authentication is failing (in the output below, it has a value of 55):

R1#show bfd drops       
BFD Drop Statistics
			IPV4 	IPV6	IPV4-M	IPV6-M	MPLS_PW MPLS_TP_LSP
Invalid TTL 		0	 0	 0	 0	 0	 0
BFD Not Configured 	0	 0	 0	 0	 0	 0
No BFD Adjacency 	0	 0	 0	 0	 0	 0
Invalid Header Bits 	0	 0	 0	 0	 0	 0
Invalid Discriminator 	0	 0	 0	 0	 0	 0
Session AdminDown 	0	 0	 0	 0	 0	 0
Authen invalid BFD ver 	0	 0	 0	 0	 0	 0
Authen invalid len 	0	 0	 0	 0	 0	 0
Authen invalid seq 	0	 0	 0	 0	 0	 0
Authen failed       	55	 0	 0	 0	 0	 0
R1#

And that’s how to configure most of the BFD features on Cisco IOS. Of course, there are details beyond the scope of this article, related to advanced topics or to the specifics of the Cisco platforms on which BFD can be configured.

You now know what BFD is and how to configure it for static routes and routing protocols. Although we only used OSPF, the configuration is similar for other protocols. This should be enough to understand how BFD operates and how to configure it.
