Hello and welcome to this article in our network reliability series. In the last article, we examined active/standby and active/active redundancy in Layer 3 routing protocols. You can review the post here. In this article, we will dive deeper into active redundancy by examining how traffic is routed across multiple paths on a Cisco router. We will take a deep dive into the command line, so I would encourage you to set up your lab gear or simulation software and practice for yourself! So let’s get started.

Equal Cost Load Balancing

In the previous post in this series, we established that for two routes to be installed in the routing table, they MUST have the same administrative distance (be from the same routing protocol) and have the same metric. So how does a router send traffic across two equal links? It shares the load in a round-robin fashion. Let us take the example below:
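
For anyone building this lab along with me, the diagram is three routers connected in a triangle. The addressing (with R2’s 192.168.12.0/24 address assumed to be .2, following the same convention as the other links) is:

    R1 -- R2: 192.168.12.0/24 (R1 = 192.168.12.1, R2 = 192.168.12.2)
    R1 -- R3: 192.168.13.0/24 (R1 = 192.168.13.1, R3 F0/0 = 192.168.13.3)
    R2 -- R3: 192.168.23.0/24 (R2 = 192.168.23.2, R3 F0/1 = 192.168.23.3)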

In the diagram above, R3 has two paths to the 192.168.12.0/24 network, one through R1 (192.168.13.1) and the other through R2 (192.168.23.2). So let’s configure static routing on R3, with one route through each path.
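
The configuration would look something like this, using the addressing above:

    R3(config)# ip route 192.168.12.0 255.255.255.0 192.168.13.1
    R3(config)# ip route 192.168.12.0 255.255.255.0 192.168.23.2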

Now, let us show the routing table:
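
Trimmed to the relevant lines, the static entries should look roughly like this:

    R3# show ip route static
    S    192.168.12.0/24 [1/0] via 192.168.13.1
                         [1/0] via 192.168.23.2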

Here, we can see that the network 192.168.12.0/24 has two routes (with the same administrative distance and metric) through the two different paths. To dig into this further, we can use the show ip route <network> command.
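
On my lab router, the output looks something like this (your counters and the order of the next hops may differ):

    R3# show ip route 192.168.12.0
    Routing entry for 192.168.12.0/24
      Known via "static", distance 1, metric 0
      Routing Descriptor Blocks:
      * 192.168.23.2
          Route metric is 0, traffic share count is 1
        192.168.13.1
          Route metric is 0, traffic share count is 1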

From the output, we can see that the network 192.168.12.0 is reachable through the two static routes. One important piece of information in this output is the traffic share count. It shows that traffic is sent across both links in the ratio 1:1. This means that, in theory, half of the traffic is sent through 192.168.23.2 and the other half is sent through 192.168.13.1. There are two kinds of load sharing on Cisco routers:

  1. Per-packet load balancing: in this kind of load sharing, each packet is treated in isolation and load-balanced across the links. So if we ping 192.168.12.1 (send five 100-byte packets to the destination), each packet of the ping is treated independently; the first packet is sent via the first link and the second via the second link, alternating until the last packet. Aside from being processor-intensive, per-packet load balancing can lead to packets arriving out of order when the links have variable delays, and this can slow down the network rather than speed it up. Basically, this means you might reduce the reliability of the network rather than increase it.
  2. Per-destination load balancing: with per-destination load balancing, the same link is used to send packets that are going to the same destination. In this case, traffic destined for 192.168.12.1 would consistently be sent through the same interface, and traffic destined for the next IP address would be sent via the next interface. The issue with this kind of load balancing is that if there is a lot of traffic destined for a particular destination, it can lead to unequal utilization of the links. But it is more reliable and less processor-intensive.

So what determines the kind of load balancing that is employed on the router? The load balancing method depends on the kind of packet switching that is used on the router. There are three kinds of packet switching; we will cover the first two here and the third, Cisco Express Forwarding, in its own section below:

  1. Process Switching: with process switching, every single packet is switched by the router independently. This means that for every packet, the routing table is consulted to determine the path, and with multiple best paths, the least used interface is chosen to forward the next packet. This kind of switching is very processor-intensive, but it can be used to ensure equal load balancing across the links.
  2. Fast Switching: in this case, when a packet is forwarded to a destination, a cache entry is created for that destination, and this saves the router from looking up the routing table for subsequent forwarding decisions to that destination. If the cache entry for a destination expires (or if there was no entry in the first place), the router looks up the routing table for the first packet going to that destination and caches the result. Since the entry is cached, the same interface is used for all packets destined for the same destination (per-destination load balancing).

By default, interfaces that support fast switching have it enabled. This means that, by default, these interfaces do per-destination load balancing. To get per-packet load balancing, you can disable fast switching (that is, enable process switching) using the command:

no ip route-cache
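
Note that this is an interface-level command, so it has to be applied under each interface that should be process switched; for example:

    R3(config)# interface FastEthernet0/0
    R3(config-if)# no ip route-cache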

Note: technically, you can have either per-destination or per-packet load balancing with process switching, but you can only have per-destination load balancing with fast switching (for the reasons explained earlier).

Cisco Express Forwarding

In newer Cisco routers, there is a third kind of packet switching called Cisco Express Forwarding (CEF). With CEF, there are two major improvements:

  1. The routing table is copied (cached) to a new table called the Forwarding Information Base (FIB). Basically, lookups are now faster.
  2. An adjacency table is created. The adjacency table contains the Layer 2 information (MAC addresses, in the case of Ethernet) of the next hops. This speeds up switching because the ARP process (looking up the corresponding MAC address for an IP address) can be skipped, and packets can be switched directly out of the right interface based on the information in the FIB and adjacency table.

In simple terms, CEF gives you the flexibility of process switching (you can choose per-packet or per-destination load balancing) and, at the same time, improves on the speed of fast switching (with its FIB and adjacency tables).

So let’s see these concepts on the command line, shall we?

First, let us try to ping in the default state.
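
The ping output should look something like this (round-trip times will vary):

    R3# ping 192.168.12.1

    Type escape sequence to abort.
    Sending 5, 100-byte ICMP Echos to 192.168.12.1, timeout is 2 seconds:
    !!!!!
    Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms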

Here we try to ping 192.168.12.1 and we can see that the pings are all successful. Let us do a “sh ip route” for 192.168.12.1:
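
Again, a representative output (the asterisk may sit on either next hop when you run this):

    R3# show ip route 192.168.12.1
    Routing entry for 192.168.12.0/24
      Known via "static", distance 1, metric 0
      Routing Descriptor Blocks:
        192.168.23.2
          Route metric is 0, traffic share count is 1
      * 192.168.13.1
          Route metric is 0, traffic share count is 1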

We see that the next hop to be used (marked with a *) is 192.168.13.1 (the link through F0/0). Now, let us see the FIB and adjacency table.
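
The commands for this are show ip cef and show adjacency; trimmed to the relevant entries, the output looks roughly like this:

    R3# show ip cef 192.168.12.0
    192.168.12.0/24, version 14, per-destination sharing
    0 packets, 0 bytes
      via 192.168.13.1, FastEthernet0/0, 0 dependencies
        next hop 192.168.13.1, FastEthernet0/0
        valid adjacency
      via 192.168.23.2, FastEthernet0/1, 0 dependencies
        next hop 192.168.23.2, FastEthernet0/1
        valid adjacency

    R3# show adjacency
    Protocol Interface                 Address
    IP       FastEthernet0/0           192.168.13.1(7)
    IP       FastEthernet0/1           192.168.23.2(7)

(show adjacency detail would also display the pre-built Layer 2 header, including the next hop’s MAC address, for each entry.)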

We can see that 192.168.12.0 has the two next hops (from the routing table) with their outgoing interfaces, and that each next hop has a corresponding Layer 2 adjacency.

Now, let’s see what happens when we try to use per-packet load balancing. We can configure per-packet load balancing by enabling process switching. To do this, we disable CEF and then disable fast switching using “no ip route-cache”:
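
On R3, that would look something like this:

    R3(config)# no ip cef
    R3(config)# interface FastEthernet0/0
    R3(config-if)# no ip route-cache
    R3(config-if)# exit
    R3(config)# interface FastEthernet0/1
    R3(config-if)# no ip route-cache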

Now let us ping 192.168.12.1 and see what happens.
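
Depending on which link happens to carry the first packet, you will see something like:

    R3# ping 192.168.12.1

    Type escape sequence to abort.
    Sending 5, 100-byte ICMP Echos to 192.168.12.1, timeout is 2 seconds:
    !.!.!
    Success rate is 60 percent (3/5), round-trip min/avg/max = 1/2/4 ms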

It looks like every other packet is getting a reply, and that is weird. First, let’s check the status of the interface:
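
The switching-related lines of show ip interface tell the story (output trimmed):

    R3# show ip interface FastEthernet0/0 | include switching
      IP fast switching is disabled
      IP Flow switching is disabled
      IP CEF switching is disabled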

We can see that fast switching and CEF are disabled. So with per-packet load balancing, we are sending one packet through F0/0 (directly to R1) and the next through F0/1 (through R2). Ordinarily, this should not be a problem. But the interface that is selected determines the source IP address of the ICMP packet.

So let’s go over to R1 to see which pings are being received; we can turn on ICMP debugging there:
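
With debug ip icmp enabled on R1, the messages look something like this (timestamps will differ):

    R1# debug ip icmp
    ICMP packet debugging is on
    R1#
    *Mar  1 00:12:34.567: ICMP: echo reply sent, src 192.168.12.1, dst 192.168.13.3
    *Mar  1 00:12:36.571: ICMP: echo reply sent, src 192.168.12.1, dst 192.168.23.3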

We can see echo replies being sent to both 192.168.13.3 (R3’s F0/0 address) and 192.168.23.3 (R3’s F0/1 address). And that is the problem: R1 does not have a route to 192.168.23.3, so the replies to that address never make it back to R3.

So to fix this issue, we need to create a route for that subnet (192.168.23.0/24) on R1. We can either point the route through R2 or directly to R3, and for the sake of redundancy, we will do both.
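
That looks like this (again assuming R2’s 192.168.12.0/24 address is .2):

    R1(config)# ip route 192.168.23.0 255.255.255.0 192.168.12.2
    R1(config)# ip route 192.168.23.0 255.255.255.0 192.168.13.3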

Now what happens when we try to ping again on R3?
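
This time, all five packets should come back:

    R3# ping 192.168.12.1

    Type escape sequence to abort.
    Sending 5, 100-byte ICMP Echos to 192.168.12.1, timeout is 2 seconds:
    !!!!!
    Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms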

Success! And this is not because all the traffic is being sent through one link, but because we have fixed our routing to accommodate both links.

So far, we have discussed these concepts using static routes. So what would change if we were to use routing protocols instead? In most cases, nothing. The concepts of packet switching and load balancing are independent of how the routing table is populated (whether statically or dynamically). However, there is one scenario where the routing protocol can influence load balancing differently, and that is unequal cost load balancing. We will explore this further in the next article.

Whew! In this article, we explored load balancing across multiple paths with equal costs in the routing table. We started out by discussing the kinds of equal cost load balancing and then we explored the different kinds of packet switching and how they affect load balancing. Finally, we took a deep dive into the command line to see how these concepts really work and how our routing can easily break our redundancy and load balancing.

In the next article, we will explore unequal cost load balancing in routing protocols. As usual, if you have any comments or questions, feel free to use the comments section to air your thoughts and opinions. Thank you very much for reading and I look forward to writing the next article. See you soon.