Hi everyone, and welcome back to our network reliability series. In the last article, we started examining load balancing across multiple paths in the network, and we explored how the packet switching method affects the way the router load balances traffic. In this post, we will take that further by exploring unequal cost load balancing across redundant paths.

With routing protocols, the internal algorithm that is used to calculate the metric influences which paths are considered redundant in a network. Let us consider the diagram below:

In this diagram, R3 has two paths to the 192.168.12.0/24 network: a fast ethernet link through R1 and a serial link through R2. If we run RIP as the routing protocol on R1, R2, and R3, the routing table on R3 would look like this:

From the routing table, we can see two routes to 192.168.12.0/24 with the same administrative distance and metric, one through R1 (FastEthernet 0/0) and the other through R2 (Serial 0/0). This is because RIP uses hop count as its metric and, since the paths through R1 and R2 have the same hop count, they are considered equal and both routes are installed in the routing table (regardless of the bandwidth of the links).

Since both routes are in the routing table with the same metric, they are treated alike and receive the same traffic share, as seen below. So in this case, the traffic is shared in a 1:1 ratio, despite the fact that the link through R1 has more capacity than the one through R2.

RIP does NOT support unequal cost load balancing and does not take the bandwidth of the links into consideration, so this is the usual result. In some cases, the network administrator may want to increase the metric of the slower link so as to force the router to prefer the better link, but this means all the traffic would pass through just one link.
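
As a rough sketch of that approach on R3 (using the interface names from the diagram; access list 0 matches all routes), an offset-list could add extra hops to routes learned over the serial link so the fast ethernet path is always preferred:

router rip
 offset-list 0 in 2 Serial0/0

With the hop count of routes received on Serial 0/0 raised by 2, R3 would install only the path through R1, and all traffic would use the fast ethernet link.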

Similarly, R1 and R2 would have two routes in their routing tables for 192.168.23.0 and 192.168.13.0 respectively. So what if we change the routing protocol to OSPF? What would be the difference? Let's take a look at the routing table of R3 with OSPF running:

Now, we have only one route to 192.168.12.0 and it has a metric of 20. This is the cost of the route advertised by R1 (10) plus the cost of R3's fast ethernet 0/0 interface (10).

Note: The fast ethernet interfaces are running at 10 Mb/s, so the OSPF cost works out to 10 (the default reference bandwidth of 100 Mb/s divided by the 10 Mb/s interface bandwidth). We can see the interface speed from the output of the show interface command.
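
For reference, OSPF derives an interface's cost as the reference bandwidth divided by the interface bandwidth (100 Mb/s, i.e. 100,000 kb/s, by default, unless it has been changed with the auto-cost reference-bandwidth command). Assuming the serial link is left at its default bandwidth of 1,544 kb/s, the costs in this topology work out as:

FastEthernet 0/0 (running at 10 Mb/s): 100,000 / 10,000 = 10
Serial 0/0 (default 1,544 kb/s): 100,000 / 1,544 = 64 (rounded down)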

So what about R2? Is R2 sending a route to R3? Why are we not installing it in the routing table? From the output above, we can see that R3 is also forming an OSPF neighbor relationship with R2. And if we check the OSPF database for network LSAs originated by R2's OSPF router ID (192.168.23.2), we see that it is indeed advertising the 192.168.12.0/24 link.

So why is R3 not installing it in the routing table? The answer lies in the OSPF cost. Let us look at the output of show ip ospf interface s0/0:

From the output above, we can see that the cost of the serial interface is 64. As such, the metric of the route through R2 is the advertised metric (10) plus the cost of the link (64), which is 74. Compared to the metric of the path through R1 (20), this path is less preferred, so it is not installed in the routing table.

So we have seen that for OSPF, the bandwidths of the links are taken into account in the metric calculation (unlike RIP), and in this scenario the two routes cannot both be installed in the routing table.

Again, in some cases the network administrator might want to force both routes to be installed in the routing table. To do this, we need to trick the router into believing that the costs of the two links (F0/0 and S0/0) on R3 are the same. We can do this either by increasing the cost of the F0/0 link or by reducing the cost of the serial link. I like to err on the side of caution, so I would increase the cost here.
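
As a minimal sketch of this on R3 (using the interface name and cost values from this topology), the ip ospf cost interface command can pin the fast ethernet cost to match the serial interface:

interface FastEthernet0/0
 ip ospf cost 64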

After increasing the cost of the F0/0 interface, both routes are now installed in the routing table with an OSPF metric of 74.

We are back in the same situation as with RIP: the two routes are installed with the same metric (because we forced it), and the router now treats both links as equal even though they are not. The traffic would be load balanced in a 1:1 ratio. This is because OSPF only supports equal cost load balancing.

Tip: You can also change the OSPF cost of an interface by changing its bandwidth and letting the automatic cost calculation against the reference bandwidth take place. The problem with this is that other features might be referring to the interface bandwidth (for example, QoS), and you would have changed their behaviour too. The safer way to influence OSPF path selection is to INCREASE the cost of the less desirable link directly.
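
For completeness, this is roughly what the bandwidth approach would look like on R3 (a sketch only, using the interface names from this topology):

! Sketch only: raising Serial0/0's configured bandwidth to 10,000 kb/s would drop its
! OSPF cost to 10 (100,000 / 10,000), matching the fast ethernet path, but QoS and any
! other feature that reads the interface bandwidth would see the new value too, which
! is why the ip ospf cost method above is preferred.
interface Serial0/0
 bandwidth 10000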

So what happens if we use EIGRP as our routing protocol? Let us disable OSPF and run EIGRP on all three routers:

no router ospf 1
router eigrp 100
 network 192.168.0.0 0.0.255.255
 no auto-summary

Now, let’s take a look at R3’s routing table:

Here, we have an EIGRP route (the D code stands for EIGRP) with a metric of 307200 and R1's IP address as the next hop. However, if we look at the EIGRP neighbors (show ip eigrp neighbors), we can see that both R1 and R2 are EIGRP neighbors of R3.

So why are we not seeing the route from R2? We can take a look at the topology table for more information. If we look in the topology table for the 192.168.12.0 network, we see:

The output shows us that we actually receive two routes, but the one from 192.168.13.1 has a composite metric of 307200 while the one from 192.168.23.2 has a composite metric of 2195456. Since the second route has a higher metric (roughly seven times higher), it is less preferred, and that is why it does not make it into the routing table.
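
As a side note, these numbers come from the default EIGRP composite metric, which (with the default K values) works out to 256 x (10^7 / lowest bandwidth on the path in kb/s + sum of the delays on the path in tens of microseconds). Assuming the interface defaults for 10 Mb/s ethernet (bandwidth 10,000 kb/s, delay 1,000 us) and serial (bandwidth 1,544 kb/s, delay 20,000 us), the path through R1 gives 256 x (1,000 + 200) = 307200, and the path through R2 gives 256 x (6,476 + 2,100) = 2195456.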

In a way, this behavior is similar to OSPF because EIGRP takes the bandwidth (among other factors) into consideration. One way to force the router to install both routes is to make the metrics the same. However, unlike OSPF, we cannot simply assign an EIGRP metric to an interface; we would need to change the characteristics of the interface itself, which is not advisable (for the same reasons mentioned earlier). Another option is to use the offset-list command to increase the metric of the routes received on interface fast ethernet 0/0 so that it matches the higher metric of the route from the serial interface, but again, that only tricks the router into believing the metrics are equal.
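
A very rough sketch of that offset-list approach on R3 would look like the following (access list 0 matches all routes; the offset of 1,888,256 is simply 2,195,456 minus 307,200 and assumes the offset is added straight onto the composite metric, so verify the resulting metrics in your own lab before relying on it):

router eigrp 100
 offset-list 0 in 1888256 FastEthernet0/0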

EIGRP Unequal Cost Load Balancing – Variance

Unlike RIP and OSPF, EIGRP supports unequal cost load balancing, so we do not have to trick the router into believing the metrics are the same. We just need to configure the router to install multiple unequal cost paths, and we do this using the variance command:
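
On R3, this is a minimal sketch (using the same EIGRP autonomous system number as the configuration above):

router eigrp 100
 variance 8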

The variance command allows you to specify a multiplier, which is applied to the metric of the best route (the successor route). Any feasible successors (that is, routes that satisfy the EIGRP feasibility condition) whose composite metric is less than the product of the multiplier and the successor's metric are also installed in the routing table. In this case, I specified eight as the multiplier. Since 8 x 307200 (2457600) is greater than 2195456, the route through R2 is also installed in the routing table.

Notice that the metrics of the routes do not change; this makes it a lot easier to load balance across the links in proportion to their capacity. If we look at show ip route for the 192.168.12.0/24 network:

We see that the traffic share ratio is 120:17. This means that the F0/0 link carries roughly 7 times more traffic than the S0/0 link, which allows more efficient use of the links (as opposed to forcing them to share the traffic load equally).

You can specify the maximum number of parallel paths (from 1 to 16) that can be installed in the routing table using the maximum-paths command under the EIGRP router process. The default is 4 paths.
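
For example (a sketch, again using the same autonomous system number), to allow up to six paths:

router eigrp 100
 maximum-paths 6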

In this article, we examined how three routing protocols treat redundant paths of unequal cost. We saw how the internal metric calculation of each routing protocol affects the redundancy and load sharing of the network, and we explored EIGRP unequal cost load balancing using the variance command.

That wraps up the routing aspect of our network reliability series. In the next article, we will be looking at redundancy in firewalls. As usual, if you have any thoughts or questions, please feel free to use the comments section. Thank you for reading and I look forward to writing the next article soon!