Multicast Source Discovery Protocol (MSDP) is used to exchange multicast source information between PIM domains, typically between two BGP autonomous systems. MSDP is not limited to BGP deployments, though; it can also be used between domains that share an underlying IGP. It is typically configured on the multicast boundary routers, which keep multicast source information from being flooded freely into the other domain.
With MSDP, you can control which multicast source information is shared with another domain through the use of filters. MSDP works by advertising the active multicast sources and groups in its own domain to its peers. When a receiver in another domain joins one of those groups, the RP in the receiver’s domain uses the source information learned through MSDP to send a join toward the source in the originating domain, so traffic for that group can flow between the domains. Note that MSDP peers do not exchange their full mroute tables; they exchange Source-Active (SA) messages that describe the active (S, G) pairs and the RP that originated them.
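As an illustration of this filtering (it is not part of the lab tasks below), an outbound SA filter can limit which sources and groups an RP advertises to a peer. This is only a sketch; the router name, peer address 2.2.2.2, and access list 120 are placeholders:

R1(config)#access-list 120 deny ip any host 239.6.6.6
R1(config)#access-list 120 permit ip any any
R1(config)#ip msdp sa-filter out 2.2.2.2 list 120

With this in place, SA messages for group 239.6.6.6 would no longer be advertised to that peer, while all other source/group pairs would still be sent.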
MSDP can also be used for an Anycast RP implementation. Anycast RP is a design approach where two or more RPs, in the same or in different domains, share the same IP address. PIM-enabled routers register and join toward the nearest instance of that address based on unicast routing metrics. If the nearest RP fails, unicast routing reconverges and the routers automatically fall back to another RP sharing the same address. This design provides faster recovery and a form of RP resiliency.
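A minimal sketch of the Anycast RP pattern looks like the following (the addresses and names are illustrative; the full lab configuration appears in Task 4). Each RP carries the shared anycast address on a loopback, while the MSDP peering and originator-ID use the RP’s own unique address:

RP1(config)#interface Loopback100
RP1(config-if)#ip address 100.100.100.100 255.255.255.255
RP1(config-if)#exit
RP1(config)#ip msdp peer 2.2.2.2 connect-source Loopback0
RP1(config)#ip msdp originator-id Loopback0

The second RP mirrors this configuration with its own unique Loopback0 address and a peer statement pointing back at the first RP.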
In our lab we will be completing the following tasks to learn more about the basics of MSDP. OSPF and basic BGP have already been configured. Below are the tasks and the network diagram.
- Configure PIM on all interfaces. R1 and R2 should be the RP and BSR for their respective domains. Use BSR to propagate the RP information, with R1 and R2 configured so that BSR information does not leak into the other domain.
- Configure R5’s and R6’s Loopback0 to join multicast groups 239.5.5.5 and 239.6.6.6, respectively. Source multicast pings from R3 and R4, respectively, for each of the multicast groups. This verifies that intra-domain multicast is working before MSDP is introduced.
- Configure R1 and R2 as MSDP peers. The peering connection and the originator-ID should use Loopback0. Make sure a multicast ping to 239.5.5.5 from R4 gets a reply from R5; similarly, a ping to 239.6.6.6 from R3 should receive a reply from R6. Verify the MSDP SA cache to confirm it shows the valid sources from each domain.
- Anycast RP configuration: configure Loopback100 on R1 and R2 with IP address 100.100.100.100/32. Announce this prefix in BGP on both routers. Configure all devices in both domains with this IP as the static RP for multicast group 239.100.100.100. Shut down R2’s Loopback100 afterwards. All hosts in AS 65002 should register to R1 as the RP, and they should still be able to reach the multicast sources in AS 65001.
Task 1. Configure PIM on all interfaces. R1 and R2 should be the RP and BSR for their respective domains. Use BSR to propagate the RP information, with R1 and R2 configured so that BSR information does not leak into the other domain.
R1(config)#ip multicast-routing
R1(config)#int fa0/0
R1(config-if)#ip pim sparse-mode
R1(config-if)#ip pim bsr-border
R1(config-if)#int fa0/1
R1(config-if)#ip pim sparse-mode
R1(config-if)#int fa1/0
R1(config-if)#ip pim sparse-mode
R1(config-if)#int l0
R1(config-if)#ip pim sparse-mode
R1(config-if)#exit
R1(config)#ip pim rp-candidate Loopback0
R1(config)#ip pim bsr-candidate Loopback0 0 100

R2(config)#ip multicast-routing
R2(config)#int fa0/0
R2(config-if)#ip pim sparse-mode
R2(config-if)#ip pim bsr-border
R2(config-if)#int fa0/1
R2(config-if)#ip pim sparse-mode
R2(config-if)#int fa1/0
R2(config-if)#ip pim sparse-mode
R2(config-if)#int l0
R2(config-if)#ip pim sparse-mode
R2(config-if)#exit
R2(config)#ip pim rp-candidate Loopback0
R2(config)#ip pim bsr-candidate Loopback0 0 100

R3(config)#ip multicast-routing
R3(config)#int fa0/1
R3(config-if)#ip pim sparse-mode
R3(config-if)#int l0
R3(config-if)#ip pim sparse-mode

R4(config)#ip multicast-routing
R4(config)#int fa0/1
R4(config-if)#ip pim sparse-mode
R4(config-if)#int l0
R4(config-if)#ip pim sparse-mode

R5(config)#ip multicast-routing
R5(config)#int fa1/0
R5(config-if)#ip pim sparse-mode
R5(config-if)#int l0
R5(config-if)#ip pim sparse-mode

R6(config)#ip multicast-routing
R6(config)#int fa1/0
R6(config-if)#ip pim sparse-mode
R6(config-if)#int l0
R6(config-if)#ip pim sparse-mode
Let’s verify that R1 and R2 have been chosen as the RP in their respective AS:
R3#sh ip pim rp map
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 1.1.1.1 (?), v2
    Info source: 1.1.1.1 (?), via bootstrap, priority 0, holdtime 150
         Uptime: 00:01:40, expires: 00:01:49

R4#sh ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 2.2.2.2 (?), v2
    Info source: 2.2.2.2 (?), via bootstrap, priority 0, holdtime 150
         Uptime: 00:03:53, expires: 00:01:36
Task 2. Configure R5’s and R6’s Loopback0 to join multicast groups 239.5.5.5 and 239.6.6.6, respectively. Source multicast pings from R3 and R4, respectively, for each of the multicast groups. This verifies that intra-domain multicast is working before MSDP is introduced.
R5(config)#int l0
R5(config-if)#ip igmp join-group 239.5.5.5

R6(config)#int l0
R6(config-if)#ip igmp join-group 239.6.6.6
To test the multicast traffic:
R3#ping 239.5.5.5

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.5.5.5, timeout is 2 seconds:

Reply to request 0 from 15.15.15.5, 128 ms
Reply to request 0 from 15.15.15.5, 128 ms

R4#ping 239.6.6.6

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.6.6.6, timeout is 2 seconds:

Reply to request 0 from 26.26.26.6, 52 ms
Reply to request 0 from 26.26.26.6, 184 ms
Task 3. Configure R1 and R2 as MSDP peers. The peering connection and the originator-ID should use Loopback0. Make sure a multicast ping to 239.5.5.5 from R4 gets a reply from R5; similarly, a ping to 239.6.6.6 from R3 should receive a reply from R6. Verify the MSDP SA cache to confirm it shows the valid sources from each domain.
R1(config)#ip msdp peer 2.2.2.2 connect-source Loopback0
R1(config)#ip msdp originator-id Loopback0

R2(config)#ip msdp peer 1.1.1.1 connect-source Loopback0
R2(config)#ip msdp originator-id Loopback0
One thing to note here is that the connect-source defines where MSDP messages are sourced from, similar to what update-source does in BGP. The originator-ID, on the other hand, is similar to the router ID used by routing protocols. The originator-ID should be unique on each MSDP speaker to avoid conflicts, which becomes important for the Anycast RP configuration we will do later on. If no originator-ID is configured, the router automatically selects the highest IP address among its loopback interfaces.
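To confirm that the peering is established and to see which connect-source and originator-ID are in use, the standard MSDP show commands can be checked (outputs are omitted here, and the exact fields vary by IOS release):

R1#show ip msdp summary
R1#show ip msdp peer 2.2.2.2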
Let’s check whether MSDP is sharing Source-Active information between R1 and R2. Generate traffic first by repeating the pings from Task 2.
R1#sh ip msdp sa-cache
MSDP Source-Active Cache - 2 entries
(4.4.4.4, 239.6.6.6), RP 2.2.2.2, BGP/AS 65002, 00:04:18/00:05:58, Peer 2.2.2.2
(24.24.24.4, 239.6.6.6), RP 2.2.2.2, BGP/AS 65002, 00:04:18/00:05:58, Peer 2.2.2.2

R2#sh ip msdp sa-cache
MSDP Source-Active Cache - 1 entries
(3.3.3.3, 239.5.5.5), RP 1.1.1.1, BGP/AS 65001, 00:00:56/00:05:44, Peer 1.1.1.1
Now that R1 and R2 are exchanging SA messages, let’s test whether a multicast ping from R4 to 239.5.5.5 reaches R5:
R4#ping 239.5.5.5

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.5.5.5, timeout is 2 seconds:
.
The ping is unsuccessful because there is no unicast route from R4 toward R5. When troubleshooting multicast, always verify unicast reachability first, since PIM relies on the unicast routing table for its RPF checks. Let’s do mutual redistribution between OSPF and BGP on R1 and R2 and then check whether R4 has a route to R5.
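A multicast-specific way to see the same problem is an RPF check on the receiving-side RP toward the out-of-domain source; 24.24.24.4 is R4’s address as seen in the SA cache above. Before the redistribution below this lookup fails, and afterwards it should resolve via the inter-domain link (output omitted here):

R1#show ip rpf 24.24.24.4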
R1(config)#router bgp 65001
R1(config-router)#redistribute ospf 1
R1(config-router)#router ospf 1
R1(config-router)#redistribute bgp 65001 subnets

R2(config)#router bgp 65002
R2(config-router)#redistribute ospf 1
R2(config-router)#router ospf 1
R2(config-router)#redistribute bgp 65002 subnets
Let’s verify IP routing:
R4#sh ip route 5.5.5.5
Routing entry for 5.5.5.5/32
  Known via "ospf 1", distance 110, metric 1
  Tag 65001, type extern 2, forward metric 10
  Last update from 24.24.24.2 on FastEthernet0/1, 00:01:01 ago
  Routing Descriptor Blocks:
  * 24.24.24.2, from 22.22.22.22, 00:01:01 ago, via FastEthernet0/1
      Route metric is 1, traffic share count is 1
      Route tag 65001

R4#ping 5.5.5.5 source 4.4.4.4

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 5.5.5.5, timeout is 2 seconds:
Packet sent with a source address of 4.4.4.4
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 44/57/80 ms
Now to check the multicast traffic from R4 to R5 and R3 to R6:
R4#ping 239.5.5.5

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.5.5.5, timeout is 2 seconds:

Reply to request 0 from 15.15.15.5, 108 ms
Reply to request 0 from 15.15.15.5, 160 ms
Reply to request 0 from 15.15.15.5, 132 ms
Reply to request 0 from 15.15.15.5, 132 ms

R3#ping 239.6.6.6

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.6.6.6, timeout is 2 seconds:

Reply to request 0 from 26.26.26.6, 208 ms
Reply to request 0 from 26.26.26.6, 216 ms

R1#sh ip mroute 239.5.5.5
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.5.5.5), 00:51:20/00:03:17, RP 1.1.1.1, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse, 00:49:16/00:03:17

(4.4.4.4, 239.5.5.5), 00:00:20/00:03:22, flags: MT
  Incoming interface: FastEthernet0/0, RPF nbr 12.12.12.2
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse, 00:00:20/00:03:17

(24.24.24.4, 239.5.5.5), 00:00:22/00:03:20, flags: MT
  Incoming interface: FastEthernet0/0, RPF nbr 12.12.12.2
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse, 00:00:22/00:03:15

R2#sh ip mroute 239.6.6.6
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.6.6.6), 01:06:20/00:03:20, RP 2.2.2.2, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse, 01:06:20/00:03:20

(3.3.3.3, 239.6.6.6), 00:02:31/00:03:21, flags: MT
  Incoming interface: FastEthernet0/0, RPF nbr 12.12.12.1
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse, 00:02:31/00:03:20

(13.13.13.3, 239.6.6.6), 00:02:33/00:03:19, flags: MT
  Incoming interface: FastEthernet0/0, RPF nbr 12.12.12.1
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse, 00:02:33/00:03:19
In the “show ip mroute” outputs above, the (S, G) entries carry the MT flags. M means the entry was created from an MSDP SA message, and T means traffic is flowing down the shortest-path tree, which proves that MSDP is working properly in our setup.
Task 4. Anycast RP configuration: configure Loopback100 on R1 and R2 with IP address 100.100.100.100/32. Announce this prefix in BGP on both routers. Configure all devices in both domains with this IP as the static RP for multicast group 239.100.100.100. Shut down R2’s Loopback100 afterwards. All hosts in AS 65002 should register to R1 as the RP, and they should still be able to reach the multicast sources in AS 65001.
R1(config)#interface Loopback100
R1(config-if)#ip address 100.100.100.100 255.255.255.255
R1(config-if)#ip pim sparse-mode
R1(config-if)#exit
R1(config)#router bgp 65001
R1(config-router)#network 100.100.100.100 mask 255.255.255.255
R1(config-router)#exit
R1(config)#access-list 10 permit host 239.100.100.100
R1(config)#ip pim rp-address 100.100.100.100 10 override

R2(config)#interface Loopback100
R2(config-if)#ip address 100.100.100.100 255.255.255.255
R2(config-if)#ip pim sparse-mode
R2(config-if)#exit
R2(config)#router bgp 65002
R2(config-router)#network 100.100.100.100 mask 255.255.255.255
R2(config-router)#exit
R2(config)#access-list 10 permit host 239.100.100.100
R2(config)#ip pim rp-address 100.100.100.100 10 override

R3(config)#access-list 10 permit host 239.100.100.100
R3(config)#ip pim rp-address 100.100.100.100 10 override

R4(config)#access-list 10 permit host 239.100.100.100
R4(config)#ip pim rp-address 100.100.100.100 10 override

R5(config)#access-list 10 permit host 239.100.100.100
R5(config)#ip pim rp-address 100.100.100.100 10 override

R6(config)#access-list 10 permit host 239.100.100.100
R6(config)#ip pim rp-address 100.100.100.100 10 override
Let’s configure R5’s and R6’s Loopback0 to join 239.100.100.100 and then do some testing:
R5(config)#int l0
R5(config-if)#ip igmp join-group 239.100.100.100

R6(config)#int l0
R6(config-if)#ip igmp join-group 239.100.100.100

R3#ping 239.100.100.100

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.100.100.100, timeout is 2 seconds:

Reply to request 0 from 15.15.15.5, 140 ms
Reply to request 0 from 26.26.26.6, 264 ms
Reply to request 0 from 26.26.26.6, 232 ms
Reply to request 0 from 15.15.15.5, 204 ms
Through MSDP, even R6, which is in the other domain, is receiving the multicast traffic from R3. Let’s check the mroute table on R1 and R2:
R1#sh ip mroute 239.100.100.100

(*, 239.100.100.100), 00:02:07/00:03:20, RP 100.100.100.100, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse, 00:02:07/00:03:20

(3.3.3.3, 239.100.100.100), 00:01:14/00:03:22, flags: TA
  Incoming interface: FastEthernet0/1, RPF nbr 13.13.13.3
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:01:14/00:03:15
    FastEthernet1/0, Forward/Sparse, 00:01:15/00:03:19

(13.13.13.3, 239.100.100.100), 00:01:15/00:03:21, flags: TA
  Incoming interface: FastEthernet0/1, RPF nbr 13.13.13.3
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:01:15/00:03:14
    FastEthernet1/0, Forward/Sparse, 00:01:15/00:03:19

R2#sh ip mroute 239.100.100.100

(*, 239.100.100.100), 00:03:50/00:03:23, RP 100.100.100.100, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse, 00:03:50/00:02:36

(3.3.3.3, 239.100.100.100), 00:03:10/00:03:28, flags: MT
  Incoming interface: FastEthernet0/0, RPF nbr 12.12.12.1
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse, 00:03:10/00:03:17

(13.13.13.3, 239.100.100.100), 00:03:11/00:03:26, flags: MT
  Incoming interface: FastEthernet0/0, RPF nbr 12.12.12.1
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse, 00:03:11/00:03:15

R2(config)#int l100
R2(config-if)#shut
R2(config-if)#exit
R2(config)#no ip pim rp-candidate Loopback0

R6#ping 100.100.100.100

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 100.100.100.100, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 36/49/80 ms
We can still ping 100.100.100.100 because, with R2’s Loopback100 shut down, the pings now follow the unicast routing table to R1, which still announces the prefix. Unicast reachability to the anycast address is intact. Let’s test it with multicast:
R3#ping 239.100.100.100

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.100.100.100, timeout is 2 seconds:

Reply to request 0 from 15.15.15.5, 128 ms
Reply to request 0 from 26.26.26.6, 180 ms
Reply to request 0 from 26.26.26.6, 172 ms
Reply to request 0 from 26.26.26.6, 160 ms
Reply to request 0 from 15.15.15.5, 148 ms
Reply to request 0 from 15.15.15.5, 140 ms

R2#sh ip mroute 239.100.100.100

(*, 239.100.100.100), 00:02:39/00:03:05, RP 100.100.100.100, flags: S
  Incoming interface: FastEthernet0/0, RPF nbr 12.12.12.1
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse, 00:02:39/00:03:05

(3.3.3.3, 239.100.100.100), 00:00:56/00:02:33, flags:
  Incoming interface: FastEthernet0/0, RPF nbr 12.12.12.1
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse, 00:00:57/00:03:03

(13.13.13.3, 239.100.100.100), 00:00:57/00:02:32, flags:
  Incoming interface: FastEthernet0/0, RPF nbr 12.12.12.1
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse, 00:00:57/00:03:03

R1#sh ip mroute 239.100.100.100

(*, 239.100.100.100), 00:50:35/00:03:09, RP 100.100.100.100, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:03:23/00:03:04
    FastEthernet1/0, Forward/Sparse, 00:50:35/00:03:09

!--------------output truncated----------------------!
The multicast ping worked.
By the way, it is recommended to use a static RP address when deploying Anycast RP through MSDP, so that group-to-RP mapping information is not lost when one of the RPs fails; failover is then handled purely by unicast routing to the surviving RP. Notice the difference between the (*, G) entries on R1 and R2 before and after R2’s Loopback100 was shut down. Now that R1 is the RP for both domains, R1’s OIL for (*, 239.100.100.100) includes Fa0/0 toward R2, and R2’s (*, 239.100.100.100) entry now lists Fa0/0 as its incoming interface, meaning R2 has joined the shared tree toward R1. That is why R6 can still receive the traffic sourced from R3.
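To confirm where the AS 65002 routers now resolve the RP, the group-to-RP mapping and the RPF path toward the anycast address can be checked on, for example, R6 (outputs omitted; after the failover the path toward 100.100.100.100 should lead across the inter-domain link to R1):

R6#show ip pim rp mapping
R6#show ip rpf 100.100.100.100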
References:
http://www.cisco.com/c/en/us/td/docs/ios/solutions_docs/ip_multicast/White_papers/anycast.html