MSDP and Inter-domain multicast


So far we have seen that PIM (Protocol Independent Multicast) can perfectly satisfy the need for multicast forwarding inside a single domain or autonomous system. This is not the case for multicast applications intended to provide services beyond the boundary of a single autonomous system, and this is where protocols such as MSDP (Multicast Source Discovery Protocol) come in.

With PIM-SM, a typical multicast framework inside a single domain is composed of one or more Rendez-Vous Points (RPs), multicast sources and multicast receivers. Now let’s imagine a company specialized in Video-On-Demand content with receivers across the Internet and a typical multicast framework inside each AS, to be as close as possible to the receivers. Suppose the multicast source in one AS is no longer available. We know that the RP is responsible for registering sources and linking them to receivers, so what if an RP in one AS could communicate with the RP in another AS and learn about its sources?

Well, that’s exactly what MSDP is intended for: making multicast sources available to receivers in other autonomous systems by exchanging source information between RPs in different autonomous systems.
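
In IOS terms the whole mechanism boils down to a TCP peering between RPs, over which Source-Active (SA) messages are exchanged. A minimal sketch of what we will actually configure in step 4 (addresses are the ones used in this lab):

!! On the RP of AS27011
ip msdp peer 1.1.1.2 connect-source Loopback0

!! On the RP of AS27022
ip msdp peer 1.1.1.3 connect-source Loopback0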

In this lab two simplified autonomous systems are considered, AS27011 and AS27022, each with an RP and a multicast source serving the same group (Figure1).

Figure1: Topology

The scenario is as follows:

  • First, R22, the multicast source, sends traffic to the group 224.1.1.1 with R2 as the receiver and RP2 as the Rendez-Vous Point inside AS27022.
  • Second, R22 stops sending the multicast traffic and R1 in AS27011 starts sending to the same multicast group 224.1.1.1.

To successfully deploy MSDP (or any other technology or protocol) it is crucial to split the work into several steps and make sure that each step works perfectly; this dramatically reduces the time that would otherwise be spent troubleshooting issues accumulated at each layer.

  • Basic connectivity & reachability
  • Routing protocol: BGP configuration
  • Multicast configuration
  • MSDP configuration
  1. Basic connectivity & reachability

Let’s start by configuring the IGP (EIGRP) to ensure connectivity between devices:

R1:

router eigrp 10
network 192.168.111.0

no auto-summary

RP1:

router eigrp 10

passive-interface Ethernet0/1

network 1.1.1.3 0.0.0.0

network 172.16.0.1 0.0.0.0

network 192.168.111.0

network 192.168.221.0

no auto-summary

RP2:

router eigrp 10

passive-interface Ethernet0/1

network 1.1.1.2 0.0.0.0

network 172.16.0.2 0.0.0.0

network 192.168.221.0

network 192.168.222.0

network 192.168.223.0

no auto-summary

IGP routing information should not leak between autonomous systems, hence the need to set the inter-AS interfaces as passive, so that only networks carried by BGP are reachable between AS27022 and AS27011.
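
A quick way to verify this is to list the interfaces EIGRP actually runs on; passive interfaces are excluded from this output, so Ethernet0/1 (the inter-AS link) should be absent (a minimal check; the exact output format varies by IOS release):

RP1#sh ip eigrp interfaces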

R22:

router eigrp 10
network 22.0.0.0

network 192.168.223.0

no auto-summary

R2:

router eigrp 10
network 192.168.222.0

no auto-summary

  2. Routing protocol: BGP configuration

     

Do not forget to advertise multicast end-point subnets through BGP (10.10.10.0/24, 22.0.0.0/24 and 192.168.40.0/24) so they can be reachable between the two autonomous systems.

R1:

router bgp 27011
no synchronization

bgp log-neighbor-changes

network 10.10.10.0 mask 255.255.255.0

neighbor 192.168.111.11 remote-as 27011

no auto-summary

RP1:

router bgp 27011
no synchronization

bgp log-neighbor-changes

neighbor 1.1.1.2 remote-as 27022

neighbor 1.1.1.2 ebgp-multihop 2

neighbor 1.1.1.2 update-source Loopback0

neighbor 192.168.111.1 remote-as 27011

no auto-summary

ip route 1.1.1.2 255.255.255.255 192.168.221.22

RP2:

router bgp 27022
no synchronization

bgp log-neighbor-changes

neighbor 1.1.1.3 remote-as 27011

neighbor 1.1.1.3 ebgp-multihop 2

neighbor 1.1.1.3 update-source Loopback0

neighbor 192.168.222.2 remote-as 27022

neighbor 192.168.222.2 route-reflector-client

neighbor 192.168.223.33 remote-as 27022

neighbor 192.168.223.33 route-reflector-client

no auto-summary

ip route 1.1.1.3 255.255.255.255 192.168.221.11

eBGP is configured between loopback interfaces to match the MSDP peering; therefore static routes are added on both sides to reach those loopback interfaces.
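
Before bringing up the MSDP peering it is worth confirming that each loopback can reach the other one; a sourced ping is the simplest check (one-line form, assuming an IOS release that accepts inline ping options):

RP1#ping 1.1.1.2 source Loopback0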

R22:

router bgp 27022
no synchronization

network 22.0.0.0 mask 255.255.255.0

neighbor 192.168.223.22 remote-as 27022

no auto-summary

R2:

router bgp 27022
no synchronization

network 192.168.40.0

neighbor 192.168.222.22 remote-as 27022

no auto-summary

There are three ways to configure iBGP in AS27022:

– enable BGP only on the border router RP2 and redistribute the needed subnets into BGP (not straightforward).

– configure full-mesh iBGP (not consistent with the physical topology, which is linear).

– configure RP2 as a route reflector.

Let’s retain the last option, the most suitable one in the current situation. Whenever this option is possible you had better start by considering it, to allow more flexibility for future growth of your network; otherwise, when things become more complicated, you will have to reconfigure BGP from scratch to use a route reflector.
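
For comparison, the first method would look roughly like this on RP2 (an illustrative sketch only, not applied in this lab; the route-map and prefix-list names are arbitrary):

!! Illustrative: BGP on the border router only, redistributing selected IGP subnets

router bgp 27022

redistribute eigrp 10 route-map EIGRP2BGP

ip prefix-list MCAST-NETS seq 5 permit 22.0.0.0/24

ip prefix-list MCAST-NETS seq 10 permit 192.168.40.0/24

route-map EIGRP2BGP permit 10

match ip address prefix-list MCAST-NETS

Besides the extra filtering machinery, redistributed routes carry the "incomplete" origin code, one more reason this method is considered less clean.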

Monitoring:

R2:

R2#sh ip bgp
BGP table version is 10, local router ID is 1.1.1.4

Status codes: s suppressed, d damped, h history, * valid, > best, i – internal,

r RIB-failure, S Stale

Origin codes: i – IGP, e – EGP, ? – incomplete

Network Next Hop Metric LocPrf Weight Path

*>i10.10.10.0/24 192.168.221.11 0 100 0 27011 i

*>i22.0.0.0/24 192.168.223.33 0 100 0 i

*> 192.168.40.0 0.0.0.0 0 32768 i

R2#

R22:

R22(config-router)#do sh ip bgp
BGP table version is 8, local router ID is 22.0.0.1

Status codes: s suppressed, d damped, h history, * valid, > best, i – internal,

r RIB-failure, S Stale

Origin codes: i – IGP, e – EGP, ? – incomplete

Network Next Hop Metric LocPrf Weight Path

*>i10.10.10.0/24 192.168.221.11 0 100 0 27011 i

*> 22.0.0.0/24 0.0.0.0 0 32768 i

*>i192.168.40.0 192.168.222.2 0 100 0 i

R22(config-router)#

R1:

R1#sh ip bgp
BGP table version is 10, local router ID is 192.168.111.1

Status codes: s suppressed, d damped, h history, * valid, > best, i – internal,

r RIB-failure, S Stale

Origin codes: i – IGP, e – EGP, ? – incomplete

Network Next Hop Metric LocPrf Weight Path

*> 10.10.10.0/24 0.0.0.0 0 32768 i

*>i22.0.0.0/24 192.168.221.22 0 100 0 27022 i

*>i192.168.40.0 192.168.221.22 0 100 0 27022 i

R1#

  3. Multicast configuration

R1:

ip multicast-routing
interface Ethernet0/0

ip pim sparse-dense-mode

interface Serial1/0

ip pim sparse-dense-mode

RP1:

ip multicast-routing
interface Loopback1

ip pim sparse-dense-mode

interface Ethernet0/1

ip pim sparse-dense-mode

ip pim send-rp-announce Loopback1 scope 64

ip pim send-rp-discovery scope 64

RP1 is configured as the RP and mapping agent for AS27011.

RP2:

ip multicast-routing

interface Loopback1
ip pim sparse-dense-mode

interface Ethernet0/1

ip pim sparse-dense-mode

interface Ethernet0/2

ip pim sparse-dense-mode

ip pim send-rp-announce Loopback1 scope 64

ip pim send-rp-discovery scope 64

RP2 is configured as the RP and mapping agent for AS27022.

R2:

ip multicast-routing

interface Ethernet0/0
ip pim sparse-dense-mode


ip igmp join-group 224.1.1.1

interface Serial1/0

ip pim sparse-dense-mode

Auto-RP is chosen to advertise RP information through all interfaces where PIM sparse-dense mode is enabled, including the link between the autonomous systems. This means that group-to-RP mapping information will be advertised to PIM routers in the other AS, which can lead to confusion: a PIM router in one AS will receive information from its local RP claiming responsibility for a number of groups, as well as information from the external RP announcing itself:

R2#
*Mar 1 02:58:01.315: Auto-RP(0): Received RP-discovery, from 192.168.221.11, RP_cnt 1, ht 181

*Mar 1 02:58:01.319: Auto-RP(0): Update (224.0.0.0/4, RP:172.16.0.1), PIMv2 v1

R2#

RP2(config-if)#

*Mar 1 02:44:05.615: %PIM-6-INVALID_RP_JOIN: Received (*, 224.1.1.1) Join from 0.0.0.0 for invalid RP 172.16.0.1

RP2(config-if)#

This is not the kind of cooperation intended by MSDP: MSDP allows an RP in one AS to learn about multicast sources in another AS while remaining responsible for multicast forwarding inside its own AS.

The solution is to block the service groups 224.0.1.39 and 224.0.1.40 between the two autonomous systems using multicast boundary filtering:

RP1:

access-list 10 deny 224.0.1.39

access-list 10 deny 224.0.1.40

access-list 10 permit any

interface Ethernet0/1


ip multicast boundary 10

RP2:

access-list 10 deny 224.0.1.39

access-list 10 deny 224.0.1.40

access-list 10 permit any

interface Ethernet0/1


ip multicast boundary 10

Multicast monitoring inside the AS:

RP2(config)#
*Mar 1 03:26:17.731: Auto-RP(0): Build RP-Announce for 172.16.0.2, PIMv2/v1, ttl 64, ht 181

*Mar 1 03:26:17.735: Auto-RP(0): Build announce entry for (224.0.0.0/4)

*Mar 1 03:26:17.739: Auto-RP(0): Send RP-Announce packet on Ethernet0/2

*Mar 1 03:26:17.743: Auto-RP(0): Send RP-Announce packet on Serial1/0

*Mar 1 03:26:17.747: Auto-RP: Send RP-Announce packet on Loopback1

*Mar 1 03:26:17.747: Auto-RP(0): Received RP-announce, from 172.16.0.2, RP_cnt 1, ht 181

*Mar 1 03:26:17.751: Auto-RP(0): Added with (224.0.0.0/4, RP:172.16.0.2), PIMv2 v1

*Mar 1 03:26:17.783: Auto-RP(0): Build RP-Discovery packet

RP2(config)#

RP2(config)#

*Mar 1 03:26:17.783: Auto-RP: Build mapping (224.0.0.0/4, RP:172.16.0.2), PIMv2 v1,

*Mar 1 03:26:17.791: Auto-RP(0): Send RP-discovery packet on Ethernet0/2 (1 RP entries)

*Mar 1 03:26:17.799: Auto-RP(0): Send RP-discovery packet on Serial1/0 (1 RP entries)

RP2(config)#

RP2, the RP for AS27022, is properly announcing itself to the mapping agent (the same router), which in turn properly announces the RP to the local PIM routers R22 and R2:

R22:
R22#

*Mar 1 03:26:24.339: Auto-RP(0): Received RP-discovery, from 192.168.223.22, RP_cnt 1, ht 181

*Mar 1 03:26:24.347: Auto-RP(0): Added with (224.0.0.0/4, RP:172.16.0.2), PIMv2 v1

R22#

R22#sh ip pim rp mapp

PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4


RP 172.16.0.2 (?), v2v1

Info source: 192.168.223.22 (?), elected via Auto-RP

Uptime: 00:04:16, expires: 00:02:41

R22#

R2:
R2#

*Mar 1 03:27:14.175: Auto-RP(0): Received RP-discovery, from 192.168.222.22, RP_cnt 1, ht 181

*Mar 1 03:27:14.179: Auto-RP(0): Update (224.0.0.0/4, RP:172.16.0.2), PIMv2 v1

R2#

R2#sh ip pim rp mapp

PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4


RP 172.16.0.2 (?), v2v1

Info source: 192.168.222.22 (?), elected via Auto-RP

Uptime: 00:09:22, expires: 00:02:37

R2#

  4. MSDP configuration

RP1:

ip msdp peer 1.1.1.2 connect-source Loopback0

RP2:

ip msdp peer 1.1.1.3 connect-source Loopback0

Interface Loopback0 does not need PIM enabled on it.

The MSDP peer address has to match the eBGP peer address.

RP1:

RP1#sh ip msdp summ
MSDP Peer Status Summary

Peer Address AS State Uptime/ Reset SA Peer Name

Downtime Count Count

1.1.1.2 27022 Up 01:14:58 0 0 ?

RP1#

RP1(config)#do sh ip bgp summ

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd

1.1.1.2 4 27022 79 79 4 0 0 01:15:46 2

192.168.111.1 4 27011 80 80 4 0 0 01:16:03 1

RP1#

RP2:

RP2#sh ip msdp sum
MSDP Peer Status Summary

Peer Address AS State Uptime/ Reset SA Peer Name

Downtime Count Count

1.1.1.3 27011 Up 01:15:48 0 2 ?

RP2#

RP2(config)#

RP2(config)#do sh ip bgp sum

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd

1.1.1.3 4 27011 80 80 4 0 0 01:16:33 1

192.168.222.2 4 27022 81 82 4 0 0 01:16:43 1

192.168.223.33 4 27022 81 82 4 0 0 01:16:44 1

RP2#

R22#ping
Protocol [ip]:

Target IP address: 224.1.1.1

Repeat count [1]: 1000000

Datagram size [100]:

Timeout in seconds [2]:

Extended commands [n]: y

Interface [All]: Ethernet0/0

Time to live [255]:

Source address: Loopback2

Type of service [0]:

Set DF bit in IP header? [no]:

Validate reply data? [no]:

Data pattern [0xABCD]:

Loose, Strict, Record, Timestamp, Verbose[none]:

Sweep range of sizes [n]:

Type escape sequence to abort.

Sending 1000000, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:

Packet sent with a source address of 22.0.0.1

Reply to request 0 from 192.168.222.2, 216 ms

Reply to request 1 from 192.168.222.2, 200 ms

Reply to request 2 from 192.168.222.2, 152 ms

Reply to request 3 from 192.168.222.2, 136 ms

Reply to request 4 from 192.168.222.2, 200 ms

Reply to request 5 from 192.168.222.2, 216 ms

After generating multicast traffic from R22’s Loopback2 interface to the group 224.1.1.1, you can see from the result of the previous extended ping that R2 is responding to it.

RP2#sh ip mroute 224.1.1.1
IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 00:17:35/00:02:38, RP 172.16.0.2, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:


Serial1/0, Forward/Sparse-Dense, 00:17:35/00:02:38

(22.0.0.1, 224.1.1.1), 00:05:57/00:03:28, flags: T


Incoming interface: Ethernet0/2, RPF nbr 192.168.223.33

Outgoing interface list:


Serial1/0, Forward/Sparse-Dense, 00:05:57/00:03:28

RP2#

The RP in AS27022 has already built the shared tree towards the receiver, (*, 224.1.1.1), and holds the source tree entry (22.0.0.1, 224.1.1.1) created when the PIM DR of the sending router registered the source.

Note that the RPF interface used to reach the source 22.0.0.1 is Ethernet0/2.
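
The RPF path can also be checked directly with the standard IOS command (based on the mroute output above, it should point to Ethernet0/2 via 192.168.223.33):

RP2#sh ip rpf 22.0.0.1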

Now let’s suppose that for some reason the source R22 stops sending traffic to the multicast group 224.1.1.1, and in the neighboring AS27011 a source begins to send multicast traffic to the same group.

RP2#
*Mar 1 01:32:42.051: MSDP(0): Received 32-byte TCP segment from 1.1.1.3

*Mar 1 01:32:42.055: MSDP(0): Append 32 bytes to 0-byte msg 102
from 1.1.1.3, qs 1

RP2#

RP2’s MSDP has received SA messages from its MSDP peer RP1, informing it about RP1’s local sources, the group and the RP, as shown in the following output:

RP2#sh ip msdp sa
MSDP Source-Active Cache – 2 entries

(10.10.10.3, 224.1.1.1), RP 172.16.0.1, BGP/AS 0, 00:26:31/00:05:46, Peer 1.1.1.3

(192.168.111.1, 224.1.1.1), RP 172.16.0.1, BGP/AS 0, 00:26:31/00:05:46, Peer 1.1.1.3

RP2#

RP2#sh ip mroute 224.1.1.1
IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 01:34:23/00:03:25, RP 172.16.0.2, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial1/0, Forward/Sparse-Dense, 01:34:23/00:03:25

(10.10.10.3, 224.1.1.1), 00:30:13/00:03:21, flags: MT


Incoming interface: Ethernet0/1, RPF nbr 192.168.221.11

Outgoing interface list:


Serial1/0, Forward/Sparse-Dense, 00:30:13/00:03:25

(192.168.111.1, 224.1.1.1), 00:30:13/00:03:21, flags:

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial1/0, Forward/Sparse-Dense, 00:30:13/00:03:24

RP2#

Note the new entry for the group 224.1.1.1, (10.10.10.3, 224.1.1.1), flagged with “M” for MSDP-created entry and “T” telling that packets have been received on this SPT.

The incoming interface connects RP2 to the RPF neighbor towards the source 10.10.10.3, and the outgoing interface sends the traffic to R2.

RP1#sh ip mroute 224.1.1.1
IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 00:34:29/stopped, RP 172.16.0.1, flags: SP

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list: Null

(10.10.10.3, 224.1.1.1), 00:31:27/00:02:25, flags: TA


Incoming interface: Serial1/0, RPF nbr 192.168.111.1

Outgoing interface list:


Ethernet0/1, Forward/Sparse-Dense, 00:30:31/00:02:50

(192.168.111.1, 224.1.1.1), 00:34:29/00:02:55, flags: PTA

Incoming interface: Serial1/0, RPF nbr 0.0.0.0

Outgoing interface list: Null

RP1#

RP1#

The entry (10.10.10.3, 224.1.1.1) has Serial1/0 as the incoming interface, connected to the RPF neighbor R1, and forwards the traffic out Ethernet0/1 towards RP2 in AS27022.

“T” indicates that packets have been received on this entry, and “A” indicates that the RP considers this SPT a candidate for MSDP advertisement to the other AS.


Multicast over FR NBMA part4 – (multipoint GRE and DMVPN)


This is the fourth part of the document “Multicast over FR NBMA”; this lab focuses on deploying multicast over multipoint GRE and DMVPN.

The main advantage of GRE tunneling is its transport capability: non-IP, broadcast and multicast traffic can be encapsulated inside unicast GRE, which is easily transmitted over Layer 2 technologies such as Frame Relay and ATM.

Because the HUB, SpokeA and SpokeB FR interfaces are multipoint, we will use multipoint GRE (mGRE).

Figure1 : lab topology


CONFIGURATION

mGRE configuration:

HUB:

interface Tunnel0
ip address 172.16.0.1 255.255.0.0
no ip redirects

!!PIM sparse-dense mode is enabled on the tunnel, not on the physical interface


ip pim sparse-dense-mode

!! a shared key is used for tunnel authentication


ip nhrp authentication cisco

!!The HUB must send all multicast traffic to all spokes that have registered with it


ip nhrp map multicast dynamic

!! Enable NHRP on the interface, must be the same for all participants


ip nhrp network-id 1

!!Because the OSPF network type is broadcast, a DR will be elected, so the HUB is assigned the highest priority to make sure it becomes the DR


ip ospf network broadcast


ip ospf priority 10

!! With small hub-and-spoke networks it is possible to configure static mGRE by pre-configuring the tunnel destination, but then you cannot set the tunnel mode to multipoint


tunnel source Serial0/0


tunnel mode gre multipoint

!! Set the tunnel identification key; it must be identical on all tunnel routers (here it is set to the same value as the NHRP network-id configured previously)


tunnel key 1

FR configuration:

interface Serial0/0
ip address 192.168.100.1 255.255.255.0
encapsulation frame-relay

serial restart-delay 0

frame-relay map ip 192.168.100.2 101
broadcast

frame-relay map ip 192.168.100.3 103
broadcast

no frame-relay inverse-arp

Routing configuration:

router ospf 10
router-id 1.1.1.1
network 10.10.20.0 0.0.0.255 area 100


network 172.16.0.0 0.0.255.255 area 0

SpokeA:

mGRE configuration:

interface Tunnel0
ip address 172.16.0.2 255.255.0.0
ip nhrp authentication cisco

!!All multicast traffic will be forwarded to the NBMA next hop IP (HUB).

ip nhrp map multicast 192.168.100.1

!!All spokes know in advance the HUB NBMA and tunnel IP addresses which are static.

ip nhrp map 172.16.0.1 192.168.100.1

ip nhrp network-id 1

ip nhrp nhs 172.16.0.1

ip ospf network point-to-multipoint

tunnel source Serial0/0.201

tunnel destination 192.168.100.1

tunnel key 1

FR configuration:

interface Serial0/0
no ip address
encapsulation frame-relay

serial restart-delay 0

no frame-relay inverse-arp

 

interface Serial0/0.201 multipoint

ip address 192.168.100.2 255.255.255.0

frame-relay map ip 192.168.100.1 201 broadcast

Routing configuration:

router ospf 10
router-id 200.200.200.200
network 20.20.20.0 0.0.0.255 area 200

network 172.16.0.0 0.0.255.255 area 0

SpokeB:

mGRE configuration:

interface Tunnel0
ip address 172.16.0.3 255.255.0.0
no ip redirects

ip pim sparse-dense-mode

ip nhrp authentication cisco

ip nhrp map multicast 192.168.100.1

ip nhrp map 172.16.0.1 192.168.100.1

ip nhrp network-id 1

ip nhrp nhs 172.16.0.1

ip ospf network broadcast

ip ospf priority 0

tunnel source Serial0/0.301

tunnel mode gre multipoint

tunnel key 1

FR configuration:

interface Serial0/0
no ip address
encapsulation frame-relay

serial restart-delay 0

no frame-relay inverse-arp

 

interface Serial0/0.301 multipoint

ip address 192.168.100.3 255.255.255.0

frame-relay map ip 192.168.100.1 301 broadcast

Routing configuration:

router ospf 10
router-id 3.3.3.3

network 172.16.0.0 0.0.255.255 area 0

network 192.168.39.0 0.0.0.255 area 300

RP (SpokeBnet):

interface Loopback0
ip address 192.168.38.1 255.255.255.255

ip pim sparse-dense-mode

router ospf 10

network 192.168.38.1 0.0.0.0 area 300

ip pim send-rp-announce Loopback0 scope 32

Mapping Agent (HUBnet):

interface Loopback0
ip address 10.0.0.1 255.255.255.255

ip pim sparse-dense-mode

router ospf 10

network 10.0.0.1 0.0.0.0 area 100

ip pim send-rp-discovery Loopback0 scope 32

Here is the result:

HUB:

HUB# sh ip nhrp
172.16.0.2/32 via 172.16.0.2, Tunnel0 created 01:06:52, expire 01:34:23
Type: dynamic, Flags: authoritative unique registered

NBMA address: 192.168.100.2

172.16.0.3/32 via 172.16.0.3, Tunnel0 created 01:06:35, expire 01:34:10

Type: dynamic, Flags: authoritative unique registered

NBMA address: 192.168.100.3

HUB#

The HUB has dynamically learnt the spokes’ NBMA addresses and the corresponding tunnel IP addresses.

HUB#sh ip route
Codes: C – connected, S – static, R – RIP, M – mobile, B – BGP
D – EIGRP, EX – EIGRP external, O – OSPF, IA – OSPF inter area

N1 – OSPF NSSA external type 1, N2 – OSPF NSSA external type 2

E1 – OSPF external type 1, E2 – OSPF external type 2

i – IS-IS, su – IS-IS summary, L1 – IS-IS level-1, L2 – IS-IS level-2

ia – IS-IS inter area, * – candidate default, U – per-user static route

o – ODR, P – periodic downloaded static route

 

Gateway of last resort is not set

 

1.0.0.0/32 is subnetted, 1 subnets

C 1.1.1.1 is directly connected, Loopback0

20.0.0.0/32 is subnetted, 1 subnets

O IA 20.20.20.20 [110/11112] via 172.16.0.2, 01:08:26, Tunnel0

O IA 192.168.40.0/24 [110/11113] via 172.16.0.3, 01:08:26, Tunnel0

C 172.16.0.0/16 is directly connected, Tunnel0

192.168.38.0/32 is subnetted, 1 subnets

O IA 192.168.38.1 [110/11113] via 172.16.0.3, 01:08:26, Tunnel0

O IA 192.168.39.0/24 [110/11112] via 172.16.0.3, 01:08:26, Tunnel0

10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks

O 10.0.0.2/32 [110/2] via 10.10.20.3, 01:09:06, FastEthernet1/0

O 10.10.10.0/24 [110/2] via 10.10.20.3, 01:09:06, FastEthernet1/0

O 10.0.0.1/32 [110/2] via 10.10.20.3, 01:09:06, FastEthernet1/0

C 10.10.20.0/24 is directly connected, FastEthernet1/0

C 192.168.100.0/24 is directly connected, Serial0/0

HUB#

The HUB has learnt all the spokes’ local network IP addresses; note that all learnt routes point to tunnel IP addresses, because the routing protocol runs on top of the logical topology, not the physical one (Figure2).

Figure2 : Logical topology

HUB#sh ip pim neighbors
PIM Neighbor Table
Neighbor Interface Uptime/Expires Ver DR

Address Prio/Mode

172.16.0.3 Tunnel0 01:06:17/00:01:21 v2 1 / DR S

172.16.0.2 Tunnel0 01:06:03/00:01:40 v2 1 / S

10.10.20.3 FastEthernet1/0 01:07:24/00:01:15 v2 1 / DR S

HUB#

PIM neighbor relationships are established after enabling PIM sparse-dense mode on the tunnel interfaces.

SpokeBnet#
*Mar 1 01:16:22.055: Auto-RP(0): Build RP-Announce for 192.168.38.1, PIMv2/v1, ttl 32, ht 181
*Mar 1 01:16:22.059: Auto-RP(0): Build announce entry for (224.0.0.0/4)

*Mar 1 01:16:22.063: Auto-RP(0): Send RP-Announce packet on FastEthernet0/0

*Mar 1 01:16:22.063: Auto-RP(0): Send RP-Announce packet on FastEthernet1/0

*Mar 1 01:16:22.067: Auto-RP: Send RP-Announce packet on Loopback0

SpokeBnet#

The RP (SpokeBnet) sends RP-Announce messages to all mapping agents listening on 224.0.1.39.

Hubnet#
*Mar 1 01:16:17.039: Auto-RP(0): Received RP-announce, from 192.168.38.1, RP_cnt 1, ht 181
*Mar 1 01:16:17.043: Auto-RP(0): Update (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

Hubnet#

*Mar 1 01:16:49.267: Auto-RP(0): Build RP-Discovery packet

*Mar 1 01:16:49.271: Auto-RP: Build mapping (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1,

*Mar 1 01:16:49.275: Auto-RP(0): Send RP-discovery packet on FastEthernet0/0 (1 RP entries)

*Mar 1 01:16:49.275: Auto-RP(0): Send RP-discovery packet on FastEthernet1/0 (1 RP entries)

Hubnet#

HUBnet, the mapping agent (MA) listening on 224.0.1.39, has received the RP-Announces from the RP (SpokeBnet), updated its records and sent RP-Discovery messages to all PIM-SM routers on 224.0.1.40.

HUB#
*Mar 1 01:16:47.059: Auto-RP(0): Received RP-discovery, from 10.0.0.1, RP_cnt 1, ht 181
*Mar 1 01:16:47.063: Auto-RP(0): Update (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

HUB#

 

HUB#sh ip pim rp
Group: 239.255.1.1, RP: 192.168.38.1, v2, v1, uptime 01:11:49, expires 00:02:44
HUB#

The HUB, as an example, has received the group-to-RP mapping information from the mapping agent and now knows the RP IP address.

Now let’s take a look at the multicast routing table of the RP:

SpokeBnet#sh ip mroute
IP Multicast Routing Table
Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:39:00/stopped, RP 192.168.38.1, flags: SJC

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:38:22/00:02:25

 

(10.10.10.1, 239.255.1.1), 00:39:00/00:02:58, flags: T


Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1


Outgoing interface list:


FastEthernet1/0, Forward/Sparse-Dense, 00:38:22/00:02:25

 

(*, 224.0.1.39), 01:24:31/stopped, RP 0.0.0.0, flags: D

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:24:31/00:00:00

 

(192.168.38.1, 224.0.1.39), 01:24:31/00:02:28, flags: T

Incoming interface: Loopback0, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:24:31/00:00:00

 

(*, 224.0.1.40), 01:25:42/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:25:40/00:00:00

Loopback0, Forward/Sparse-Dense, 01:25:42/00:00:00

 

(10.0.0.1, 224.0.1.40), 01:23:39/00:02:51, flags: LT

Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1

Outgoing interface list:

Loopback0, Forward/Sparse-Dense, 01:23:39/00:00:00

 

SpokeBnet#

(*, 239.255.1.1) – The shared tree, rooted at the RP, used to push multicast traffic to receivers; the “J” flag indicates that traffic has switched from the RPT to the SPT.

(10.10.10.1, 239.255.1.1) – The SPT used to forward traffic from the source to the receiver; it receives traffic on Fa0/0 and forwards it out of Fa1/0.

(*, 224.0.1.39) and (*, 224.0.1.40) – The service groups; because PIM sparse-dense mode is used, traffic for these groups is forwarded to all PIM routers using dense mode, hence the flag “D”.

This way we configured multicast over NBMA using mGRE: no Layer 2 restrictions.

By the way, we are just one step away from DMVPN 🙂 all we have to do is configure the IPsec VPN that will protect our mGRE tunnel, so let’s do it!

!! IKE phase I parameters
crypto isakmp policy 1
!! 3des as the encryption algorithm

encryption 3des

!! authentication type: simple preshared keys

authentication pre-share

!! Diffie Helman group2 for the exchange of the secret key

group 2

!! isakmp peers are not set because the HUB doesn’t know them yet; they are learned dynamically by NHRP within mGRE

crypto isakmp key cisco address 0.0.0.0 0.0.0.0

crypto ipsec transform-set MyESP-3DES-SHA esp-3des esp-sha-hmac

mode transport

crypto ipsec profile My_profile

set transform-set MyESP-3DES-SHA

 

int tunnel 0

tunnel protection ipsec profile My_profile
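
The same crypto configuration must also be applied on SpokeA and SpokeB; here is a sketch, assuming parameters identical to the HUB (only the protected tunnel interface differs):

!! On each spoke: same ISAKMP policy, wildcard pre-shared key, transform set and profile

crypto isakmp policy 1

encryption 3des

authentication pre-share

group 2

crypto isakmp key cisco address 0.0.0.0 0.0.0.0

crypto ipsec transform-set MyESP-3DES-SHA esp-3des esp-sha-hmac

mode transport

crypto ipsec profile My_profile

set transform-set MyESP-3DES-SHA

int tunnel 0

tunnel protection ipsec profile My_profile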

 

HUB#sh crypto isakmp sa
dst src state conn-id slot status
192.168.100.1 192.168.100.2 QM_IDLE 2 0 ACTIVE

192.168.100.1 192.168.100.3 QM_IDLE 1 0 ACTIVE

 

HUB#

 

HUB#sh crypto ipsec sa
 interface: Tunnel0

Crypto map tag: Tunnel0-head-0, local addr 192.168.100.1

 

protected vrf: (none)

local ident (addr/mask/prot/port): (192.168.100.1/255.255.255.255/47/0)

remote ident (addr/mask/prot/port): (192.168.100.2/255.255.255.255/47/0)


current_peer 192.168.100.2 port 500

PERMIT, flags={origin_is_acl,}

#pkts encaps: 1248, #pkts encrypt: 1248, #pkts digest: 1248

#pkts decaps: 129, #pkts decrypt: 129, #pkts verify: 129

#pkts compressed: 0, #pkts decompressed: 0

#pkts not compressed: 0, #pkts compr. failed: 0

#pkts not decompressed: 0, #pkts decompress failed: 0

#send errors 52, #recv errors 0

 


local crypto endpt.: 192.168.100.1, remote crypto endpt.: 192.168.100.2

path mtu 1500, ip mtu 1500

current outbound spi: 0xCEFE3AC2(3472767682)

 

inbound esp sas:


spi: 0x852C8AE0(2234288864)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2003, flow_id: SW:3, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4448676/3482)

IV size: 8 bytes

replay detection support: Y


Status: ACTIVE

 

inbound ah sas:

 

inbound pcp sas:

 

outbound esp sas:


spi: 0xCEFE3AC2(3472767682)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2004, flow_id: SW:4, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4447841/3479)

IV size: 8 bytes

replay detection support: Y

Status: ACTIVE

 

outbound ah sas:

 

outbound pcp sas:

 

protected vrf: (none)

local ident (addr/mask/prot/port): (192.168.100.1/255.255.255.255/47/0)

remote ident (addr/mask/prot/port): (192.168.100.3/255.255.255.255/47/0)

current_peer 192.168.100.3 port 500

PERMIT, flags={origin_is_acl,}

#pkts encaps: 1309, #pkts encrypt: 1309, #pkts digest: 1309

#pkts decaps: 23, #pkts decrypt: 23, #pkts verify: 23

#pkts compressed: 0, #pkts decompressed: 0

#pkts not compressed: 0, #pkts compr. failed: 0

#pkts not decompressed: 0, #pkts decompress failed: 0

#send errors 26, #recv errors 0

 


local crypto endpt.: 192.168.100.1, remote crypto endpt.: 192.168.100.3

path mtu 1500, ip mtu 1500

current outbound spi: 0xD5D509D2(3587508690)

 

inbound esp sas:


spi: 0x4507681A(1158113306)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2001, flow_id: SW:1, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4588768/3477)

IV size: 8 bytes

replay detection support: Y


Status: ACTIVE

 

inbound ah sas:

 

inbound pcp sas:

 

outbound esp sas:


spi: 0xD5D509D2(3587508690)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2002, flow_id: SW:2, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4587889/3476)

IV size: 8 bytes

replay detection support: Y


Status: ACTIVE

 

outbound ah sas:

 

outbound pcp sas:

HUB#

ISAKMP and IPSec phases are successfully established and security associations are formed.

Multicast over DMVPN works perfectly! That’s it!

Multicast over FR NBMA part3 – (PIM-NBMA mode and Auto-RP)


In this third part of the document “Multicast over FR NBMA” we will see how, with an RP in one spoke network and the MA in the central site, we can defeat the dense-mode issue of forwarding the service groups 224.0.1.39 and 224.0.1.40 from one spoke to another.

Such placement of the MA in the central site as a proxy, is ideal to insure the communication between RP and all spokes through separated PVCs.

In this LAB the RP is configured in SpokeBnet (SpokeB site) and the mapping agent in Hubnet (central site).

Figure1: Lab topology


CONFIGURATION

!! The PIM mode on all interfaces of all routers is set to “sparse-dense”

ip pim sparse-dense-mode

Configure SpokeBnet to become an RP

SpokeBnet(config)#ip pim send-rp-announce Loopback0 scope 32

Configure Hubnet to become a mapping agent

Hubnet(config)#ip pim send-rp-discovery loo0 scope 32

And the result across FR is as follows:

SpokeBnet:

*Mar 1 01:02:03.083: Auto-RP(0): Build RP-Announce for 192.168.38.1, PIMv2/v1, ttl 32, ht 181

*Mar 1 01:02:03.087: Auto-RP(0): Build announce entry for (224.0.0.0/4)

*Mar 1 01:02:03.091: Auto-RP(0): Send RP-Announce packet on FastEthernet0/0

*Mar 1 01:02:03.091: Auto-RP(0): Send RP-Announce packet on FastEthernet1/0

The RP announces itself to all mapping agents in the network (on the multicast group 224.0.1.39).

HUBnet:

Hubnet(config)#

*Mar 1 01:01:01.487: Auto-RP(0): Received RP-announce, from 192.168.38.1, RP_cnt 1, ht 181

*Mar 1 01:01:01.491: Auto-RP(0): Added with (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

*Mar 1 01:01:01.523: Auto-RP(0): Build RP-Discovery packet

*Mar 1 01:01:01.523: Auto-RP: Build mapping (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1,

*Mar 1 01:01:01.527: Auto-RP(0): Send RP-discovery packet on FastEthernet0/0 (1 RP entries)

*Mar 1 01:01:01.535: Auto-RP(0): Send RP-discovery packet on FastEthernet1/0 (1 RP entries)

*Mar 1 01:01:01.539: Auto-RP: Send RP-discovery packet on Loopback0 (1 RP entries)

Hubnet(config)#

The mapping agent has received the RP-Announces from the RP, updated its records and sent the group-to-RP information (in this case: the RP is responsible for all multicast groups) to the multicast destination 224.0.1.40 (all PIM routers).

SpokeA#

*Mar 1 01:01:53.379: Auto-RP(0): Received RP-discovery, from 10.0.0.1, RP_cnt 1, ht 181

*Mar 1 01:01:53.383: Auto-RP(0): Update (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

SpokeA#

 

SpokeA#sh ip pim rp

Group: 239.255.1.1, RP: 192.168.38.1, v2, v1, uptime 00:18:18, expires 00:02:32

SpokeA#

As an example, SpokeA, across the HUB and the FR cloud, has received the group-to-RP mapping information and updated its records.

SpokeBnet#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:35:35/00:03:25, RP 192.168.38.1, flags: SJC

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:06:59/00:02:39

FastEthernet0/0, Forward/Sparse-Dense, 00:35:35/00:03:25

 

(10.10.10.1, 239.255.1.1), 00:03:43/00:02:47, flags: T

Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:03:43/00:02:39

 

(*, 224.0.1.39), 00:41:36/stopped, RP 0.0.0.0, flags: D

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 00:41:36/00:00:00

 

(192.168.38.1, 224.0.1.39), 00:41:36/00:02:23, flags: T

Incoming interface: Loopback0, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 00:41:37/00:00:00

 

(*, 224.0.1.40), 01:03:55/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:03:50/00:00:00

FastEthernet1/0, Forward/Sparse-Dense, 01:03:55/00:00:00

 

(10.0.0.1, 224.0.1.40), 00:35:36/00:02:07, flags: LT

Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:35:36/00:00:00

 

SpokeBnet#

(*, 239.255.1.1) – This is the shared tree built for the receiver and rooted at the RP (incoming interface = Null); the flag “J” indicates that the traffic was switched from the RPT to the SPT.

(10.10.10.1, 239.255.1.1) – The SPT in question, receiving traffic through Fa0/0 and forwarding it out of Fa1/0.

Both (*, 224.0.1.39) and (*, 224.0.1.40) are flagged with “D”, which means they were forwarded using dense mode; this is the preliminary operation of Auto-RP.

SpokeBnet#mtrace 10.10.10.1 192.168.40.104 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 192.168.40.104 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 192.168.40.104

-1 192.168.40.1 PIM Reached RP/Core [10.10.10.0/24]

-2 192.168.39.1 PIM [10.10.10.0/24]

-3 192.168.100.1 PIM [10.10.10.0/24]

-4 10.10.20.3 PIM [10.10.10.0/24]

SpokeBnet#

ClientB is receiving the multicast traffic as needed.

The following outputs show how the SpokeA network also receives the multicast traffic 239.255.1.1:

SpokeA# mtrace 10.10.10.1 20.20.20.20 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 20.20.20.20 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 20.20.20.20

-1 20.20.20.20 PIM [10.10.10.0/24]

-2 192.168.100.1 PIM [10.10.10.0/24]

-3 10.10.20.3 PIM [10.10.10.0/24]

-4 10.10.10.1

SpokeA#

 

SpokeA#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:59:33/stopped, RP 192.168.38.1, flags: SJCL

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse-Dense, 00:59:33/00:02:01

 

(10.10.10.1, 239.255.1.1), 00:03:23/00:02:59, flags: LJT

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse-Dense, 00:03:23/00:02:01

 

(*, 224.0.1.40), 00:59:33/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial0/0.201, Forward/Sparse-Dense, 00:59:33/00:00:00

 

(10.0.0.1, 224.0.1.40), 00:44:18/00:02:15, flags: PLTX

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list: Null

 

SpokeA#

One more time Auto-RP in action: PIM has switched from RPT (*, 239.255.1.1) – “J” to SPT (10.10.10.1, 239.255.1.1) – “TJ”

PIM-SM, Auto-RP protocol


OVERVIEW

The two methods available for dynamically finding the RP in a multicast network are Auto-RP and BSR (Boot-Strap Router). In this lab we will focus our attention on the Cisco proprietary protocol Auto-RP, and we will also see how a certain “dense mode” is still haunting us : )

As we have seen in a previous post (PIM-SM static RP), manually configuring a Rendez-Vous Point (RP) is very simple and straightforward for small networks; however, in relatively big networks static RP configuration is burdensome and prone to errors.

With static RP configuration, each PIM-SM router connects to the preconfigured RP to join the multicast group 224.0.1.40 by building a shared tree (RPT) with it.

The obvious idea behind Auto-RP is to make PIM-SM routers learn the RP address dynamically. The issue presents itself as follows:

PIM-SM routers have to join the multicast group 224.0.1.40 by building a shared tree (RPT) with the RP in order to learn the address of the RP: a chicken-and-egg problem.

a) One solution is to statically configure an initial RP to let PIM-SM routers join the group 224.0.1.40 and possibly learn other RPs that could override the statically configured one. This is again a manual intervention that breaks the concept and logic behind “dynamic” learning, and what if the first (initial) RP fails?

ip pim rp-address <X.X.X.X> <acl_nbr> [override]

access-list <acl_nbr> permit <Y.Y.Y.Y>

access-list <acl_nbr> deny any

The ACL (group list) serves as a protection mechanism against rogue RPs.
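
For illustration, here is the template filled in (all values hypothetical): 10.9.9.9 is the initial static RP, restricted to the two Auto-RP groups so routers can bootstrap and then learn the real RPs dynamically; everything else falls to the ACL’s implicit deny:

!! Hypothetical example, positional ACL form
ip pim rp-address 10.9.9.9 5

access-list 5 permit 224.0.1.39

access-list 5 permit 224.0.1.40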

b) Ultimately the origin of this issue is how to join the first two multicast groups, 224.0.1.39 and 224.0.1.40, so what if we use PIM-DM for that purpose? Here comes the concept of PIM sparse-dense mode: the multicast groups 224.0.1.39 and 224.0.1.40 are flooded to the network with PIM-DM, because all PIM-SM routers need to join them (which is consistent with the dense-mode concept), and then all other groups use the dynamically learned RP.

With this particular PIM mode it is possible to configure from the RP which groups will use sparse mode and which groups will use dense mode.

The post is organized as follows:

CONFIGURATION

1) CONFIGURATION ANALYSIS

a) no multicast traffic in the network

b) With multicast traffic

2) RP REDUNDANCY

a) Primary RP failure

b) Back from failure

3) Group and RP filtering policy

a) RP-based filtering

 

CONFIGURATION

Figure1 illustrate the topology used in this lab.

Figure1 topology

A best practice is to enable PIM sparse-dense mode on all interfaces that are likely to forward multicast traffic.

R1:

ip multicast-routing

int e0/0

ip pim sparse-dense-mode

int s1/1

ip pim sparse-dense-mode

int s1/0

ip pim sparse-dense-mode

R2:

ip multicast-routing

int s1/0

ip pim sparse-dense-mode

int s1/1

ip pim sparse-dense-mode

int s1/2

ip pim sparse-dense-mode

R3:

ip multicast-routing

int s1/0

ip pim sparse-dense-mode

int s1/1

ip pim sparse-dense-mode

int loo0

ip pim sparse-dense-mode

R5:

ip multicast-routing

int s1/0

ip pim sparse-dense-mode

int s1/1

ip pim sparse-dense-mode

int s1/2

ip pim sparse-dense-mode

int loo0

ip pim sparse-dense-mode

R4:

ip multicast-routing

int e0/0

ip pim sparse-dense-mode

int s1/0

ip pim sparse-dense-mode

int s1/1

ip pim sparse-dense-mode

R5:(primary RP – 10.5.5.5)

ip pim send-rp-announce Loopback0 scope 255 

R3:(Secondary RP – 10.3.3.3)

ip pim send-rp-announce Loopback0 scope 255 

R2:(mapping agent)

ip pim send-rp-discovery Loopback0 scope 255 

 

 

1) CONFIGURATION ANALYSIS

a) no multicast traffic in the network

R5

R5(config-if)#

*Mar 1 03:46:01.407: Auto-RP(0): Build announce entry for (224.0.0.0/4)

*Mar 1 03:46:01.407: Auto-RP(0): Send RP-Announce packet on Serial1/0

*Mar 1 03:46:01.411: Auto-RP(0): Send RP-Announce packet on Serial1/1

*Mar 1 03:46:01.415: Auto-RP(0): Send RP-Announce packet on Serial1/2

*Mar 1 03:46:01.419: Auto-RP: Send RP-Announce packet on Loopback0

R5(config-if)#do un all 

R5 is announcing itself to the mapping agents on 224.0.1.39

R2(config)#

*Mar 1 03:46:02.587: Auto-RP(0): Build RP-Discovery packet

*Mar 1 03:46:02.587: Auto-RP: Build mapping (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1,

*Mar 1 03:46:02.591: Auto-RP(0): Send RP-discovery packet on Serial1/0 (1 RP entries)

*Mar 1 03:46:02.595: Auto-RP(0): Send RP-discovery packet on Serial1/1 (1 RP entries)

*Mar 1 03:46:02.599: Auto-RP(0): Send RP-discovery packet on Serial1/2 (1 RP entries)

*Mar 1 03:46:02.603: Auto-RP: Send RP-discovery packet on Loopback0 (1 RP entries)

*Mar 1 03:46:18.471: Auto-RP(0): Received RP-announce, from 10.5.5.5, RP_cnt 1, ht 181

*Mar 1 03:46:18.475: Auto-RP(0): Update (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

*Mar 1 03:46:18.479: Auto-RP(0): Received RP-announce, from 10.5.5.5, RP_cnt 1, ht 181

*Mar 1 03:46:18.483: Auto-RP(0): Update (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

*Mar 1 03:46:33.235: Auto-RP(0): Received RP-announce, from 10.3.3.3, RP_cnt 1, ht 181

*Mar 1 03:46:33.239: Auto-RP(0): Update (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

*Mar 1 03:46:33.243: Auto-RP(0): Received RP-announce, from 10.3.3.3, RP_cnt 1, ht 181

*Mar 1 03:46:33.247: Auto-RP(0): Update (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

R2(config)#do un all

R2, the mapping agent, is receiving RP-Announces from both RPs, R5 and R3. It updates its records and sends them to all PIM routers on 224.0.1.40.

R2(config)#do sh ip pim rp ma

PIM Group-to-RP Mappings

This system is an RP-mapping agent (Loopback0)

 

Group(s) 224.0.0.0/4


RP 10.5.5.5 (?), v2v1

Info source: 10.5.5.5 (?), elected via Auto-RP

Uptime: 00:07:18, expires: 00:02:51


RP 10.3.3.3 (?), v2v1

Info source: 10.3.3.3 (?), via Auto-RP

Uptime: 00:05:50, expires: 00:02:09

R2(config)# 

R2 has records for both R5 and R3, but selected R5 as the network RP because of its higher IP address.

R1

R1(config)#

*Mar 1 03:46:03.683: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 1, ht 181

*Mar 1 03:46:03.687: Auto-RP(0): Update (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

R1(config)#do un all 

R1, the source-side PIM-SM router, has already joined the multicast group 224.0.1.40; it is thereby able to receive RP-Discovery messages from the mapping agent R2, and it receives only the elected RP, R5 (10.5.5.5).

R4

R4(config)#

*Mar 1 03:45:46.459: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 1, ht 181

*Mar 1 03:45:46.463: Auto-RP(0): Update (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

R4(config)# 

The same goes for R4, the PIM DR (directly connected to the multicast client).

b) With multicast traffic

The multicast source connected to R1 starts sending traffic to the multicast group 239.255.1.1 and the client connected to R4 receives it.

R1(config)#do sh ip pim rp

Group: 239.255.1.1, RP: 10.5.5.5, v2, uptime 00:17:25, expires 00:02:26

R1(config)# 

 

R4(config)#do sh ip pim rp

Group: 239.255.1.1, RP: 10.5.5.5, v2, v1, uptime 00:16:43, expires 00:02:09

R4(config)# 

Both R1 and R4 now have the correct information about the RP and the group it serves, 239.255.1.1.

R1 has registered itself with the RP through a source path tree (SPT) and R4 has joined the shared tree (RPT) rooted at the RP and received the first multicast packets forwarded by the RP.

Now R4 can build a direct SPT, as the path through the RP is not optimal, and start receiving multicast traffic directly from R1.
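
This immediate switchover is the default IOS behavior (the SPT threshold is 0 kbps); should you want receivers to stay on the shared tree instead, the threshold can be raised. A hedged aside, not used in this lab:

!! Global configuration: never switch from the RPT to the SPT
ip pim spt-threshold infinity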

Let’s analyze the multicast routing table on R4 and R5 (RP) that can tell us a lot about the topology:

R4#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:03:33/stopped, RP 10.5.5.5, flags: SC

Incoming interface: Serial1/1, RPF nbr 192.168.54.5

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 00:03:33/00:02:42

 

(10.10.10.1, 239.255.1.1), 00:03:33/00:02:57, flags: T

Incoming interface: Serial1/0, RPF nbr 192.168.14.1

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 00:03:33/00:02:42

 

(*, 224.0.1.39), 04:05:44/stopped, RP 0.0.0.0, flags: D

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial1/1, Forward/Sparse-Dense, 04:05:44/00:00:00

Serial1/0, Forward/Sparse-Dense, 04:05:44/00:00:00

 

(10.5.5.5, 224.0.1.39), 00:01:49/00:01:18, flags: PT

Incoming interface: Serial1/1, RPF nbr 192.168.54.5

Outgoing interface list:

Serial1/0, Prune/Sparse-Dense, 00:02:14/00:00:45

 

(*, 224.0.1.40), 04:06:11/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial1/1, Forward/Sparse-Dense, 04:06:10/00:00:00

Serial1/0, Forward/Sparse-Dense, 04:06:10/00:00:00

Ethernet0/0, Forward/Sparse-Dense, 04:06:11/00:00:00

 

(10.2.2.2, 224.0.1.40), 04:05:29/00:02:24, flags: LT

Incoming interface: Serial1/0, RPF nbr 192.168.14.1

Outgoing interface list:

Serial1/1, Prune/Sparse-Dense, 00:00:40/00:02:22, A

Ethernet0/0, Forward/Sparse-Dense, 04:05:29/00:00:00

 

R4# 

– (10.10.10.1, 239.255.1.1): PIM-SM has switched to the SPT, the shortest path tree (“T”: SPT-bit set), between the multicast source router 10.10.10.1 and R4, with Serial1/0 as the incoming interface and Ethernet0/0 as the outgoing interface.

– (*, 224.0.1.39) and (*, 224.0.1.40) are flagged with “D”: these two groups have been joined using dense mode so that R4 is able to learn dynamically about the RP via 224.0.1.40.

– (*, 239.255.1.1): This is the RPT, the shared tree used to join the RP in the first phase before switching to the SPT, hence the “stopped” timer.

– (10.2.2.2, 224.0.1.40): This SPT is used between R4 and R2 (the mapping agent) to forward the 224.0.1.40 group traffic that carries the information about the RP.

R4#sh ip pim rp

Group: 239.255.1.1, RP: 10.5.5.5, v2, v1, uptime 04:10:28, expires 00:02:17

R4# 

on R5 (RP):

R5(config-router)#do sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:46:31/00:03:29, RP 10.5.5.5, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial1/1, Forward/Sparse-Dense, 00:46:21/00:03:29

 

(10.10.10.1, 239.255.1.1), 00:46:31/00:02:35, flags: PT

Incoming interface: Serial1/1, RPF nbr 192.168.54.4

Outgoing interface list: Null

R5(config-router)# 

– (*, 239.255.1.1) this is the shared tree between the RP and all PIM-SM routers that wants to join the specific group 239.255.1.1.

– (10.10.10.1, 239.255.1.1) is the SPT built between R1 and R5.

 

2) RP REDUNDANCY

a) Primary RP failure

To simulate RP failure in this particular topology, the R5 loopback interface serving as the RP IP address is shut down, so R5 no longer acts as RP but still forwards traffic.

Let’s analyze the debug outputs from various routers in the network:

R2 (the mapping agent):

Before primary RP (R5) failure:

R2(config)#do sh ip pim rp mapp

PIM Group-to-RP Mappings

This system is an RP-mapping agent (Loopback0)

 

Group(s) 224.0.0.0/4


RP 10.5.5.5 (?), v2v1


Info source: 10.5.5.5 (?), elected via Auto-RP

Uptime: 05:42:21, expires: 00:01:39


RP 10.3.3.3 (?), v2v1


Info source: 10.3.3.3 (?), via Auto-RP

Uptime: 05:40:53, expires: 00:02:47

R2(config)# 

After shutting down R5 loopback0:

*Mar 1 09:25:14.685: Auto-RP(0): Received RP-announce, from 10.3.3.3, RP_cnt 1, ht 181

*Mar 1 09:25:14.689: Auto-RP(0): Update (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

*Mar 1 09:25:14.693: Auto-RP(0): Received RP-announce, from 10.3.3.3, RP_cnt 1, ht 181

*Mar 1 09:25:14.697: Auto-RP(0): Update (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

*Mar 1 09:26:07.821: Auto-RP(0): Mapping (224.0.0.0/4, RP:10.5.5.5) expired,

*Mar 1 09:26:07.857: Auto-RP(0): Build RP-Discovery packet

*Mar 1 09:26:07.857: Auto-RP: Build mapping (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1,

*Mar 1 09:26:07.861: Auto-RP(0): Send RP-discovery packet on Serial1/0 (1 RP entries)

*Mar 1 09:26:07.865: Auto-RP(0): Send RP-discovery packet on Serial1/1 (1 RP entries)

*Mar 1 09:26:07.869: Auto-RP(0): Send RP-discovery packet on Serial1/2 (1 RP entries)

R2 keeps receiving R3’s RP-Announce messages but has not received R5’s RP-Announces for 3 x (announce interval) = 180 seconds, so the mapping information for the group expires; consequently R3 is selected as the RP for the group and advertised to all PIM-SM routers through RP-Discovery.
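
The 180 seconds come from the default RP-Announce interval of 60 seconds (holdtime = 3 x interval). Failover can be made faster by shortening the interval, a sketch assuming an IOS release that supports the interval keyword:

!! Announce every 10 seconds: the mapping expires about 30 seconds after RP failure
ip pim send-rp-announce Loopback0 scope 255 interval 10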

R1:

R1(config)#

*Mar 1 09:27:09.157: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 1, ht 181

*Mar 1 09:27:09.161: Auto-RP(0): Update (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

R1(config)#

 

R1(config)#do sh ip pim rp

Group: 239.255.1.1, RP: 10.3.3.3, v2, uptime 00:15:45, expires 00:02:06

R1(config)# 

R1 has received the RP-Discovery and updated its RP information.

R4:

R4#

*Mar 1 09:27:01.177: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 1, ht 181

*Mar 1 09:27:01.181: Auto-RP(0): Update (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

R4# 

 

R4#sh ip pim rp

Group: 239.255.1.1, RP: 10.3.3.3, v2, v1, uptime 00:16:58, expires 00:02:56

R4# 

The same for R4.

b) Back from failure

R2:

*Mar 1 09:51:04.429: Auto-RP(0): Received RP-announce, from 10.5.5.5, RP_cnt 1, ht 181

*Mar 1 09:51:04.437: Auto-RP(0): Added with (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

*Mar 1 09:51:04.465: Auto-RP(0): Build RP-Discovery packet

*Mar 1 09:51:04.469: Auto-RP: Build mapping (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1,

*Mar 1 09:51:04.473: Auto-RP(0): Send RP-discovery packet on Serial1/0 (1 RP entries)

*Mar 1 09:51:04.477: Auto-RP(0): Send RP-discovery packet on Serial1/1 (1 RP entries)

*Mar 1 09:51:04.481: Auto-RP(0): Send RP-discovery packet on Serial1/2 (1 RP entries)

R2(config)#do sh ip pim rp mapp

PIM Group-to-RP Mappings

This system is an RP-mapping agent (Loopback0)

 

Group(s) 224.0.0.0/4


RP 10.5.5.5 (?), v2v1


Info source: 10.5.5.5 (?), elected via Auto-RP

Uptime: 00:00:53, expires: 00:02:02


RP 10.3.3.3 (?), v2v1


Info source: 10.3.3.3 (?), via Auto-RP

Uptime: 06:08:25, expires: 00:02:15

R2(config)# 

The mapping agent is again receiving RP-Announces from R5 (whose IP address is higher than that of R3, the acting RP), so it selects R5 and sends it through RP-Discovery to all PIM-SM routers.

R1:

R1(config)#

*Mar 1 09:57:01.997: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 1, ht 181

*Mar 1 09:57:02.001: Auto-RP(0): Update (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

R1(config)# 

 

R1(config)#do sh ip pim rp mapp

PIM Group-to-RP Mappings

 

Group(s) 224.0.0.0/4


RP 10.5.5.5 (?), v2v1


Info source: 10.2.2.2 (?), elected via Auto-RP

Uptime: 00:06:58, expires: 00:02:55

R1(config)# 

R4:

R4#

*Mar 1 09:59:53.349: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 1, ht 181

*Mar 1 09:59:53.353: Auto-RP(0): Update (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

R4#

R4#

R4# sh ip pim rp mapp

PIM Group-to-RP Mappings

 

Group(s) 224.0.0.0/4

RP 10.5.5.5 (?), v2v1


Info source: 10.2.2.2 (?), elected via Auto-RP

Uptime: 00:09:04, expires: 00:02:46

R4# 

R1 and R4 update their group-to-RP mapping information.

 

3- Group and RP filtering policy

a) RP-based filtering

Each RP will be responsible for a particular group: R5 for 239.255.1.1 and R3 for 239.255.1.2.

R3:

ip pim send-rp-announce Loopback0 scope 255 group-list RP_GROUP_ACL

!

!

ip access-list standard RP_GROUP_ACL

permit 239.255.1.2

deny any

R5:

ip pim send-rp-announce Loopback0 scope 255 group-list RP_GROUP_ACL

!

!

ip access-list standard RP_GROUP_ACL

permit 239.255.1.1

deny any

R2 (mapping agent):

*Mar 1 10:15:29.565: Auto-RP(0): Received RP-announce, from 10.3.3.3, RP_cnt 1, ht 181

*Mar 1 10:15:29.569: Auto-RP(0): Update (239.255.1.2/32, RP:10.3.3.3), PIMv2 v1

*Mar 1 10:15:29.573: Auto-RP(0): Update (-224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

*Mar 1 10:15:29.577: Auto-RP(0): Received RP-announce, from 10.3.3.3, RP_cnt 1, ht 181

*Mar 1 10:15:29.581: Auto-RP(0): Update (239.255.1.2/32, RP:10.3.3.3), PIMv2 v1

*Mar 1 10:15:29.585: Auto-RP(0): Update (-224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

*Mar 1 10:15:30.269: Auto-RP(0): Build RP-Discovery packet

*Mar 1 10:15:30.269: Auto-RP: Build mapping (-224.0.0.0/4, RP:10.5.5.5), PIMv2 v1,

*Mar 1 10:15:30.273: Auto-RP: Build mapping (239.255.1.1/32, RP:10.5.5.5), PIMv2 v1.

*Mar 1 10:15:30.277: Auto-RP: Build mapping (239.255.1.2/32, RP:10.3.3.3), PIMv2 v1.

*Mar 1 10:15:30.285: Auto-RP(0): Send RP-discovery packet on Serial1/0 (2 RP entries)

*Mar 1 10:15:30.289: Auto-RP(0): Send RP-discovery packet on Serial1/1 (2 RP entries)

*Mar 1 10:15:30.293: Auto-RP(0): Send RP-discovery packet on Serial1/2 (2 RP entries)

R2 has received the RP-Announces (on 224.0.1.39) from both RPs with the relevant groups; (-224.0.0.0/4) means that the RP is blocking all the remaining multicast traffic.

After populating its group-to-RP mapping information, R2 sends it to all PIM-SM routers on 224.0.1.40.

R2(config)#do sh ip pim rp mapp

PIM Group-to-RP Mappings

This system is an RP-mapping agent (Loopback0)

 

Group(s) (-)224.0.0.0/4


RP 10.5.5.5 (?), v2v1

Info source: 10.5.5.5 (?), elected via Auto-RP

Uptime: 00:28:22, expires: 00:02:03

RP 10.3.3.3 (?), v2v1

Info source: 10.3.3.3 (?), via Auto-RP

Uptime: 06:35:53, expires: 00:02:01

Group(s) 239.255.1.1/32


RP 10.5.5.5 (?), v2v1

Info source: 10.5.5.5 (?), elected via Auto-RP

Uptime: 00:05:55, expires: 00:02:02

Group(s) 239.255.1.2/32


RP 10.3.3.3 (?), v2v1

Info source: 10.3.3.3 (?), elected via Auto-RP

Uptime: 00:05:56, expires: 00:01:58

R2(config)#

R4:

R4#

*Mar 1 10:16:22.505: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 2, ht 181

*Mar 1 10:16:22.509: Auto-RP(0): Update (-224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

*Mar 1 10:16:22.513: Auto-RP(0): Update (239.255.1.1/32, RP:10.5.5.5), PIMv2 v1

*Mar 1 10:16:22.517: Auto-RP(0): Update (239.255.1.2/32, RP:10.3.3.3), PIMv2 v1

R4# 

R4 has received the RP-Discovery messages from R2, the mapping agent.

R4# sh ip pim rp mapp

PIM Group-to-RP Mappings

 

Group(s) (-)224.0.0.0/4


RP 10.5.5.5 (?), v2v1

Info source: 10.2.2.2 (?), elected via Auto-RP

Uptime: 00:28:52, expires: 00:02:29

Group(s) 239.255.1.1/32


RP 10.5.5.5 (?), v2v1

Info source: 10.2.2.2 (?), elected via Auto-RP

Uptime: 00:06:25, expires: 00:02:27

Group(s) 239.255.1.2/32


RP 10.3.3.3 (?), v2v1

Info source: 10.2.2.2 (?), elected via Auto-RP

Uptime: 00:06:26, expires: 00:02:29

R4# 

Now R4 has updated its group-to-RP mapping information as configured on the RPs:

R5 will be responsible for the multicast group 239.255.1.1

R3 will be responsible for the multicast group 239.255.1.2

and any other multicast traffic will be dropped.

CONCLUSION

PIM sparse-dense mode uses dense mode for the 224.0.1.39 and 224.0.1.40 service groups so that PIM-SM routers can join them and dynamically learn about RPs, then join all other multicast groups using the PIM-SM mechanism.
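
One closing note, for IOS releases that support it: if you prefer to keep the interfaces in pure sparse mode, the same bootstrap effect can be obtained by dense-flooding only the two Auto-RP groups:

!! Global configuration: sparse-mode interfaces, dense flooding only for 224.0.1.39 and 224.0.1.40
ip pim autorp listener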
