Multicast over FR NBMA part4 – (multipoint GRE and DMVPN)


This is the fourth part of the document “Multicast over FR NBMA”; this lab focuses on deploying multicast over multipoint GRE (mGRE) and DMVPN.

The main advantage of GRE tunneling is its transport capability: non-IP, broadcast and multicast traffic can be encapsulated inside unicast GRE packets, which are easily carried over Layer 2 technologies such as Frame Relay and ATM.

Because the HUB, SpokeA and SpokeB Frame Relay interfaces are multipoint, we will use multipoint GRE.

Figure1 : lab topology


CONFIGURATION

mGRE configuration:

HUB:

interface Tunnel0
ip address 172.16.0.1 255.255.0.0
no ip redirects

!! PIM sparse-dense mode is enabled on the tunnel, not on the physical interface


ip pim sparse-dense-mode

!! a shared key is used for NHRP authentication


ip nhrp authentication cisco

!! The HUB must replicate multicast traffic to all spokes that have registered with it


ip nhrp map multicast dynamic

!! Enable NHRP on the tunnel interface; the same network-id is used on all participants in this lab


ip nhrp network-id 1

!! Because the OSPF network type is broadcast, a DR will be elected; the HUB is given the highest priority to make sure it becomes the DR


ip ospf network broadcast


ip ospf priority 10

!! With small hub-and-spoke networks it is possible to configure static point-to-point GRE by pre-configuring the tunnel destination, but then the multipoint tunnel mode cannot be set (see the sketch after this tunnel configuration)


tunnel source Serial0/0


tunnel mode gre multipoint

!! Set the tunnel identification key; it must match on all routers of this mGRE cloud (kept equal to the NHRP network-id here for readability)


tunnel key 1
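As the comment above mentions, in a small hub-and-spoke network the HUB could instead terminate one static point-to-point GRE tunnel per spoke; the destination is then fixed and “tunnel mode gre multipoint” is not configured. A minimal sketch, not used in this lab (the 172.17.0.0/30 addressing is purely illustrative):

interface Tunnel1
ip address 172.17.0.1 255.255.255.252
ip pim sparse-dense-mode
tunnel source Serial0/0
tunnel destination 192.168.100.2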

FR configuration:

interface Serial0/0
ip address 192.168.100.1 255.255.255.0
encapsulation frame-relay

serial restart-delay 0

frame-relay map ip 192.168.100.2 101 broadcast

frame-relay map ip 192.168.100.3 103 broadcast

no frame-relay inverse-arp

Routing configuration:

router ospf 10
router-id 1.1.1.1
network 10.10.20.0 0.0.0.255 area 100


network 172.16.0.0 0.0.255.255 area 0

SpokeA:

mGRE configuration:

interface Tunnel0
ip address 172.16.0.2 255.255.0.0

!! PIM is enabled on the tunnel here as well (the PIM neighbor table further below confirms it)

ip pim sparse-dense-mode

ip nhrp authentication cisco

!! All multicast and broadcast traffic will be sent to the HUB's NBMA address (the next-hop server).

ip nhrp map multicast 192.168.100.1

!!All spokes know in advance the HUB NBMA and tunnel IP addresses which are static.

ip nhrp map 172.16.0.1 192.168.100.1

ip nhrp network-id 1

ip nhrp nhs 172.16.0.1

ip ospf network point-to-multipoint

tunnel source Serial0/0.201

tunnel destination 192.168.100.1

tunnel key 1
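Note that SpokeA, as configured above, is effectively the static alternative mentioned in the HUB comments: the tunnel destination is fixed and no multipoint mode is set, so this is a plain point-to-point GRE tunnel toward the HUB. If SpokeA should run mGRE like SpokeB (for example to allow dynamic spoke-to-spoke tunnels later), the change would look like this sketch:

interface Tunnel0
no tunnel destination
tunnel mode gre multipoint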

FR configuration:

interface Serial0/0
no ip address
encapsulation frame-relay

serial restart-delay 0

no frame-relay inverse-arp

 

interface Serial0/0.201 multipoint

ip address 192.168.100.2 255.255.255.0

frame-relay map ip 192.168.100.1 201 broadcast

Routing configuration:

router ospf 10
router-id 200.200.200.200
network 20.20.20.0 0.0.0.255 area 200

network 172.16.0.0 0.0.255.255 area 0

SpokeB:

mGRE configuration:

interface Tunnel0
ip address 172.16.0.3 255.255.0.0
no ip redirects

ip pim sparse-dense-mode

ip nhrp authentication cisco

ip nhrp map multicast 192.168.100.1

ip nhrp map 172.16.0.1 192.168.100.1

ip nhrp network-id 1

ip nhrp nhs 172.16.0.1

ip ospf network broadcast

ip ospf priority 0

tunnel source Serial0/0.301

tunnel mode gre multipoint

tunnel key 1

FR configuration:

interface Serial0/0
no ip address
encapsulation frame-relay

serial restart-delay 0

no frame-relay inverse-arp

 

interface Serial0/0.301 multipoint

ip address 192.168.100.3 255.255.255.0

frame-relay map ip 192.168.100.1 301 broadcast

Routing configuration:

router ospf 10
router-id 3.3.3.3

network 172.16.0.0 0.0.255.255 area 0

network 192.168.39.0 0.0.0.255 area 300

RP (SpokeBnet):

interface Loopback0
ip address 192.168.38.1 255.255.255.255

ip pim sparse-dense-mode

router ospf 10

network 192.168.38.1 0.0.0.0 area 300

ip pim send-rp-announce Loopback0 scope 32

Mapping Agent (HUBnet):

interface Loopback0
ip address 10.0.0.1 255.255.255.255

ip pim sparse-dense-mode

router ospf 10

network 10.0.0.1 0.0.0.0 area 100

ip pim send-rp-discovery Loopback0 scope 32

Here is the result:

HUB:

HUB# sh ip nhrp
172.16.0.2/32 via 172.16.0.2, Tunnel0 created 01:06:52, expire 01:34:23
Type: dynamic, Flags: authoritative unique registered

NBMA address: 192.168.100.2

172.16.0.3/32 via 172.16.0.3, Tunnel0 created 01:06:35, expire 01:34:10

Type: dynamic, Flags: authoritative unique registered

NBMA address: 192.168.100.3

HUB#

The HUB has dynamically learned the spokes’ NBMA addresses and the corresponding tunnel IP addresses.
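The registration can also be checked from the spoke side; the following commands (outputs not shown here) display the spoke's NHRP cache and the state of its next-hop server:

SpokeA#sh ip nhrp
SpokeA#sh ip nhrp nhs detail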

HUB#sh ip route
Codes: C – connected, S – static, R – RIP, M – mobile, B – BGP
D – EIGRP, EX – EIGRP external, O – OSPF, IA – OSPF inter area

N1 – OSPF NSSA external type 1, N2 – OSPF NSSA external type 2

E1 – OSPF external type 1, E2 – OSPF external type 2

i – IS-IS, su – IS-IS summary, L1 – IS-IS level-1, L2 – IS-IS level-2

ia – IS-IS inter area, * – candidate default, U – per-user static route

o – ODR, P – periodic downloaded static route

 

Gateway of last resort is not set

 

1.0.0.0/32 is subnetted, 1 subnets

C 1.1.1.1 is directly connected, Loopback0

20.0.0.0/32 is subnetted, 1 subnets

O IA 20.20.20.20 [110/11112] via 172.16.0.2, 01:08:26, Tunnel0

O IA 192.168.40.0/24 [110/11113] via 172.16.0.3, 01:08:26, Tunnel0

C 172.16.0.0/16 is directly connected, Tunnel0

192.168.38.0/32 is subnetted, 1 subnets

O IA 192.168.38.1 [110/11113] via 172.16.0.3, 01:08:26, Tunnel0

O IA 192.168.39.0/24 [110/11112] via 172.16.0.3, 01:08:26, Tunnel0

10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks

O 10.0.0.2/32 [110/2] via 10.10.20.3, 01:09:06, FastEthernet1/0

O 10.10.10.0/24 [110/2] via 10.10.20.3, 01:09:06, FastEthernet1/0

O 10.0.0.1/32 [110/2] via 10.10.20.3, 01:09:06, FastEthernet1/0

C 10.10.20.0/24 is directly connected, FastEthernet1/0

C 192.168.100.0/24 is directly connected, Serial0/0

HUB#

The HUB has learned all the spokes’ local networks; note that all learned routes point to tunnel IP addresses, because the routing protocol runs on top of the logical topology, not the physical one (figure2).

Figure2 : Logical topology

HUB#sh ip pim neighbors
PIM Neighbor Table
Neighbor Address  Interface        Uptime/Expires     Ver  DR Prio/Mode
172.16.0.3        Tunnel0          01:06:17/00:01:21  v2   1 / DR S
172.16.0.2        Tunnel0          01:06:03/00:01:40  v2   1 / S
10.10.20.3        FastEthernet1/0  01:07:24/00:01:15  v2   1 / DR S

HUB#

PIM neighbor relationships are established after enabling PIM sparse-dense mode on the tunnel interfaces.

SpokeBnet#
*Mar 1 01:16:22.055: Auto-RP(0): Build RP-Announce for 192.168.38.1, PIMv2/v1, ttl 32, ht 181
*Mar 1 01:16:22.059: Auto-RP(0): Build announce entry for (224.0.0.0/4)

*Mar 1 01:16:22.063: Auto-RP(0): Send RP-Announce packet on FastEthernet0/0

*Mar 1 01:16:22.063: Auto-RP(0): Send RP-Announce packet on FastEthernet1/0

*Mar 1 01:16:22.067: Auto-RP: Send RP-Announce packet on Loopback0

SpokeBnet#

The RP (SpokeBnet) sends RP-Announce messages to 224.0.1.39, the group that all mapping agents listen to.

Hubnet#
*Mar 1 01:16:17.039: Auto-RP(0): Received RP-announce, from 192.168.38.1, RP_cnt 1, ht 181
*Mar 1 01:16:17.043: Auto-RP(0): Update (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

Hubnet#

*Mar 1 01:16:49.267: Auto-RP(0): Build RP-Discovery packet

*Mar 1 01:16:49.271: Auto-RP: Build mapping (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1,

*Mar 1 01:16:49.275: Auto-RP(0): Send RP-discovery packet on FastEthernet0/0 (1 RP entries)

*Mar 1 01:16:49.275: Auto-RP(0): Send RP-discovery packet on FastEthernet1/0 (1 RP entries)

Hubnet#

HUBnet, the mapping agent (MA), listening to 224.0.1.39, has received the RP-Announce from the RP (SpokeBnet), updated its records and sent an RP-Discovery to all PIM routers at 224.0.1.40.

HUB#
*Mar 1 01:16:47.059: Auto-RP(0): Received RP-discovery, from 10.0.0.1, RP_cnt 1, ht 181
*Mar 1 01:16:47.063: Auto-RP(0): Update (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

HUB#

 

HUB#sh ip pim rp
Group: 239.255.1.1, RP: 192.168.38.1, v2, v1, uptime 01:11:49, expires 00:02:44
HUB#

The HUB, as an example, has received the RP-to-group mapping information from the mapping agent and now knows the RP IP address.

Now let’s take a look at the multicast routing table of the RP:

SpokeBnet#sh ip mroute
IP Multicast Routing Table
Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:39:00/stopped, RP 192.168.38.1, flags: SJC

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:38:22/00:02:25

 

(10.10.10.1, 239.255.1.1), 00:39:00/00:02:58, flags: T


Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1


Outgoing interface list:


FastEthernet1/0, Forward/Sparse-Dense, 00:38:22/00:02:25

 

(*, 224.0.1.39), 01:24:31/stopped, RP 0.0.0.0, flags: D

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:24:31/00:00:00

 

(192.168.38.1, 224.0.1.39), 01:24:31/00:02:28, flags: T

Incoming interface: Loopback0, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:24:31/00:00:00

 

(*, 224.0.1.40), 01:25:42/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:25:40/00:00:00

Loopback0, Forward/Sparse-Dense, 01:25:42/00:00:00

 

(10.0.0.1, 224.0.1.40), 01:23:39/00:02:51, flags: LT

Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1

Outgoing interface list:

Loopback0, Forward/Sparse-Dense, 01:23:39/00:00:00

 

SpokeBnet#

(*, 239.255.1.1) – The shared tree, rooted at the RP, used to push multicast traffic towards receivers; the “J” flag indicates that traffic has switched from the RPT to the SPT.

(10.10.10.1, 239.255.1.1) – The SPT used to forward traffic from the source to the receiver; traffic is received on Fa0/0 and forwarded out of Fa1/0.

(*, 224.0.1.39) and (*, 224.0.1.40) – the Auto-RP service groups; because PIM sparse-dense mode is used, traffic for these groups is flooded to all PIM routers in dense mode, hence the “D” flag.

This way we configured multicast over the NBMA using mGRE, without touching Layer 2 and without its restrictions.

By the way, we are just one step away from DMVPN 🙂 all we have to do is configure IPsec to protect our mGRE tunnel, so let's do it!

!! IKE phase I parameters
crypto isakmp policy 1
!! 3des as the encryption algorithm

encryption 3des

!! authentication type: simple preshared keys

authentication pre-share

!! Diffie-Hellman group 2 for the key exchange

group 2

!! ISAKMP peers are not set statically because the HUB does not know them in advance; they are learned dynamically through NHRP over the mGRE tunnel

crypto isakmp key cisco address 0.0.0.0 0.0.0.0

crypto ipsec transform-set MyESP-3DES-SHA esp-3des esp-sha-hmac

mode transport

crypto ipsec profile My_profile

set transform-set MyESP-3DES-SHA

 

int tunnel 0

tunnel protection ipsec profile My_profile
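The crypto configuration above is shown on the HUB; the same ISAKMP policy, wildcard pre-shared key, transform set and IPsec profile are expected on SpokeA and SpokeB as well, with the profile attached to their Tunnel0 interfaces. A condensed sketch, assuming the spokes reuse exactly the same values:

crypto isakmp policy 1
encryption 3des
authentication pre-share
group 2
crypto isakmp key cisco address 0.0.0.0 0.0.0.0
crypto ipsec transform-set MyESP-3DES-SHA esp-3des esp-sha-hmac
mode transport
crypto ipsec profile My_profile
set transform-set MyESP-3DES-SHA
interface Tunnel0
tunnel protection ipsec profile My_profile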

 

HUB#sh crypto isakmp sa
dst src state conn-id slot status
192.168.100.1 192.168.100.2 QM_IDLE 2 0 ACTIVE

192.168.100.1 192.168.100.3 QM_IDLE 1 0 ACTIVE

 

HUB#

 

HUB#sh crypto ipsec sa
 interface: Tunnel0

Crypto map tag: Tunnel0-head-0, local addr 192.168.100.1

 

protected vrf: (none)

local ident (addr/mask/prot/port): (192.168.100.1/255.255.255.255/47/0)

remote ident (addr/mask/prot/port): (192.168.100.2/255.255.255.255/47/0)


current_peer 192.168.100.2 port 500

PERMIT, flags={origin_is_acl,}

#pkts encaps: 1248, #pkts encrypt: 1248, #pkts digest: 1248

#pkts decaps: 129, #pkts decrypt: 129, #pkts verify: 129

#pkts compressed: 0, #pkts decompressed: 0

#pkts not compressed: 0, #pkts compr. failed: 0

#pkts not decompressed: 0, #pkts decompress failed: 0

#send errors 52, #recv errors 0

 


local crypto endpt.: 192.168.100.1, remote crypto endpt.: 192.168.100.2

path mtu 1500, ip mtu 1500

current outbound spi: 0xCEFE3AC2(3472767682)

 

inbound esp sas:


spi: 0x852C8AE0(2234288864)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2003, flow_id: SW:3, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4448676/3482)

IV size: 8 bytes

replay detection support: Y


Status: ACTIVE

 

inbound ah sas:

 

inbound pcp sas:

 

outbound esp sas:


spi: 0xCEFE3AC2(3472767682)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2004, flow_id: SW:4, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4447841/3479)

IV size: 8 bytes

replay detection support: Y

Status: ACTIVE

 

outbound ah sas:

 

outbound pcp sas:

 

protected vrf: (none)

local ident (addr/mask/prot/port): (192.168.100.1/255.255.255.255/47/0)

remote ident (addr/mask/prot/port): (192.168.100.3/255.255.255.255/47/0)

current_peer 192.168.100.3 port 500

PERMIT, flags={origin_is_acl,}

#pkts encaps: 1309, #pkts encrypt: 1309, #pkts digest: 1309

#pkts decaps: 23, #pkts decrypt: 23, #pkts verify: 23

#pkts compressed: 0, #pkts decompressed: 0

#pkts not compressed: 0, #pkts compr. failed: 0

#pkts not decompressed: 0, #pkts decompress failed: 0

#send errors 26, #recv errors 0

 


local crypto endpt.: 192.168.100.1, remote crypto endpt.: 192.168.100.3

path mtu 1500, ip mtu 1500

current outbound spi: 0xD5D509D2(3587508690)

 

inbound esp sas:


spi: 0x4507681A(1158113306)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2001, flow_id: SW:1, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4588768/3477)

IV size: 8 bytes

replay detection support: Y


Status: ACTIVE

 

inbound ah sas:

 

inbound pcp sas:

 

outbound esp sas:


spi: 0xD5D509D2(3587508690)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2002, flow_id: SW:2, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4587889/3476)

IV size: 8 bytes

replay detection support: Y


Status: ACTIVE

 

outbound ah sas:

 

outbound pcp sas:

HUB#

ISAKMP and IPSec phases are successfully established and security associations are formed.

Multicast over DMVPN works perfectly! That's it!

Multicast over FR NBMA part3 – (PIM-NBMA mode and Auto-RP)


In this third part of the document “Multicast over FR NBMA” we will see how, with the RP in one spoke network and the mapping agent (MA) in the central site, we can work around the dense-mode issue of forwarding the service groups 224.0.1.39 and 224.0.1.40 from one spoke to another.

Placing the MA in the central site, acting as a proxy, is ideal to ensure communication between the RP and all spokes over their separate PVCs.

In this LAB the RP is configured in SpokeBnet (SpokeB site) and the mapping agent in Hubnet (central site).

Figure1: Lab topology


CONFIGURATION

!! PIM sparse-dense mode is set on all relevant interfaces of all routers

ip pim sparse-dense-mode

Configure SpokeBnet to become an RP

SpokeBnet(config)#ip pim send-rp-announce Loopback0 scope 32

Configure Hubnet to become a mapping agent

Hubnet(config)#ip pim send-rp-discovery loo0 scope 32
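For completeness, here is a per-router sketch of what “sparse-dense mode on all interfaces” means in this lab, on top of the PIM NBMA mode configuration from part 2 (interface names are those of the HUB; adapt them on the spokes):

ip multicast-routing

interface Serial0/0
ip pim nbma-mode
ip pim sparse-dense-mode

interface FastEthernet1/0
ip pim sparse-dense-mode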

And the result across the FR cloud is as follows:

SpokeBnet:

*Mar 1 01:02:03.083: Auto-RP(0): Build RP-Announce for 192.168.38.1, PIMv2/v1, ttl 32, ht 181

*Mar 1 01:02:03.087: Auto-RP(0): Build announce entry for (224.0.0.0/4)

*Mar 1 01:02:03.091: Auto-RP(0): Send RP-Announce packet on FastEthernet0/0

*Mar 1 01:02:03.091: Auto-RP(0): Send RP-Announce packet on FastEthernet1/0

The RP announces itself to all mapping agents in the network (the multicast group 224.0.1.39).

HUBnet:

Hubnet(config)#

*Mar 1 01:01:01.487: Auto-RP(0): Received RP-announce, from 192.168.38.1, RP_cnt 1, ht 181

*Mar 1 01:01:01.491: Auto-RP(0): Added with (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

*Mar 1 01:01:01.523: Auto-RP(0): Build RP-Discovery packet

*Mar 1 01:01:01.523: Auto-RP: Build mapping (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1,

*Mar 1 01:01:01.527: Auto-RP(0): Send RP-discovery packet on FastEthernet0/0 (1 RP entries)

*Mar 1 01:01:01.535: Auto-RP(0): Send RP-discovery packet on FastEthernet1/0 (1 RP entries)

*Mar 1 01:01:01.539: Auto-RP: Send RP-discovery packet on Loopback0 (1 RP entries)

Hubnet(config)#

The mapping agent has received the RP-Announce from the RP, updated its records and sent the group-to-RP information (in this case the RP is responsible for all multicast groups) to the multicast destination 224.0.1.40 (all PIM routers).

SpokeA#

*Mar 1 01:01:53.379: Auto-RP(0): Received RP-discovery, from 10.0.0.1, RP_cnt 1, ht 181

*Mar 1 01:01:53.383: Auto-RP(0): Update (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

SpokeA#

 

SpokeA#sh ip pim rp

Group: 239.255.1.1, RP: 192.168.38.1, v2, v1, uptime 00:18:18, expires 00:02:32

SpokeA#

As an example, SpokeA, across the HUB and the FR cloud, has received the group-to-RP mapping information and updated its records.

SpokeBnet#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:35:35/00:03:25, RP 192.168.38.1, flags: SJC

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:06:59/00:02:39

FastEthernet0/0, Forward/Sparse-Dense, 00:35:35/00:03:25

 

(10.10.10.1, 239.255.1.1), 00:03:43/00:02:47, flags: T

Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:03:43/00:02:39

 

(*, 224.0.1.39), 00:41:36/stopped, RP 0.0.0.0, flags: D

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 00:41:36/00:00:00

 

(192.168.38.1, 224.0.1.39), 00:41:36/00:02:23, flags: T

Incoming interface: Loopback0, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 00:41:37/00:00:00

 

(*, 224.0.1.40), 01:03:55/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:03:50/00:00:00

FastEthernet1/0, Forward/Sparse-Dense, 01:03:55/00:00:00

 

(10.0.0.1, 224.0.1.40), 00:35:36/00:02:07, flags: LT

Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:35:36/00:00:00

 

SpokeBnet#

(*, 239.255.1.1) – This is the shared tree built towards the receiver and rooted at the RP (incoming interface = Null); the “J” flag indicates that traffic was switched from the RPT to the SPT.

(10.10.10.1, 239.255.1.1) – is the SPT in question, receiving traffic on Fa0/0 and forwarding it out of Fa1/0.

Both (*, 224.0.1.39) and (*, 224.0.1.40) are flagged “D”, which means they were forwarded using dense mode; this is the preliminary operation of Auto-RP.

SpokeBnet#mtrace 10.10.10.1 192.168.40.104 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 192.168.40.104 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 192.168.40.104

-1 192.168.40.1 PIM Reached RP/Core [10.10.10.0/24]

-2 192.168.39.1 PIM [10.10.10.0/24]

-3 192.168.100.1 PIM [10.10.10.0/24]

-4 10.10.20.3 PIM [10.10.10.0/24]

SpokeBnet#

ClientB is receiving the multicast traffic as needed.

The following outputs show that SpokeA's networks also receive the multicast traffic for 239.255.1.1:

SpokeA# mtrace 10.10.10.1 20.20.20.20 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 20.20.20.20 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 20.20.20.20

-1 20.20.20.20 PIM [10.10.10.0/24]

-2 192.168.100.1 PIM [10.10.10.0/24]

-3 10.10.20.3 PIM [10.10.10.0/24]

-4 10.10.10.1

SpokeA#

 

SpokeA#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:59:33/stopped, RP 192.168.38.1, flags: SJCL

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse-Dense, 00:59:33/00:02:01

 

(10.10.10.1, 239.255.1.1), 00:03:23/00:02:59, flags: LJT

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse-Dense, 00:03:23/00:02:01

 

(*, 224.0.1.40), 00:59:33/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial0/0.201, Forward/Sparse-Dense, 00:59:33/00:00:00

 

(10.0.0.1, 224.0.1.40), 00:44:18/00:02:15, flags: PLTX

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list: Null

 

SpokeA#

One more time Auto-RP in action: PIM has switched from the RPT (*, 239.255.1.1), flag “J”, to the SPT (10.10.10.1, 239.255.1.1), flags “JT”.

Multicast over FR NBMA part2 – (PIM NBMA mode and static RP)


This is the second part of the document “Multicast over FR NBMA”; this lab focuses on deploying PIM NBMA mode in a hub-and-spoke FR NBMA network with a static Rendezvous Point configuration.

Figure 1 illustrates the lab topology: the HUB is connected to the FR NBMA through its main physical interface, SpokeA and SpokeB are connected through multipoint sub-interfaces, and the FR NBMA is a partial mesh with PVCs from each spoke to the HUB's main physical interface.

Figure1 : Lab topology

I – SCENARIO I

PIM sparse mode is used, with the static RP role played by the HUB.

1 – CONFIGURATION

HUB:

Frame Relay configuration:

interface Serial0/0

ip address 192.168.100.1 255.255.255.0

encapsulation frame-relay

!! Inverse ARP is disabled

no frame-relay inverse-arp

!! The OSPF network type has to be consistent with the Frame Relay topology; here it is hub-and-spoke, where traffic destined to a spoke is forwarded to the HUB and from there to the spoke.


ip ospf network point-to-multipoint

!! With Inverse ARP disabled you have to configure the FR mappings manually; do not forget the “broadcast” keyword at the end of each frame-relay map to enable pseudo broadcasting

frame-relay map ip 192.168.100.2 101 broadcast

frame-relay map ip 192.168.100.3 103 broadcast

Multicast configuration:

interface Loopback0

ip address 192.168.101.1 255.255.255.255

!! Enable Sparse mode on all interfaces

ip pim sparse-mode

interface Serial0/0

!! Enable PIM NBMA mode


ip pim nbma-mode

ip pim sparse-mode

!! Enable multicast routing; IOS will warn you if you try to use multicast commands while it is disabled

ip multicast-routing

ip pim rp-address 192.168.101.1

Routing configuration:

router ospf 10

network 10.10.10.0 0.0.0.255 area 100

!! the RP loopback (192.168.101.1) must also be advertised so that it is reachable from the spokes

network 192.168.101.1 0.0.0.0 area 0

network 192.168.100.0 0.0.0.255 area 0

SpokeA:

Frame Relay configuration:

interface Serial0/0

no ip address

encapsulation frame-relay

no frame-relay inverse-arp

interface Serial0/0.201 multipoint

ip address 192.168.100.2 255.255.255.0

frame-relay map ip 192.168.100.1 201 broadcast 

Multicast configuration:

interface Serial0/0.201 multipoint

ip pim nbma-mode

ip pim sparse-mode

ip pim rp-address 192.168.101.1

ip multicast-routing 

Routing protocol configuration:

router ospf 10

network 20.20.20.0 0.0.0.255 area 200

network 192.168.100.0 0.0.0.255 area 0

interface Serial0/0.201 multipoint

ip ospf network point-to-multipoint 

SpokeB:

Frame Relay configuration:

interface Serial0/0

no ip address

encapsulation frame-relay

serial restart-delay 0

no frame-relay inverse-arp

interface Serial0/0.301 multipoint

ip address 192.168.100.3 255.255.255.0

frame-relay map ip 192.168.100.1 301 broadcast 

Multicast configuration:

ip pim rp-address 192.168.101.1

ip multicast-routing

interface Serial0/0.301 multipoint

ip pim sparse-mode

ip pim nbma-mode 

Routing Protocol configuration:

router ospf 10

network 192.168.40.0 0.0.0.255 area 300

network 192.168.100.0 0.0.0.255 area 0

interface Serial0/0.301 multipoint

ip ospf network point-to-multipoint

 

2 – ANALYSIS

SpokeB:

SpokeB#sh ip route

Codes: C – connected, S – static, R – RIP, M – mobile, B – BGP

D – EIGRP, EX – EIGRP external, O – OSPF, IA – OSPF inter area

N1 – OSPF NSSA external type 1, N2 – OSPF NSSA external type 2

E1 – OSPF external type 1, E2 – OSPF external type 2

i – IS-IS, su – IS-IS summary, L1 – IS-IS level-1, L2 – IS-IS level-2

ia – IS-IS inter area, * – candidate default, U – per-user static route

o – ODR, P – periodic downloaded static route

 

Gateway of last resort is not set

 

20.0.0.0/32 is subnetted, 1 subnets

O IA 20.20.20.20 [110/129] via 192.168.100.1, 00:01:16, Serial0/0.301

C 192.168.40.0/24 is directly connected, FastEthernet1/0

10.0.0.0/8 is variably subnetted, 3 subnets, 2 masks

O IA 10.0.0.2/32 [110/66] via 192.168.100.1, 00:01:16, Serial0/0.301

O IA 10.10.10.0/24 [110/65] via 192.168.100.1, 00:01:16, Serial0/0.301

O IA 10.0.0.1/32 [110/66] via 192.168.100.1, 00:01:16, Serial0/0.301

192.168.102.0/32 is subnetted, 1 subnets

O 192.168.102.1 [110/65] via 192.168.100.1, 00:01:16, Serial0/0.301

192.168.100.0/24 is variably subnetted, 3 subnets, 2 masks

C 192.168.100.0/24 is directly connected, Serial0/0.301

O 192.168.100.1/32 [110/64] via 192.168.100.1, 00:01:16, Serial0/0.301

O 192.168.100.2/32 [110/128] via 192.168.100.1, 00:01:16, Serial0/0.301

192.168.101.0/32 is subnetted, 1 subnets

O 192.168.101.1 [110/65] via 192.168.100.1, 00:01:17, Serial0/0.301

SpokeB#

Connectivity is successful and SpokeB has received correct routing information about the network.

SpokeB#sh ip pim rp

Group: 239.255.1.1, RP: 192.168.101.1, uptime 00:17:31, expires never

Group: 224.0.1.40, RP: 192.168.101.1, uptime 00:17:31, expires never

SpokeB# 

Because we use PIM-SM with a static RP, the group-to-RP mapping never expires.

SpokeB#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:46:09/stopped, RP 192.168.101.1, flags: SJC

Incoming interface: Serial0/0.301, RPF nbr 192.168.100.1

Outgoing interface list:

FastEthernet1/0, Forward/Sparse, 00:46:09/00:02:38

 

(10.10.10.1, 239.255.1.1), 00:22:22/00:02:50, flags: JT

Incoming interface: Serial0/0.301, RPF nbr 192.168.100.1

Outgoing interface list:

FastEthernet1/0, Forward/Sparse, 00:22:22/00:02:38

 

(*, 224.0.1.40), 00:18:10/00:02:24, RP 192.168.101.1, flags: SJCL

Incoming interface: Serial0/0.301, RPF nbr 192.168.100.1

Outgoing interface list:

Serial0/0.301, 192.168.100.3, Forward/Sparse, 00:14:36/00:00:21

 

SpokeB# 

 

SpokeB#mtrace 10.10.10.1 192.168.40.104 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 192.168.40.104 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 192.168.40.104

-1 192.168.40.1 PIM [10.10.10.0/24]

-2 192.168.100.1 PIM Reached RP/Core [10.10.10.0/24]

SpokeB# 

According to the two previous outputs, SpokeB (the PIM DR) has joined the shared tree (RPT) rooted at the RP/Core (the HUB); traffic arriving on the FR interface along this shared tree is forwarded out of the LAN interface Fa1/0.

Because this is a static RP configuration, only the service group 224.0.1.40 is present, for which a shared tree is built so PIM-SM routers can communicate with the RP.

HUB:

HUB#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:48:20/00:03:22, RP 192.168.101.1, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0


Outgoing interface list:


Serial0/0, 192.168.100.3, Forward/Sparse, 00:48:20/00:03:22


Serial0/0, 192.168.100.2, Forward/Sparse, 00:48:20/00:03:09

 

(10.10.10.1, 239.255.1.1), 00:25:33/00:03:26, flags: T

Incoming interface: FastEthernet1/0, RPF nbr 0.0.0.0


Outgoing interface list:


Serial0/0, 192.168.100.2, Forward/Sparse, 00:25:33/00:03:08


Serial0/0, 192.168.100.3, Forward/Sparse, 00:25:33/00:03:22

 

(10.10.10.3, 239.255.1.1), 00:00:22/00:02:37, flags: T

Incoming interface: FastEthernet1/0, RPF nbr 0.0.0.0


Outgoing interface list:


Serial0/0, 192.168.100.2, Forward/Sparse, 00:00:22/00:03:08


Serial0/0, 192.168.100.3, Forward/Sparse, 00:00:22/00:03:22

 

(192.168.100.1, 239.255.1.1), 00:00:23/00:02:36, flags:

Incoming interface: Serial0/0, RPF nbr 0.0.0.0


Outgoing interface list:


Serial0/0, 192.168.100.2, Forward/Sparse, 00:00:23/00:03:07


Serial0/0, 192.168.100.3, Forward/Sparse, 00:00:23/00:03:20

 

(*, 224.0.1.40), 00:32:02/00:03:09, RP 192.168.101.1, flags: SJCL

Incoming interface: Null, RPF nbr 0.0.0.0


Outgoing interface list:


Serial0/0, 192.168.100.2, Forward/Sparse, 00:32:00/00:03:09


Serial0/0, 192.168.100.3, Forward/Sparse, 00:32:02/00:02:58

 

HUB# 

Note that for each RPT and SPT the HUB tracks two “next hops” out of the same Serial0/0 interface. This is the work of PIM NBMA mode, which makes PIM at Layer 3 aware of the real Layer 2 topology (not a broadcast segment but separate point-to-point PVCs), as opposed to plain pseudo broadcast, which gives Layer 3 the illusion of a multi-access network.

Because the shared tree (*, 239.255.1.1) is rooted locally (the HUB is the RP), the incoming interface is Null.

A source tree (SPT) (10.10.10.1, 239.255.1.1) is also built between the source and the RP.

SpokeA:

SpokeA#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 01:07:17/stopped, RP 192.168.101.1, flags: SJCLF

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 01:07:17/00:02:39

 

(10.10.10.1, 239.255.1.1), 00:43:08/00:02:59, flags: LJT

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 00:43:08/00:02:39

 

(*, 224.0.1.40), 00:38:39/00:02:42, RP 192.168.101.1, flags: SJCL

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 00:38:39/00:02:42

 

SpokeA# 

 

SpokeA#mtrace 10.10.10.1 20.20.20.20 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 20.20.20.20 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 20.20.20.20

-1 20.20.20.20 PIM [10.10.10.0/24]

-2 192.168.100.1 PIM Reached RP/Core [10.10.10.0/24]

SpokeA# 

SpokeA has received the first packet through the RP, then switched to SPT (10.10.10.1, 239.255.1.1).

 

II – Scenario II

In this scenario SpokeB plays the role of the RP.

1 – CONFIGURATION

Multicast configuration:

The new static RP must be configured on all routers; 192.168.103.1 is a loopback interface on SpokeB and it is advertised so that it is reachable from everywhere.

SpokeB:

ip pim rp-address 192.168.103.1
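A minimal sketch of what this implies on SpokeB follows; the loopback number, the /32 mask and the OSPF area are assumptions consistent with the rest of the lab, and the rp-address statement is repeated on the HUB and SpokeA:

interface Loopback1
ip address 192.168.103.1 255.255.255.255
ip pim sparse-mode

router ospf 10
network 192.168.103.1 0.0.0.0 area 300

ip pim rp-address 192.168.103.1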

 

SpokeB(config)#do mtrace 10.10.10.1 192.168.40.104 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 192.168.40.104 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 192.168.40.104

-1 192.168.40.1 PIM Reached RP/Core [10.10.10.0/24]

-2 192.168.100.1 PIM [10.10.10.0/24]

SpokeB(config)# 

Now the multicast traffic received by SpokeB's client is forwarded through the new RP (SpokeB), which lies on the best path from the source to the receiver.

SpokeA:

SpokeA#mtrace 10.10.10.1 20.20.20.20 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 20.20.20.20 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 20.20.20.20

-1 20.20.20.20 PIM [10.10.10.0/24]

-2 192.168.100.1 PIM [10.10.10.0/24]

-3 10.10.10.1

SpokeA# 

 

SpokeA#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 01:24:27/stopped, RP 192.168.103.1, flags: SJCLF

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Loopback0, Forward/Sparse, 01:24:27/00:02:29

 

(10.10.10.1, 239.255.1.1), 01:00:18/00:02:59, flags: LJT

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 01:00:18/00:02:29

 

(*, 224.0.1.40), 00:06:18/00:02:37, RP 192.168.103.1, flags: SJCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial0/0.201, 192.168.100.1, Forward/Sparse, 00:06:02/00:00:37

Loopback0, Forward/Sparse, 00:06:18/00:02:37

 

SpokeA# 

SpokeA has switched from the RPT to the SPT (10.10.10.1, 239.255.1.1) (flags SJCLF: Sparse, Join SPT, Connected, Local and Register flag).

By the way, let's see what happens if we disable the SPT switchover with the following command on SpokeA:

ip pim spt-threshold infinity

 

SpokeA(config)#do sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 01:44:47/00:02:59, RP 192.168.103.1, flags: SCLF

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 01:44:47/00:02:17

 

(*, 224.0.1.40), 00:26:39/00:02:11, RP 192.168.103.1, flags: SCL

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 00:26:39/00:02:11

 

SpokeA(config)# 

No more SPT! Only the shared tree (*, 239.255.1.1) rooted at the RP (SpokeB) remains.
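Note that the same command also accepts a group list, in case the switchover should be kept for some groups and disabled only for others; a sketch (access list 10 is hypothetical):

ip pim spt-threshold infinity group-list 10
access-list 10 permit 239.255.1.1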

Multicast over FR NBMA part1 – (pseudo broadcast, PIM NBMA mode, mGRE and DMVPN)


This lab focuses on Frame Relay NBMA, Non-Broadcast Multiple Access… the confusion already begins with the name : ) shouldn't a multiple-access network support broadcast?

In this first part I will describe why forwarding multicast over FR NBMA is problematic, as well as the different solutions to the problem; subsequent parts focus on deploying those solutions.

Well, the issue stems from the difference between how Layer 2 actually operates and how Layer 3 perceives it.

Let's begin with Layer 2. In Frame Relay it consists of Permanent Virtual Circuits (PVCs), each delimited by a physical interface at both ends and a local Data Link Connection Identifier (DLCI) mapped to each end; so there is no multi-access network at all at Layer 2. Only the fact that multiple PVCs terminate on a single physical interface or sub-interface gives the illusion of a multiple-access network (figure1).

Figure1 : layer2 and layer3 views of the network

With such a configuration of multipoint interfaces or sub-interfaces, a single subnet is used at Layer 3 for all participants of the NBMA, which makes Layer 3 think it is a multiple-access network.

Note that we are talking here about a partially meshed NBMA.

This disagreement between Layer 2 and Layer 3 about the nature of the medium is why broadcast and multicast do not work.

Many solutions are proposed:

1 – “Pseudo broadcast” – an IOS feature that emulates broadcast/multicast replication.

Figure2 : Interface HW queues

A physical interface has two hardware queues (figure2): one for unicast and one (strict priority) for broadcast traffic. Because Layer 3 thinks it is on a multi-access network, it sends only one copy of a packet over the interface and expects all participants on the other side to receive it, which of course does not happen.

Do you remember the word “broadcast” added at the end of a static frame relay mapping?

Frame-relay map ip X.X.X.X <DLCI> broadcast

This activates pseudo broadcasting: before sending a broadcast or multicast packet, the router makes “n” copies and sends them to the “n” spokes attached to the NBMA, whether they requested the traffic or not. Such a mechanism goes against the very concept of multicasting, so pseudo broadcast can lead to serious performance issues in large multicast networks.

Another issue with pseudo broadcast in an NBMA hub-and-spoke topology arises when one spoke forwards multicast data or control traffic that the other spokes are supposed to receive, particularly PIM Prune and Prune Override messages.

The concept relies on the fact that when one downstream PIM router sends a Prune to the upstream router, the other downstream routers connected to the same multi-access network also hear that Prune, and those that still want the multicast traffic send a Prune Override (Join) to the upstream router.

As mentioned before, the HUB can emulate multicast by making multiple copies of a packet, but only for traffic forwarded from its internal interfaces to the NBMA interface or generated locally, not for traffic received on the NBMA interface itself.

Conclusion: do not use PIM-DM or PIM sparse-dense mode (Auto-RP) with pseudo broadcasting alone; you can, however, use PIM-SM with a static RP.


2 – PIM NBMA mode – clarifies the relationship between Layer 2 and Layer 3: NBMA mode tells Layer 3 “the truth”, namely that the medium is not really multi-access but a partially meshed NBMA, so PIM can act accordingly.

The router then tracks the IP addresses of the neighbors from which it received a Join and builds the outgoing interface list from those neighbors for both the RPT (*,G) and the SPT (S,G).

PIM NBMA mode still does not support dense mode; nevertheless it is possible to use Auto-RP with PIM sparse-dense mode ONLY if you make sure the mapping agent is located on the HUB network so it can communicate with all spokes. A minimal sketch of enabling NBMA mode is shown below.
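A minimal sketch of enabling it on a multipoint FR interface (the complete lab configuration is in part 2 of this document):

interface Serial0/0
ip pim sparse-mode
ip pim nbma-mode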

3 – Point-to-point sub-interfaces – If you want to avoid all the previously mentioned complications of multipoint interfaces in an NBMA hub-and-spoke, you can deploy point-to-point sub-interfaces: spokes are then treated as if they were connected to the HUB through separate physical interfaces with separate IP subnets, and Layer 2 is no longer considered multi-access.

4 – GRE tunneling – Another alternative is to keep the NBMA multipoint FR interface, with its advantages such as saving IP addresses, and simply make it transparent by deploying multipoint GRE (mGRE) tunnels.

In the resulting mGRE logical topology, multicast is encapsulated inside GRE packets, forwarded as unicast across the cloud, and decapsulated at Layer 3 on the other side of the tunnel, without ever dealing with Layer 2. A minimal sketch of such a tunnel follows.
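A minimal sketch of such a tunnel on the HUB side (part 4 of this document walks through the complete mGRE and DMVPN configuration):

interface Tunnel0
ip address 172.16.0.1 255.255.0.0
ip pim sparse-dense-mode
ip nhrp map multicast dynamic
ip nhrp network-id 1
tunnel source Serial0/0
tunnel mode gre multipoint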

Bidirectional PIM, Bidir-PIM


Overview

PIM-SM relies on the RP (Rendezvous Point) to manage the forwarding of a group's multicast traffic along a shared tree (*,G) (RPT) downstream from the RP to receivers, and on a source tree (SPT) between the RP and the PIM-SM router to which the multicast source is connected. The number of SPTs grows with the number of multicast sources, which makes PIM-SM inefficient when there are many sources.

Bidirectional PIM behaves a little differently from PIM-SM, particularly regarding the SPT: there are simply no source-based trees. Instead, a single shared tree rooted at the RP is used; source-side routers forward traffic up this tree toward the RP, and at the same time the RP uses the same shared tree to send the received multicast traffic down toward receivers. This makes Bidir-PIM scale to any number of sources without additional overhead.

Because there is no SPT, there can be no SPT switchover, and forwarded traffic always passes through the RP.

Consequently, Bidir-PIM routers do not perform the RPF (Reverse Path Forwarding) check that would prevent sending traffic back out of the interface on which it was received; yet, as we know, RPF is what avoids loops in networks with redundant links, by checking whether multicast traffic arrives on the interface through which the source is reachable.

Instead, Bidir-PIM uses the concept of an elected Designated Forwarder (DF) to establish a loop-free shared tree rooted at the RP.

– On each point-to-point link and on every network segment, one DF is elected per RP of a bidirectional group; the DF is responsible for forwarding the multicast traffic received on that network.

– The DF election is based on the best unicast routing metric of the path to the RP and considers only one path (no Assert messages); the tie-breaking order is as follows:

– Best Administrative Distance.

– Best Metric.

– And then highest IP address.

This way multiple routers can be elected DF on a segment (one for each RP), and one router can be elected DF on more than one interface.

Both Bidir-PIM and the traditional PIM-SM can perfectly coexist in the same PIM domain using the same RPs.

Figure 1 illustrates the network topology used to configure and analyse Bidirectional PIM.

Figure 1 Network Topology

topology1
The lab is organized as follows:

Overview

Configuration

  • Traffic downstream along the shared tree
  • Traffic upstream along the shared tree
  • Add unidirectional PIM-SM group multicast traffic along with Bidirectional PIM traffic
  • Multicast another group 239.255.1.3 not assigned in both PIM access-lists

Configuration

1) Enable multicast routing

ip multicast-routing

2) Enable Bidirectional PIM

ip pim bidir-enable

3) Enable PIM (sparse-dense mode) on all interfaces that can potentially handle multicast traffic

interface <int>
ip pim sparse-dense-mode

4) Configure the routing protocol and make sure that connectivity is successful and that any loopback interfaces used are advertised and reachable.

5) Depending on your network you can configure the RP and the mapping agent on different routers or on the same router. As in PIM-SM, the center of the multicast network is the RP, which is configured to work as a bidirectional RP by adding the keyword “bidir”:

ip pim send-rp-announce <Loopback_int> scope <TTL> {group-list <ACL>} bidir

Do not use the same loopback interface for both the RP and the mapping agent; PIM routers expect different IP addresses for the two functions.

ip pim send-rp-discovery {<Loopback_int>} scope <TTL>

Traffic downstream along the shared tree

In this case the traffic flow is exactly the same as with PIM-SM because of where the client sits in the topology: it is not on the path between the source and the RP.

The configuration is exactly the same as mentioned earlier, with the RP configured as follows:

ip pim bidir-enable
ip multicast-routing
ip pim send-rp-announce Loopback0 scope 32 bidir
ip pim send-rp-discovery Loopback0 scope 32

R4 (RP & mapping agent):

R4#
*Mar 1 03:36:59.263: Auto-RP(0): Build RP-Discovery packet
*Mar 1 03:36:59.267: Auto-RP: Build mapping (224.0.0.0/4[bidir], RP:10.4.4.4), PIMv2 v1,
*Mar 1 03:36:59.271: Auto-RP(0): Send RP-discovery packet on Ethernet0/0 (1 RP entries)
*Mar 1 03:36:59.275: Auto-RP(0): Send RP-discovery packet on Ethernet0/1 (1 RP entries)
R4#

Router R4 builds the RP-to-group mapping with itself as the RP and sends it out of all its interfaces toward R2 and R5.

R2#mrinfo
172.16.12.2 [version 12.3] [flags: PMSA]:
172.16.12.2 -> 172.16.12.1 [1/0/pim/querier]
172.16.24.2 -> 172.16.24.4 [1/0/pim]
172.16.23.2 -> 0.0.0.0 [1/0/pim/querier/leaf]
R2#
R4#mrinfo
172.16.24.4 [version 12.3] [flags: PMSA]:
172.16.24.4 -> 172.16.24.2 [1/0/pim/querier]
172.16.45.4 -> 172.16.45.5 [1/0/pim]
10.4.4.4 -> 0.0.0.0 [1/0/pim/querier/leaf]
R4#
R5#mrinfo
172.16.45.5 [version 12.3] [flags: PMSA]:
192.168.40.5 -> 0.0.0.0 [1/0/pim/querier/leaf]
172.16.45.5 -> 172.16.45.4 [1/0/pim/querier]
R5#

“mrinfo” gives information about PIM neighbor connectivity.

Because PIM is enabled on the loopback interface, the router shows it as a leaf end-point connected to itself.

R2:

*Mar 1 03:25:50.555: Auto-RP(0): Received RP-discovery, from 172.16.24.4 , RP_cnt 1, ht 181
*Mar 1 03:25:50.559: Auto-RP(0): Update (224.0.0.0/4, RP:10.4.4.4), PIMv2 v1, bidi
R2#
R2#sh ip pim rp
Group: 239.255.1.1, RP: 10.4.4.4, v2, v1, uptime 00:02:23, expires 00:01:59
R2#

R5:

*Mar 1 03:25:52.223: Auto-RP(0): Received RP-discovery, from 172.16.45.4 , RP_cnt 1, ht 181
*Mar 1 03:25:52.227: Auto-RP(0): Update (224.0.0.0/4, RP:10.4.4.4), PIMv2 v1, bidir
R5#
R5#sh ip pim rp
Group: 239.255.1.1, RP: 10.4.4.4, v2, v1, uptime 00:02:13, expires 00:02:09
R5#

All PIM routers have already joined 224.0.1.39 and 224.0.1.40, received the RP-Discovery from R4 (the RP and mapping agent) and populated their RP-to-group records.

In our case the mapping agent is not configured with a particular loopback interface, so it uses the IP address of the outgoing interface; that is why R2 and R5 see different MA IP addresses.
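If a stable MA address is preferred, the discovery source can be pinned to a loopback explicitly; a sketch (Loopback1 is used here only as an example interface, this was not done in this lab):

ip pim send-rp-discovery Loopback1 scope 32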

R5#mtrace 10.10.10.1 192.168.40.104 239.255.1.1
Type escape sequence to abort.
Mtrace from 10.10.10.1 to 192.168.40.104 via group 239.255.1.1
From source (?) to destination (?)
Querying full reverse path…
0 192.168.40.104
-1 192.168.40.5 PIM [10.10.10.0/24]
-2 172.16.45.4 PIM Reached RP/Core [10.10.10.0/24]

R5#

The result of “mtrace” shows the shared tree built with the RP (R4) as its root.

R5#sh ip mroute
IP Multicast Routing Table
Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,
L – Local, P – Pruned, R – RP-bit set, F – Register flag,
T – SPT-bit set, J – Join SPT, M – MSDP created entry,
X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,
U – URD, I – Received Source Specific Host Report,
Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.1.1), 02:16:30/00:02:54, RP 10.4.4.4, flags: BC

Bidir-Upstream: Ethernet0/0, RPF nbr 172.16.45.4

Outgoing interface list:

Ethernet0/1, Forward/Sparse-Dense, 00:21:22/00:02:25

Ethernet0/0, Bidir-Upstream/Sparse-Dense, 00:21:22/00:00:00

(*, 224.0.1.39), 02:14:36/stopped, RP 0.0.0.0, flags: DC

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 02:14:36/00:00:00

(10.4.4.4, 224.0.1.39), 00:01:36/00:01:23, flags: PTX

Incoming interface: Ethernet0/0, RPF nbr 172.16.45.4

Outgoing interface list: Null

(*, 224.0.1.40), 02:16:31/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 02:16:22/00:00:00

Ethernet0/1, Forward/Sparse-Dense, 02:16:32/00:00:00

(172.16.45.4, 224.0.1.40), 00:20:58/00:02:54, flags: LT

Incoming interface: Ethernet0/0, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/1, Forward/Sparse-Dense, 00:20:58/00:00:00

R5#

(*, 239.255.1.1) – This shared tree is built by the PIM router closest to the receiver. Note that interface E0/0 is considered a bidirectional upstream interface as well as an outgoing interface, which would not be possible with PIM-SM; it means that Bidir-PIM can receive traffic for the group 239.255.1.1 on this interface and at the same time forward traffic for the group out of the same interface.

R1#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/0 10.4.4.4 10.10.10.2 435200 00:22:49
Ethernet0/1 10.4.4.4 172.16.12.2 409600 00:22:48
R1#
R2#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/0 10.4.4.4 172.16.12.2 409600 00:23:26
Ethernet0/1 10.4.4.4 172.16.24.4 0 00:23:27
Ethernet0/2 10.4.4.4 172.16.23.2 409600 00:23:26
R2#
R4#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/0 10.4.4.4 172.16.24.4 0 02:16:39
Ethernet0/1 10.4.4.4 172.16.45.4 0 01:00:05
Loopback0 10.4.4.4 10.4.4.4 0 02:16:39
R4#
R5#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/1 10.4.4.4 192.168.40.5 409600 00:24:41
Ethernet0/0 10.4.4.4 172.16.45.4 0 00:24:41
R5#

Figure 2 depicts the elected DFs on each segment according to the “show ip pim interface df” outputs above.

Figure 2 elected DF on each segment:

df14

Traffic upstream along the shared tree
In this case a client connected to R3 requests the multicast traffic; R3 shares a segment with R2, which sits on the path from the source toward the RP (R2 to R4).

R1#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/0 10.4.4.4 10.10.10.2 435200 00:07:20
Ethernet0/1 10.4.4.4 172.16.12.2 409600 00:07:20
R1#
R2#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/0 10.4.4.4 172.16.12.2 409600 00:09:59
Ethernet0/1 10.4.4.4 172.16.24.4 0 00:09:59
Ethernet0/2 10.4.4.4 172.16.23.2 409600 00:09:59
R2#
R3#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/2 10.4.4.4 192.168.0.3 435200 00:02:50
Ethernet0/0 10.4.4.4 172.16.23.2 409600 00:02:51
R3#

Figure 2 depicts the elected DFs according to the output of “show ip pim interface df” on R1, R2 and R3.

Figure 2 elected DF on each segment:

df2

R1#sh ip pim rp
Group: 239.255.1.1, RP: 10.4.4.4, v2, v1, uptime 00:12:35, expires 00:02:17
R1#
R2#sh ip pim rp
Group: 239.255.1.1, RP: 10.4.4.4, v2, v1, uptime 00:12:56, expires 00:02:57
R2#
R3#sh ip pim rp
Group: 239.255.1.1, RP: 10.4.4.4, v2, v1, uptime 00:03:09, expires 00:02:48
R3#

All Bidir-PIM routers have dynamically received the RP IP address through Auto-RP.

R3#mtrace 10.10.10.1 192.168.0.2 239.255.1.1
Type escape sequence to abort.
Mtrace from 10.10.10.1 to 192.168.0.2 via group 239.255.1.1
From source (?) to destination (?)
Querying full reverse path…
0 192.168.0.2
-1 192.168.0.3 PIM [10.10.10.0/24]
-2 172.16.23.2 PIM [10.10.10.0/24]

-3 172.16.24.4 PIM Reached RP/Core [10.10.10.0/24]

R3#

Note that, as with any shared tree, the path from the receiver goes back up to the RP, which is the root of the RPT.

R3#sh ip mroute
IP Multicast Routing Table
Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,
L – Local, P – Pruned, R – RP-bit set, F – Register flag,
T – SPT-bit set, J – Join SPT, M – MSDP created entry,
X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,
U – URD, I – Received Source Specific Host Report,
Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.1.1), 00:13:53/00:02:54, RP 10.4.4.4, flags: BC

Bidir-Upstream: Ethernet0/0, RPF nbr 172.16.23.2

Outgoing interface list:

Ethernet0/2, Forward/Sparse-Dense, 00:08:48/00:02:01

Ethernet0/0, Bidir-Upstream/Sparse-Dense, 00:08:48/00:00:00

(*, 224.0.1.39), 00:08:48/stopped, RP 0.0.0.0, flags: D

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 00:08:48/00:00:00

(10.4.4.4, 224.0.1.39), 00:00:48/00:02:11, flags: PT

Incoming interface: Ethernet0/0, RPF nbr 172.16.23.2

Outgoing interface list: Null

(*, 224.0.1.40), 00:18:41/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 00:09:24/00:00:00

Ethernet0/2, Forward/Sparse-Dense, 00:18:42/00:00:00

(10.4.4.40, 224.0.1.40), 00:08:49/00:02:14, flags: LT

Incoming interface: Ethernet0/0, RPF nbr 172.16.23.2

Outgoing interface list:

Ethernet0/2, Forward/Sparse-Dense, 00:08:49/00:00:00

R3#

From the multicast routing table you can see that (*, 239.255.1.1) is the shared tree used by R3 to receive traffic for the group 239.255.1.1; there is no (10.10.10.1, 239.255.1.1) entry for this bidirectional group. All the remaining entries are related to the Auto-RP service groups 224.0.1.39 and 224.0.1.40.

R1:

R1#mtrace 10.10.10.1 192.168.0.2 239.255.1.1
Type escape sequence to abort.
Mtrace from 10.10.10.1 to 192.168.0.2 via group 239.255.1.1
From source (?) to destination (?)
Querying full reverse path…
0 192.168.0.2
-1 172.16.23.3 PIM [10.10.10.0/24]
-2 172.16.23.2 PIM [10.10.10.0/24]

-3 172.16.24.4 PIM Reached RP/Core [10.10.10.0/24]

R1#

The same result is seen from the source-side and receiver-side PIM routers' point of view: they both joined a shared tree that forwards traffic in both directions, upstream and downstream, hence “bidirectional” (figure 3).

Figure 3: Shared trees (RPT)

trees

R1#mrinfo
10.10.10.2 [version 12.3] [flags: PMSA]:
10.10.10.2 -> 0.0.0.0 [1/0/pim/querier/leaf]
172.16.12.1 -> 172.16.12.2 [1/0/pim]
R1#
R2#mrinfo
172.16.12.2 [version 12.3] [flags: PMSA]:
172.16.12.2 -> 172.16.12.1 [1/0/pim/querier]
172.16.24.2 -> 172.16.24.4 [1/0/pim]
172.16.23.2 -> 172.16.23.3 [1/0/pim]
R2#
R3#mrinfo
172.16.23.3 [version 12.3] [flags: PMSA]:
192.168.0.3 -> 0.0.0.0 [1/0/pim/querier/leaf]
172.16.23.3 -> 172.16.23.2 [1/0/pim/querier]
R3#

The command “mrinfo” is a good indicator of how PIM has identified which routers are directly connected to the source or destination end points and which are forwarding routers.

Add unidirectional PIM-SM group multicast traffic along with Bidirectional PIM traffic

Any configuration statement about which group uses which PIM type has to be set on the RP:

A distinct loopback interface should be configured for each PIM type; in our case Loopback0 is the RP for bidirectional PIM, which routes the group 239.255.1.1, and Loopback2 is the RP for 239.255.1.2:

ip pim send-rp-announce Loopback2 scope 32 group-list UniPIM-Groups
ip pim send-rp-announce Loopback0 scope 32 group-list BidirPIM-Groups bidir
ip access-list standard UniPIM-Groups
permit 239.255.1.2
ip access-list standard BidirPIM-Groups
permit 239.255.1.1

And the same client connected to R3 will receive both groups, 239.255.1.2 and 239.255.1.1.

Let's see how the RP (R4) handles the two types of PIM:

R4(config)#do sh ip mroute
IP Multicast Routing Table
Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,
L – Local, P – Pruned, R – RP-bit set, F – Register flag,
T – SPT-bit set, J – Join SPT, M – MSDP created entry,
X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,
U – URD, I – Received Source Specific Host Report,
Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.1.1), 00:40:57/00:02:50, RP 10.4.4.4, flags: B

Bidir-Upstream: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 00:40:38/00:02:50

(*, 239.255.1.2), 00:07:11/00:03:25, RP 10.4.4.144, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 00:01:03/00:03:25

(10.10.10.1, 239.255.1.2), 00:07:11/00:02:00, flags: PT

Incoming interface: Ethernet0/0, RPF nbr 172.16.24.2

Outgoing interface list: Null

(*, 224.0.1.39), 01:08:28/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Loopback2, Forward/Sparse-Dense, 00:09:40/00:00:00

Ethernet0/0, Forward/Sparse-Dense, 01:08:28/00:00:00

Loopback1, Forward/Sparse-Dense, 01:08:48/00:00:00

Ethernet0/1, Forward/Sparse-Dense, 01:08:48/00:00:00

Loopback0, Forward/Sparse-Dense, 01:08:48/00:00:00

(10.4.4.4, 224.0.1.39), 01:07:47/00:02:25, flags: LT

Incoming interface: Loopback0, RPF nbr 0.0.0.0

Outgoing interface list:

Loopback2, Forward/Sparse-Dense, 00:10:00/00:00:00

Ethernet0/1, Forward/Sparse-Dense, 01:07:47/00:00:00

Loopback1, Forward/Sparse-Dense, 01:07:47/00:00:00

Ethernet0/0, Forward/Sparse-Dense, 01:07:47/00:00:00

(10.4.4.144, 224.0.1.39), 00:09:53/00:02:07, flags: LT

Incoming interface: Loopback2, RPF nbr 0.0.0.0

Outgoing interface list:

Loopback0, Forward/Sparse-Dense, 00:09:53/00:00:00

Ethernet0/1, Forward/Sparse-Dense, 00:09:53/00:00:00

Loopback1, Forward/Sparse-Dense, 00:09:53/00:00:00

Ethernet0/0, Forward/Sparse-Dense, 00:09:53/00:00:00

(*, 224.0.1.40), 01:08:52/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 01:08:38/00:00:00

Loopback1, Forward/Sparse-Dense, 01:08:56/00:00:00

Ethernet0/1, Forward/Sparse-Dense, 01:08:58/00:00:00

(10.4.4.40, 224.0.1.40), 01:07:57/00:02:55, flags: LT

Incoming interface: Loopback1, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/1, Forward/Sparse-Dense, 01:07:57/00:00:00

Ethernet0/0, Forward/Sparse-Dense, 01:07:57/00:00:00

R4(config)#

(*, 239.255.1.1) is the shared tree built by bidirectional PIM (flagged B) and rooted at the RP itself (Bidir-Upstream: Null); “bidirectional” means that 239.255.1.1 traffic can be sent upstream to the RP and downstream from it at the same time.

(*, 239.255.1.2) is the shared tree built by unidirectional PIM sparse mode (flagged S) and also rooted at the RP (incoming interface = Null); 239.255.1.2 traffic is sent only downstream from the RP toward the receivers.

(10.10.10.1, 239.255.1.2): because 239.255.1.2 is routed by unidirectional PIM-SM, the source is registered with the RP, which builds a source-based tree (S,G), the SPT, through which the source sends its multicast traffic ONLY downstream toward the RP. Note that the entry is flagged “PT”, which means PIM-SM has switched to the SPT and traffic is no longer forwarded through the RP because there is a better path between the source and the receiver.
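
As a side note, if you prefer the receivers to stay on the shared tree instead of switching to the SPT, the switchover can be disabled on the last-hop router; a minimal sketch, where the group list name UniPIM-Groups is reused purely as an example:

!! on the last-hop (receiver-side) PIM-SM router

ip pim spt-threshold infinity group-list UniPIM-Groups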

R4#mtrace 10.10.10.1 192.168.0.2 239.255.1.2
Type escape sequence to abort.
Mtrace from 10.10.10.1 to 192.168.0.2 via group 239.255.1.2
From source (?) to destination (?)
Querying full reverse path…
0 192.168.0.2
-1 172.16.23.3 PIM [10.10.10.0/24]
-2 172.16.23.2 PIM [10.10.10.0/24]

-3 172.16.12.1 PIM [10.10.10.0/24]

-4 10.10.10.1

R4#

Multicast traffic for the group 239.255.1.2 flows from the source to the destination and is forwarded by PIM-SM routers along the source tree; the mtrace reaches the source itself, which confirms that PIM-SM is a source-based protocol.

R4#
R4#mtrace 10.10.10.1 192.168.0.2 239.255.1.1
Type escape sequence to abort.
Mtrace from 10.10.10.1 to 192.168.0.2 via group 239.255.1.1
From source (?) to destination (?)
Querying full reverse path…
0 192.168.0.2
-1 172.16.23.3 PIM [10.10.10.0/24]

-2 172.16.23.2 PIM [10.10.10.0/24]

-3 172.16.24.4 PIM Reached RP/Core [10.10.10.0/24]

R4#

Multicast traffic for the group 239.255.1.1 is rooted at the RP, because bidirectional PIM builds only a shared tree (the mtrace stops at “Reached RP/Core”); this perfectly illustrates the mechanism of bidirectional PIM.

Multicasting another group, 239.255.1.3, that is not assigned to either PIM access list

R4#sh ip mroute
IP Multicast Routing Table
Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,
L – Local, P – Pruned, R – RP-bit set, F – Register flag,
T – SPT-bit set, J – Join SPT, M – MSDP created entry,
X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,
U – URD, I – Received Source Specific Host Report,
Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.1.1), 01:09:26/00:02:52, RP 10.4.4.4, flags: B

Bidir-Upstream: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 01:09:07/00:02:52

(*, 239.255.1.2), 00:35:41/00:02:31, RP 10.4.4.144, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 00:29:32/00:02:31

(10.10.10.1, 239.255.1.2), 00:35:41/00:01:14, flags: PT

Incoming interface: Ethernet0/0, RPF nbr 172.16.24.2

Outgoing interface list: Null

(*, 239.255.1.3), 00:02:58/stopped, RP 0.0.0.0, flags: D

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/1, Forward/Sparse-Dense, 00:02:58/00:00:00

Ethernet0/0, Forward/Sparse-Dense, 00:02:58/00:00:00

(10.10.10.1, 239.255.1.3), 00:02:58/00:00:01, flags: PT

Incoming interface: Ethernet0/0, RPF nbr 172.16.24.2

Outgoing interface list:

Ethernet0/1, Prune/Sparse-Dense, 00:02:58/00:00:01

(*, 224.0.1.39), 01:36:58/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Loopback2, Forward/Sparse-Dense, 00:38:10/00:00:00

Ethernet0/0, Forward/Sparse-Dense, 01:36:58/00:00:00

Loopback1, Forward/Sparse-Dense, 01:36:58/00:00:00

Ethernet0/1, Forward/Sparse-Dense, 01:36:58/00:00:00

Loopback0, Forward/Sparse-Dense, 01:36:58/00:00:00

(10.4.4.4, 224.0.1.39), 01:35:57/00:02:16, flags: LT

Incoming interface: Loopback0, RPF nbr 0.0.0.0

Outgoing interface list:

Loopback2, Forward/Sparse-Dense, 00:38:10/00:00:00

Ethernet0/1, Forward/Sparse-Dense, 01:35:57/00:00:00

Loopback1, Forward/Sparse-Dense, 01:35:57/00:00:00

Ethernet0/0, Forward/Sparse-Dense, 01:35:57/00:00:00

(10.4.4.144, 224.0.1.39), 00:38:03/00:02:56, flags: LT

Incoming interface: Loopback2, RPF nbr 0.0.0.0

Outgoing interface list:

Loopback0, Forward/Sparse-Dense, 00:38:03/00:00:00

Ethernet0/1, Forward/Sparse-Dense, 00:38:03/00:00:00

Loopback1, Forward/Sparse-Dense, 00:38:03/00:00:00

Ethernet0/0, Forward/Sparse-Dense, 00:38:03/00:00:00

(*, 224.0.1.40), 01:37:03/00:02:56, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 01:36:40/00:00:00

Loopback1, Forward/Sparse-Dense, 01:36:58/00:00:00

Ethernet0/1, Forward/Sparse-Dense, 01:37:01/00:00:00

R4#

Note that (*, 239.255.1.3) is flagged “D”: a group that is not assigned to either the PIM-SM or the bidir-PIM group list falls back to dense mode.
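
If this dense-mode fallback is not desirable for unassigned groups, it can be disabled; a minimal sketch, assuming an IOS image that supports the dm-fallback knob:

!! global configuration: groups with no known RP will no longer fall back to dense mode

no ip pim dm-fallback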

Conclusion

– PIM-Bidir is more efficient and more scalable than PIM-SM for a large number of multicast sources.

– The DF (Designated Forwarder) is elected on each segment or point-to-point link to establish a loop-free shared tree (RPT) with the RP as the root.

– (S,G) join messages are dropped by routers that support only Bidir-PIM.

– The RP never joins a path back to the source (there is no (S,G) tree) and never sends a register-stop.

– PIM-Bidir is capable of sending and receiving multicast traffic for the same group along the same shared tree.

PIM-SM, Auto-RP protocol


OVERVIEW

The two methods available for dynamically finding the RP in a multicast network are Auto-RP and BSR (Bootstrap Router). In this lab we will focus on the Cisco proprietary mechanism, PIM-SM Auto-RP, and we will also see how a certain “dense mode” is still haunting us : )

As we have seen in a previous post (PIM-SM static RP), manually configuring a Rendezvous Point (RP) is very simple and straightforward for small networks; however, in relatively big networks static RP configuration is burdensome and prone to errors.

With static RP configuration, each PIM-SM router connects to the preconfigured RP to join the multicast group 224.0.1.40 by building a shared tree (RPT) with it.

The obvious idea behind Auto-RP is to make PIM-SM routers learn the RP address dynamically; the chicken-and-egg problem is as follows:

PIM-SM routers have to join the multicast group 224.0.1.40 by building a shared tree (RPT) with the RP in order to learn the address of that very RP ??!???

a) One solution is to statically configure an initial RP so that PIM-SM routers can join the group 224.0.1.40 and perhaps learn other RPs that could override the statically configured one; this is again a manual intervention that breaks the concept and the logic behind “dynamic” learning, and what if the first (initial) RP fails??

ip pim rp-address <X.X.X.X> <acl_nbr> [override]

access-list <acl_nbr> permit <Y.Y.Y.Y>

access-list <acl_nbr> deny any

The group access list serves as a protection mechanism against rogue RPs: only the permitted groups will use the statically configured RP.
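
For illustration only, a sketch with hypothetical values (RP 10.5.5.5, ACL 10 and group 239.255.1.1):

ip pim rp-address 10.5.5.5 10 override

access-list 10 permit 239.255.1.1

access-list 10 deny any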

b) Finally, the origin of this issue is how to join the first two multicast groups, 224.0.1.39 and 224.0.1.40, so what if we use PIM-DM for that purpose? Here comes the concept of PIM sparse-dense mode: the multicast groups 224.0.1.39 and 224.0.1.40 are flooded through the network with PIM-DM, because all PIM-SM routers need to join them (which is consistent with the dense mode concept), and then all other groups will use the dynamically learned RP.

With this particular PIM mode, it is possible to configure on the RP which groups will use sparse mode and which groups will use dense mode.

The post is organized as follows:

CONFIGURATION

1) CONFIGURATION ANALYSIS

a) no multicast traffic in the network

b) With multicast traffic

2) RP REDUNDANCY

a) Primary RP failure

b) Back from failure

3) GROUP AND RP FILTERING POLICY

a) RP-based filtering

 

CONFIGURATION

Figure 1 illustrates the topology used in this lab.

Figure 1: topology

A best practice is to enable PIM sparse-dense mode on all interfaces that are likely to forward multicast traffic.

R1:

ip multicast-routing

int e0/0

ip pim sparse-dense-mode

int s1/1

ip pim sparse-dense-mode

int s1/0

ip pim sparse-dense-mode

R2:

ip multicast-routing

int s1/0

ip pim sparse-dense-mode

int s1/1

ip pim sparse-dense-mode

int s1/2

ip pim sparse-dense-mode

R3:

ip multicast-routing

int s1/0

ip pim sparse-dense-mode

int s1/1

ip pim sparse-dense-mode

int loo0

ip pim sparse-dense-mode

R5:

ip multicast-routing

int s1/0

ip pim sparse-dense-mode

int s1/1

ip pim sparse-dense-mode

int s1/2

ip pim sparse-dense-mode

int loo0

ip pim sparse-dense-mode

R4:

ip multicast-routing

int e0/0

ip pim sparse-dense-mode

int s1/0

ip pim sparse-dense-mode

int s1/1

ip pim sparse-dense-mode

R5:(primary RP – 10.5.5.5)

ip pim send-rp-announce Loopback0 scope 255 

R3:(Secondary RP – 10.3.3.3)

ip pim send-rp-announce Loopback0 scope 255 

R2:(mapping agent)

ip pim send-rp-discovery Loopback0 scope 255 
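
As a side note, instead of running sparse-dense mode on every interface, more recent IOS images can keep the interfaces in pure sparse mode and still flood the two Auto-RP groups in dense mode; a sketch, assuming an image that supports the Auto-RP listener feature:

!! global configuration on every PIM router

ip pim autorp listener

!! interfaces can then run plain sparse mode, for example:

interface Serial1/0

ip pim sparse-mode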

 

 

1) CONFIGURATION ANALYSIS

a) no multicast traffic in the network

R5

R5(config-if)#

*Mar 1 03:46:01.407: Auto-RP(0): Build announce entry for (224.0.0.0/4)

*Mar 1 03:46:01.407: Auto-RP(0): Send RP-Announce packet on Serial1/0

*Mar 1 03:46:01.411: Auto-RP(0): Send RP-Announce packet on Serial1/1

*Mar 1 03:46:01.415: Auto-RP(0): Send RP-Announce packet on Serial1/2

*Mar 1 03:46:01.419: Auto-RP: Send RP-Announce packet on Loopback0

R5(config-if)#do un all 

R5 is announcing itself to the mapping agents on 224.0.1.39

R2(config)#

*Mar 1 03:46:02.587: Auto-RP(0): Build RP-Discovery packet

*Mar 1 03:46:02.587: Auto-RP: Build mapping (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1,

*Mar 1 03:46:02.591: Auto-RP(0): Send RP-discovery packet on Serial1/0 (1 RP entries)

*Mar 1 03:46:02.595: Auto-RP(0): Send RP-discovery packet on Serial1/1 (1 RP entries)

*Mar 1 03:46:02.599: Auto-RP(0): Send RP-discovery packet on Serial1/2 (1 RP entries)

*Mar 1 03:46:02.603: Auto-RP: Send RP-discovery packet on Loopback0 (1 RP entries)

*Mar 1 03:46:18.471: Auto-RP(0): Received RP-announce, from 10.5.5.5, RP_cnt 1, ht 181

*Mar 1 03:46:18.475: Auto-RP(0): Update (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

*Mar 1 03:46:18.479: Auto-RP(0): Received RP-announce, from 10.5.5.5, RP_cnt 1, ht 181

*Mar 1 03:46:18.483: Auto-RP(0): Update (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

*Mar 1 03:46:33.235: Auto-RP(0): Received RP-announce, from 10.3.3.3, RP_cnt 1, ht 181

*Mar 1 03:46:33.239: Auto-RP(0): Update (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

*Mar 1 03:46:33.243: Auto-RP(0): Received RP-announce, from 10.3.3.3, RP_cnt 1, ht 181

*Mar 1 03:46:33.247: Auto-RP(0): Update (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

R2(config)#do un all

R2, the mapping agent, is receiving RP-announce messages from both RPs, R5 and R3; it updates its records and sends them to all PIM routers on 224.0.1.40.

R2(config)#do sh ip pim rp ma

PIM Group-to-RP Mappings

This system is an RP-mapping agent (Loopback0)

 

Group(s) 224.0.0.0/4


RP 10.5.5.5 (?), v2v1

Info source: 10.5.5.5 (?), elected via Auto-RP

Uptime: 00:07:18, expires: 00:02:51


RP 10.3.3.3 (?), v2v1

Info source: 10.3.3.3 (?), via Auto-RP

Uptime: 00:05:50, expires: 00:02:09

R2(config)# 

R2 has records for both R5 and R3, but elected R5 as the network RP because it has the higher IP address.

R1

R1(config)#

*Mar 1 03:46:03.683: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 1, ht 181

*Mar 1 03:46:03.687: Auto-RP(0): Update (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

R1(config)#do un all 

R1, the source-side PIM-SM router, has already joined the multicast group 224.0.1.40; it is therefore able to receive RP-discovery messages from the mapping agent R2, and it receives only the elected RP, R5 (10.5.5.5).

R4

R4(config)#

*Mar 1 03:45:46.459: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 1, ht 181

*Mar 1 03:45:46.463: Auto-RP(0): Update (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

R4(config)# 

The same applies to R4, the PIM DR (directly connected to the multicast client).

b) With multicast traffic

The multicast source connected to R1 starts sending traffic to the group 239.255.1.1 and the client connected to R4 is receiving it.

R1(config)#do sh ip pim rp

Group: 239.255.1.1, RP: 10.5.5.5, v2, uptime 00:17:25, expires 00:02:26

R1(config)# 

 

R4(config)#do sh ip pim rp

Group: 239.255.1.1, RP: 10.5.5.5, v2, v1, uptime 00:16:43, expires 00:02:09

R4(config)# 

Both R1 and R4 now have the correct information about the RP and the group it serves, 239.255.1.1.

R1 has registered itself with the RP, which built a source path tree (SPT) toward the source, and R4 has joined the shared tree (RPT) with the RP and received the first multicast packets forwarded by the RP.

Now R4 can build a direct SPT, since the path through the RP is not optimal, and start receiving multicast traffic directly from R1.

Let’s analyze the multicast routing tables on R4 and R5 (the RP), which can tell us a lot about the topology:

R4#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:03:33/stopped, RP 10.5.5.5, flags: SC

Incoming interface:
Serial1/1, RPF nbr 192.168.54.5

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 00:03:33/00:02:42

 

(10.10.10.1, 239.255.1.1), 00:03:33/00:02:57, flags: T

Incoming interface: Serial1/0, RPF nbr 192.168.14.1

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 00:03:33/00:02:42

 

(*, 224.0.1.39), 04:05:44/stopped, RP 0.0.0.0, flags: D

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial1/1, Forward/Sparse-Dense, 04:05:44/00:00:00

Serial1/0, Forward/Sparse-Dense, 04:05:44/00:00:00

 

(10.5.5.5, 224.0.1.39), 00:01:49/00:01:18, flags: PT

Incoming interface: Serial1/1, RPF nbr 192.168.54.5

Outgoing interface list:

Serial1/0, Prune/Sparse-Dense, 00:02:14/00:00:45

 

(*, 224.0.1.40), 04:06:11/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial1/1, Forward/Sparse-Dense, 04:06:10/00:00:00

Serial1/0, Forward/Sparse-Dense, 04:06:10/00:00:00

Ethernet0/0, Forward/Sparse-Dense, 04:06:11/00:00:00

 

(10.2.2.2, 224.0.1.40), 04:05:29/00:02:24, flags: LT

Incoming interface: Serial1/0, RPF nbr 192.168.14.1

Outgoing interface list:

Serial1/1, Prune/Sparse-Dense, 00:00:40/00:02:22, A

Ethernet0/0, Forward/Sparse-Dense, 04:05:29/00:00:00

 

R4# 

– (10.10.10.1, 239.255.1.1): PIM-SM has switched to the SPT, the shortest path tree (T: SPT-bit set), between the multicast source router 10.10.10.1 and R4, with s1/0 as the incoming interface and Ethernet0/0 as the outgoing interface.

– (*, 224.0.1.39) and (*, 224.0.1.40) are flagged with “D”, meaning these two groups were joined using dense mode so that R4 can dynamically learn about the RP through 224.0.1.40.

– (*, 239.255.1.1): this is the RPT, the shared tree used to reach the RP in the first phase before switching to the SPT, hence the “stopped” timer.

– (10.2.2.2, 224.0.1.40): this SPT between R4 and R2 (the mapping agent) forwards the 224.0.1.40 group traffic that carries the information about the RP.

R4#sh ip pim rp

Group: 239.255.1.1, RP: 10.5.5.5, v2, v1, uptime 04:10:28, expires 00:02:17

R4# 

On R5 (the RP):

R5(config-router)#do sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:46:31/00:03:29, RP 10.5.5.5, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial1/1, Forward/Sparse-Dense, 00:46:21/00:03:29

 

(10.10.10.1, 239.255.1.1), 00:46:31/00:02:35, flags: PT

Incoming interface: Serial1/1, RPF nbr 192.168.54.4

Outgoing interface list: Null

R5(config-router)# 

– (*, 239.255.1.1): this is the shared tree between the RP and all the PIM-SM routers that want to join the group 239.255.1.1.

– (10.10.10.1, 239.255.1.1) is the SPT built between R1 and R5.

 

2) RP REDUNDANCY

a) Primary RP failure

To simulate an RP failure in this particular topology, the R5 loopback interface serving as the RP address is shut down, so R5 no longer acts as the RP but still forwards traffic.

Let’s analyze the debug outputs from various routers in the network:

R2 (the mapping agent):

Before primary RP (R5) failure:

R2(config)#do sh ip pim rp mapp

PIM Group-to-RP Mappings

This system is an RP-mapping agent (Loopback0)

 

Group(s) 224.0.0.0/4


RP 10.5.5.5 (?), v2v1


Info source: 10.5.5.5 (?), elected via Auto-RP

Uptime: 05:42:21, expires: 00:01:39


RP 10.3.3.3 (?), v2v1


Info source: 10.3.3.3 (?), via Auto-RP

Uptime: 05:40:53, expires: 00:02:47

R2(config)# 

After shutting down R5 loopback0:

*Mar 1 09:25:14.685: Auto-RP(0): Received RP-announce, from 10.3.3.3, RP_cnt 1, ht 181

*Mar 1 09:25:14.689: Auto-RP(0): Update (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

*Mar 1 09:25:14.693: Auto-RP(0): Received RP-announce, from 10.3.3.3, RP_cnt 1, ht 181

*Mar 1 09:25:14.697: Auto-RP(0): Update (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

*Mar 1 09:26:07.821: Auto-RP(0): Mapping (224.0.0.0/4, RP:10.5.5.5) expired,

*Mar 1 09:26:07.857: Auto-RP(0): Build RP-Discovery packet

*Mar 1 09:26:07.857: Auto-RP: Build mapping (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1,

*Mar 1 09:26:07.861: Auto-RP(0): Send RP-discovery packet on Serial1/0 (1 RP entries)

*Mar 1 09:26:07.865: Auto-RP(0): Send RP-discovery packet on Serial1/1 (1 RP entries)

*Mar 1 09:26:07.869: Auto-RP(0): Send RP-discovery packet on Serial1/2 (1 RP entries)

R2 received R3’s RP-announce messages but has not received R5’s RP-announces for 3 x (announce interval) = 180 seconds, so the group-to-RP mapping for RP 10.5.5.5 has expired; consequently R3 is selected as the RP for the group and advertised to all PIM-SM routers through RP-discovery messages.
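
If faster failover is required, the announce interval can be lowered on the candidate RPs, which shrinks the 3 x interval holdtime accordingly; a sketch with a hypothetical 10-second interval (the advertised holdtime would then be 30 seconds):

ip pim send-rp-announce Loopback0 scope 255 interval 10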

R1:

R1(config)#

*Mar 1 09:27:09.157: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 1, ht 181

*Mar 1 09:27:09.161: Auto-RP(0): Update (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

R1(config)#

 

R1(config)#do sh ip pim rp

Group: 239.255.1.1, RP: 10.3.3.3, v2, uptime 00:15:45, expires 00:02:06

R1(config)# 

R1 has received the RP-discovery message and updated its RP information.

R4:

R4#

*Mar 1 09:27:01.177: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 1, ht 181

*Mar 1 09:27:01.181: Auto-RP(0): Update (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

R4# 

 

R4#sh ip pim rp

Group: 239.255.1.1, RP: 10.3.3.3, v2, v1, uptime 00:16:58, expires 00:02:56

R4# 

The same for R4.

b) Back from failure

R2:

*Mar 1 09:51:04.429: Auto-RP(0): Received RP-announce, from 10.5.5.5, RP_cnt 1, ht 181

*Mar 1 09:51:04.437: Auto-RP(0): Added with (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

*Mar 1 09:51:04.465: Auto-RP(0): Build RP-Discovery packet

*Mar 1 09:51:04.469: Auto-RP: Build mapping (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1,

*Mar 1 09:51:04.473: Auto-RP(0): Send RP-discovery packet on Serial1/0 (1 RP entries)

*Mar 1 09:51:04.477: Auto-RP(0): Send RP-discovery packet on Serial1/1 (1 RP entries)

*Mar 1 09:51:04.481: Auto-RP(0): Send RP-discovery packet on Serial1/2 (1 RP entries)

R2(config)#do sh ip pim rp mapp

PIM Group-to-RP Mappings

This system is an RP-mapping agent (Loopback0)

 

Group(s) 224.0.0.0/4


RP 10.5.5.5 (?), v2v1


Info source: 10.5.5.5 (?), elected via Auto-RP

Uptime: 00:00:53, expires: 00:02:02


RP 10.3.3.3 (?), v2v1


Info source: 10.3.3.3 (?), via Auto-RP

Uptime: 06:08:25, expires: 00:02:15

R2(config)# 

The mapping agent is once again receiving RP-announce messages from R5, whose RP address (10.5.5.5) is higher than R3’s (10.3.3.3), so it elects R5 and advertises it to all PIM-SM routers through RP-discovery messages.

R1:

R1(config)#

*Mar 1 09:57:01.997: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 1, ht 181

*Mar 1 09:57:02.001: Auto-RP(0): Update (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

R1(config)# 

 

R1(config)#do sh ip pim rp mapp

PIM Group-to-RP Mappings

 

Group(s) 224.0.0.0/4


RP 10.5.5.5 (?), v2v1


Info source: 10.2.2.2 (?), elected via Auto-RP

Uptime: 00:06:58, expires: 00:02:55

R1(config)# 

R4:

R4#

*Mar 1 09:59:53.349: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 1, ht 181

*Mar 1 09:59:53.353: Auto-RP(0): Update (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

R4#

R4#

R4# sh ip pim rp mapp

PIM Group-to-RP Mappings

 

Group(s) 224.0.0.0/4

RP 10.5.5.5 (?), v2v1


Info source: 10.2.2.2 (?), elected via Auto-RP

Uptime: 00:09:04, expires: 00:02:46

R4# 

R1 and R4 update their group-to-RP mapping information.

 

3) GROUP AND RP FILTERING POLICY

a) RP-based filtering

Each RP will be responsible for a particular group: R5 for 239.255.1.1 and R3 for 239.255.1.2.

R3:

ip pim send-rp-announce Loopback0 scope 255 group-list RP_GROUP_ACL

!

!

ip access-list standard RP_GROUP_ACL

permit 239.255.1.2

deny any

R5:

ip pim send-rp-announce Loopback0 scope 255 group-list RP_GROUP_ACL

!

!

ip access-list standard RP_GROUP_ACL

permit 239.255.1.1

deny any

R2 (mapping agent):

*Mar 1 10:15:29.565: Auto-RP(0): Received RP-announce, from 10.3.3.3, RP_cnt 1, ht 181

*Mar 1 10:15:29.569: Auto-RP(0): Update (239.255.1.2/32, RP:10.3.3.3), PIMv2 v1

*Mar 1 10:15:29.573: Auto-RP(0): Update (-224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

*Mar 1 10:15:29.577: Auto-RP(0): Received RP-announce, from 10.3.3.3, RP_cnt 1, ht 181

*Mar 1 10:15:29.581: Auto-RP(0): Update (239.255.1.2/32, RP:10.3.3.3), PIMv2 v1

*Mar 1 10:15:29.585: Auto-RP(0): Update (-224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

*Mar 1 10:15:30.269: Auto-RP(0): Build RP-Discovery packet

*Mar 1 10:15:30.269: Auto-RP: Build mapping (-224.0.0.0/4, RP:10.5.5.5), PIMv2 v1,

*Mar 1 10:15:30.273: Auto-RP: Build mapping (239.255.1.1/32, RP:10.5.5.5), PIMv2 v1.

*Mar 1 10:15:30.277: Auto-RP: Build mapping (239.255.1.2/32, RP:10.3.3.3), PIMv2 v1.

*Mar 1 10:15:30.285: Auto-RP(0): Send RP-discovery packet on Serial1/0 (2 RP entries)

*Mar 1 10:15:30.289: Auto-RP(0): Send RP-discovery packet on Serial1/1 (2 RP entries)

*Mar 1 10:15:30.293: Auto-RP(0): Send RP-discovery packet on Serial1/2 (2 RP entries)

R2 has received the RP-announce messages (on 224.0.1.39) from both RPs with the relevant groups; the negative entry (-224.0.0.0/4) means the RP announces itself as denied for all the remaining multicast groups.

After populating its group-to-RP mapping information, R2 sends it to all PIM-SM routers on 224.0.1.40.

R2(config)#do sh ip pim rp mapp

PIM Group-to-RP Mappings

This system is an RP-mapping agent (Loopback0)

 

Group(s) (-)224.0.0.0/4


RP 10.5.5.5 (?), v2v1

Info source: 10.5.5.5 (?), elected via Auto-RP

Uptime: 00:28:22, expires: 00:02:03

RP 10.3.3.3 (?), v2v1

Info source: 10.3.3.3 (?), via Auto-RP

Uptime: 06:35:53, expires: 00:02:01

Group(s) 239.255.1.1/32


RP 10.5.5.5 (?), v2v1

Info source: 10.5.5.5 (?), elected via Auto-RP

Uptime: 00:05:55, expires: 00:02:02

Group(s) 239.255.1.2/32


RP 10.3.3.3 (?), v2v1

Info source: 10.3.3.3 (?), elected via Auto-RP

Uptime: 00:05:56, expires: 00:01:58

R2(config)#

R4:

R4#

*Mar 1 10:16:22.505: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 2, ht 181

*Mar 1 10:16:22.509: Auto-RP(0): Update (-224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

*Mar 1 10:16:22.513: Auto-RP(0): Update (239.255.1.1/32, RP:10.5.5.5), PIMv2 v1

*Mar 1 10:16:22.517: Auto-RP(0): Update (239.255.1.2/32, RP:10.3.3.3), PIMv2 v1

R4# 

R4 has received the RP-discovery messages from R2, the mapping agent.

R4# sh ip pim rp mapp

PIM Group-to-RP Mappings

 

Group(s) (-)224.0.0.0/4


RP 10.5.5.5 (?), v2v1

Info source: 10.2.2.2 (?), elected via Auto-RP

Uptime: 00:28:52, expires: 00:02:29

Group(s) 239.255.1.1/32


RP 10.5.5.5 (?), v2v1

Info source: 10.2.2.2 (?), elected via Auto-RP

Uptime: 00:06:25, expires: 00:02:27

Group(s) 239.255.1.2/32


RP 10.3.3.3 (?), v2v1

Info source: 10.2.2.2 (?), elected via Auto-RP

Uptime: 00:06:26, expires: 00:02:29

R4# 

Now R4 has updated its group-to-RP mapping information as configured on the RPs:

R5 will be responsible for the multicast group 239.255.1.1

R3 will be responsible for the multicast group 239.255.1.2

and any other multicast traffic will be dropped.
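
The same policy can also be enforced on the mapping agent itself, which then only accepts announcements from the expected RP for the expected group; a minimal sketch on R2, where the ACL numbers are arbitrary assumptions:

!! on R2, the mapping agent: accept 10.5.5.5 only for 239.255.1.1 and 10.3.3.3 only for 239.255.1.2

ip pim rp-announce-filter rp-list 1 group-list 11

ip pim rp-announce-filter rp-list 2 group-list 12

access-list 1 permit 10.5.5.5

access-list 11 permit 239.255.1.1

access-list 2 permit 10.3.3.3

access-list 12 permit 239.255.1.2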

CONCLUSION

PIM sparse-dense mode uses dense mode for the 224.0.1.39 and 224.0.1.40 groups so that PIM-SM routers can join them and dynamically learn about the RPs; all other multicast groups are then joined using the PIM-SM mechanism.

CGMP (Cisco Group Management Protocol)


The following flash animation shows the mechanism of CGMP between a multicast router, a switch, and a multicast client:

IGMPv2
