PIM over multi-access, Prune override


Figure 1 illustrates the topology used to inspect the PIM prune override message.

Figure 1: topology

Both Client1 and Client2 are receiving the multicast traffic, and both R3's e0/1 and R4's e0/1 interfaces are in PIM dense-mode forwarding state, as shown here:

R3:

R3#sh ip mroute

IP Multicast Routing Table

Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,

L - Local, P - Pruned, R - RP-bit set, F - Register flag,

T - SPT-bit set, J - Join SPT, M - MSDP created entry,

X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,

U - URD, I - Received Source Specific Host Report,

Z - Multicast Tunnel, z - MDT-data group sender,

Y - Joined MDT-data group, y - Sending to MDT-data group

Outgoing interface flags: H - Hardware switched, A - Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:11:16/stopped, RP 0.0.0.0, flags: DC

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/1, Forward/Dense, 00:11:16/00:00:00

Ethernet0/0, Forward/Dense, 00:11:16/00:00:00

 

(10.10.10.1, 239.255.1.1), 00:08:08/00:02:52, flags: T

Incoming interface: Ethernet0/1, RPF nbr 192.168.41.2

Outgoing interface list:

Ethernet0/0, Forward/Dense, 00:08:08/00:00:00

 

(*, 224.0.1.40), 00:11:20/00:02:32, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/1, Forward/Dense, 00:11:18/00:00:00

Ethernet0/0, Forward/Dense, 00:11:20/00:00:00

 

R3#

R4:

R4#sh ip mroute

IP Multicast Routing Table

Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,

L - Local, P - Pruned, R - RP-bit set, F - Register flag,

T - SPT-bit set, J - Join SPT, M - MSDP created entry,

X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,

U - URD, I - Received Source Specific Host Report,

Z - Multicast Tunnel, z - MDT-data group sender,

Y - Joined MDT-data group, y - Sending to MDT-data group

Outgoing interface flags: H - Hardware switched, A - Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:10:48/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Dense, 00:06:46/00:00:00

Ethernet0/1, Forward/Dense, 00:10:48/00:00:00

 

(10.10.10.1, 239.255.1.1), 00:07:34/00:02:59, flags: LT

Incoming interface: Ethernet0/1, RPF nbr 192.168.41.2

Outgoing interface list:

Ethernet0/0, Forward/Dense, 00:06:46/00:00:00

 

(*, 224.0.1.40), 00:10:49/00:02:00, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/1, Forward/Dense, 00:10:43/00:00:00

Ethernet0/0, Forward/Dense, 00:10:49/00:00:00

 

(*, 239.255.255.250), 00:10:46/00:02:09, RP 0.0.0.0, flags: DC

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/1, Forward/Dense, 00:10:45/00:00:00

Ethernet0/0, Forward/Dense, 00:10:48/00:00:00

 

R4#

Now suppose that one of the clients, Client1 for example, no longer wants to receive the multicast traffic. To inspect PIM behavior, we enabled debugging of PIM messages (`debug ip pim`) on R3, R4 and R2:

R3#

*Mar 1 00:24:29.487: PIM(0): Insert (10.10.10.1,239.255.1.1) prune in nbr 192.168.41.2's queue

*Mar 1 00:24:29.495: PIM(0): Building Join/Prune packet for nbr 192.168.41.2

*Mar 1 00:24:29.499: PIM(0): Adding v2 (10.10.10.1/32, 239.255.1.1) Prune

*Mar 1 00:24:29.503: PIM(0): Send v2 join/prune to 192.168.41.2 (Ethernet0/1)

*Mar 1 00:24:32.479: PIM(0): Received v2 Join/Prune on Ethernet0/1 from 192.168.41.4, not to us

*Mar 1 00:24:32.483: PIM(0): Join-list: (10.10.10.1/32, 239.255.1.1)

R3#

R3 immediately built a Prune message and sent it to the upstream router R2 in order to remove itself from the SPT for (10.10.10.1, 239.255.1.1); its mroute entry is now flagged PT (Pruned, SPT-bit set).

The last two messages are R4's reaction to this prune: R4 sent a Join message toward R2, signalling that it still wants to receive the multicast traffic.
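R3's decision can be sketched in a few lines: when the last receiver leaves, the outgoing interface list (OIL) of the (S, G) entry becomes empty, the entry is flagged Pruned, and a Prune is generated toward the upstream neighbor. This is only an illustrative Python sketch; the class and method names are hypothetical, not IOS internals.

```python
# Illustrative sketch only: names are hypothetical, not real IOS internals.

class MrouteEntry:
    """A minimal (S, G) multicast routing entry."""

    def __init__(self, source, group, oil):
        self.source, self.group = source, group
        self.oil = set(oil)       # outgoing interface list (OIL)
        self.flags = {"T"}        # SPT-bit set, as in the output above

    def receiver_left(self, interface):
        """Last IGMP member left on `interface`; prune if the OIL empties."""
        self.oil.discard(interface)
        if not self.oil:
            self.flags.add("P")   # entry becomes Pruned -> flags: PT
            return ("PRUNE", self.source, self.group)
        return None               # other receivers remain, keep forwarding

entry = MrouteEntry("10.10.10.1", "239.255.1.1", ["Ethernet0/0"])
print(entry.receiver_left("Ethernet0/0"))  # ('PRUNE', '10.10.10.1', '239.255.1.1')
print(sorted(entry.flags))                 # ['P', 'T']
```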

R3#sh ip mroute

IP Multicast Routing Table

Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,

L - Local, P - Pruned, R - RP-bit set, F - Register flag,

T - SPT-bit set, J - Join SPT, M - MSDP created entry,

X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,

U - URD, I - Received Source Specific Host Report,

Z - Multicast Tunnel, z - MDT-data group sender,

Y - Joined MDT-data group, y - Sending to MDT-data group

Outgoing interface flags: H - Hardware switched, A - Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:26:20/stopped, RP 0.0.0.0, flags: DC

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/1, Forward/Dense, 00:26:20/00:00:00

 

(10.10.10.1, 239.255.1.1), 00:23:12/00:00:50, flags: PT

Incoming interface: Ethernet0/1, RPF nbr 192.168.41.2


Outgoing interface list: Null

 

(*, 224.0.1.40), 00:26:23/00:02:35, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/1, Forward/Dense, 00:26:21/00:00:00

Ethernet0/0, Forward/Dense, 00:26:23/00:00:00

 

R3#

From R3's point of view the work is done. But the network connecting R2, R3 and R4 is multi-access (Ethernet), so R2, the upstream router, could stop forwarding the multicast traffic out of its e0/1 interface and prevent R4, and its client Client2, from receiving it.

From R4's point of view:

R4#

*Mar 1 00:24:33.315: PIM(0): Received v2 Join/Prune on Ethernet0/1 from 192.168.41.3, not to us

*Mar 1 00:24:33.319: PIM(0): Prune-list: (10.10.10.1/32, 239.255.1.1)

*Mar 1 00:24:33.323: PIM(0): Set join delay timer to 3000 msec for (10.10.10.1/32, 239.255.1.1) on Ethernet0/1

*Mar 1 00:24:36.231: PIM(0): Insert (10.10.10.1,239.255.1.1) join in nbr 192.168.41.2's queue

*Mar 1 00:24:36.235: PIM(0): Building Join/Prune packet for nbr 192.168.41.2

*Mar 1 00:24:36.239: PIM(0): Adding v2 (10.10.10.1/32, 239.255.1.1) Join

*Mar 1 00:24:36.243: PIM(0): Send v2 join/prune to 192.168.41.2 (Ethernet0/1)

R4#

 

R4 also received the prune message that R3 sent to R2 (Figure 2); this is indicated by "not to us", since on an Ethernet segment every PIM router hears the message. R4 knows that if it does not react within the 3-second join delay timer, R2 will stop sending the multicast traffic, so it sends a so-called prune override (Figure 3), which is simply a PIM Join message telling R2 that it still wants to be on the SPT for (10.10.10.1, 239.255.1.1).
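The override logic on the downstream side can be sketched as follows: a router that overhears a prune for an (S, G) it still needs schedules a Join after a short delay bounded by the override interval (3000 msec in the debug above), then sends it if the need persists. A minimal, illustrative Python sketch; all names are hypothetical, not IOS internals:

```python
import random

# Illustrative sketch of the prune-override reaction on a multi-access
# link; names are hypothetical, not real IOS internals.

JOIN_DELAY_MSEC = 3000  # override interval seen in the debug output

class DownstreamRouter:
    def __init__(self, name, needed_groups):
        self.name = name
        self.needed = set(needed_groups)   # (S, G) pairs we still need
        self.pending_overrides = {}        # (S, G) -> fire time in msec

    def on_overheard_prune(self, sg, now_msec):
        """Another router pruned (S, G) toward the shared upstream."""
        if sg in self.needed and sg not in self.pending_overrides:
            # Random delay below the interval spreads out overrides when
            # several downstream routers need the same (S, G).
            delay = random.uniform(0, JOIN_DELAY_MSEC)
            self.pending_overrides[sg] = now_msec + delay

    def tick(self, now_msec):
        """Return the Join messages (prune overrides) due at this time."""
        joins = []
        for sg, fire_at in list(self.pending_overrides.items()):
            if now_msec >= fire_at:
                joins.append(("JOIN", sg))
                del self.pending_overrides[sg]
        return joins

r4 = DownstreamRouter("R4", [("10.10.10.1", "239.255.1.1")])
r4.on_overheard_prune(("10.10.10.1", "239.255.1.1"), now_msec=0)
print(r4.tick(now_msec=JOIN_DELAY_MSEC))  # the override Join is sent
```

The random delay keeps multiple downstream routers from all overriding at once; only one Join per (S, G) is needed to keep the upstream interface forwarding.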

So R2 received a Prune message from R3 and a Join message from R4, and updated the forwarding state of its e0/1 interface:

R2#

*Mar 1 00:24:21.223: PIM(0): Received v2 Join/Prune on Ethernet0/1 from 192.168.41.3, to us

*Mar 1 00:24:21.227: PIM(0): Prune-list: (10.10.10.1/32, 239.255.1.1)

*Mar 1 00:24:24.203: PIM(0): Received v2 Join/Prune on Ethernet0/1 from 192.168.41.4, to us

*Mar 1 00:24:24.207: PIM(0): Join-list: (10.10.10.1/32, 239.255.1.1)

*Mar 1 00:24:24.211: PIM(0): Update Ethernet0/1/192.168.41.4 to (10.10.10.1, 239.255.1.1), Forward state, by PIM SG Join

R2#
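On the upstream side, R2's behavior amounts to a prune-pending window: after receiving a Prune on a multi-access interface, it keeps forwarding for a short period during which an overriding Join from any other downstream router cancels the prune. A minimal, illustrative Python sketch; names are hypothetical, not IOS internals:

```python
# Illustrative sketch of upstream prune-pending behavior on a LAN
# interface; names are hypothetical, not real IOS internals.

PRUNE_PENDING_MSEC = 3000  # hold the interface this long before pruning

class UpstreamInterface:
    def __init__(self):
        self.forwarding = True
        self.prune_at = None   # time at which the prune takes effect

    def on_prune(self, now_msec):
        # Do not stop forwarding yet: another downstream router on the
        # same segment may still override this prune with a Join.
        if self.prune_at is None:
            self.prune_at = now_msec + PRUNE_PENDING_MSEC

    def on_join(self, now_msec):
        # An overriding Join cancels the pending prune.
        self.prune_at = None
        self.forwarding = True

    def tick(self, now_msec):
        if self.prune_at is not None and now_msec >= self.prune_at:
            self.forwarding = False
            self.prune_at = None

e0_1 = UpstreamInterface()
e0_1.on_prune(now_msec=0)     # R3's prune arrives
e0_1.on_join(now_msec=2900)   # R4's override Join arrives in time
e0_1.tick(now_msec=3000)
print(e0_1.forwarding)        # True: forwarding continues
```

With R4's Join arriving inside the window, the interface stays in Forward state, matching the "Update Ethernet0/1 ... Forward state, by PIM SG Join" debug line above.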

 

Figure 2: Prune from R3 to R2

Figure 3: Prune override (Join) from R4 to R2

 

As a result, multicast forwarding continues over the multi-access segment between R2 and R4.

R2#mstat 10.10.10.1 192.168.44.4 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 192.168.44.4 via group 239.255.1.1

From source (?) to destination (?)

Waiting to accumulate statistics...

Results after 10 seconds:

 

 Source          Response Dest   Packet Statistics For     Only For Traffic
 10.10.10.1      1.1.1.2         All Multicast Traffic     From 10.10.10.1
      |  __/     rtt 136 ms      Lost/Sent = Pct  Rate     To 239.255.1.1
      v /        hop -11 s       ---------------------     --------------------
 10.10.10.4
 1.1.1.1         ?
      |     ^    ttl   0
      v     |    hop  11 s       -7/92 = --%  9 pps        -4/92 = --%  9 pps
 1.1.1.2
 192.168.41.2    ?
      |     ^    ttl   1
      v     |    hop -11 s       -6/96 = --%  9 pps        0/96 = 0%    9 pps
 192.168.41.4    ?
      |      \__ ttl   2
      v         \ hop  11 s      0    0 pps                96    9 pps
 192.168.44.4    1.1.1.2
 Receiver        Query Source

 

R2#

Here are the configuration commands needed for this lab:

R1:

ip multicast-routing

!

interface Ethernet0/0

ip address 10.10.10.4 255.255.255.0

ip pim dense-mode

!

interface Ethernet0/1

ip address 1.1.1.1 255.255.255.0

ip pim dense-mode

!

!

router eigrp 10

network 1.1.1.0 0.0.0.255

network 10.10.10.0 0.0.0.255

no auto-summary

R2:

ip multicast-routing

!

interface Ethernet0/0

ip address 1.1.1.2 255.255.255.0

ip pim dense-mode

!

interface Ethernet0/1

ip address 192.168.41.2 255.255.255.0

ip pim dense-mode

!

!

router eigrp 10

network 1.1.1.0 0.0.0.255

network 192.168.41.0

no auto-summary

R3:

ip multicast-routing

!

interface Ethernet0/0

ip address 192.168.40.1 255.255.255.0

ip pim dense-mode

!

interface Ethernet0/1

ip address 192.168.41.3 255.255.255.0

ip pim dense-mode

!

!

router eigrp 10

network 192.168.40.0

network 192.168.41.0

no auto-summary

R4:

ip multicast-routing

!

interface Ethernet0/0

ip address 192.168.44.44 255.255.255.0

ip pim dense-mode

!

interface Ethernet0/1

ip address 192.168.41.4 255.255.255.0

ip pim dense-mode

!

!

router eigrp 10

network 192.168.41.0

network 192.168.44.0

no auto-summary
