Bidirectional PIM (Bidir-PIM)


Overview

PIM-SM relies on the RP (Rendezvous Point) to manage the forwarding of a group's multicast traffic along a shared tree (*,G), the RPT, downstream from the RP to the receivers, and along a Shortest Path Tree (SPT) between the RP and the PIM-SM router to which the multicast source is connected. The number of these SPTs grows with the number of multicast sources, which makes PIM-SM inefficient for networks with many sources.

Bidirectional PIM behaves a little differently from PIM-SM, particularly regarding the SPT: there are no source-based trees at all. Instead, the RP builds a single shared tree through which routers attached to sources forward traffic upstream toward the RP, while the RP uses the same shared tree to forward the traffic downstream to the receivers; this lets Bidir-PIM scale to any number of sources without additional overhead.

Because there is no SPT, there can be no SPT switchover, and forwarded traffic always passes through the RP.

Consequently, Bidir-PIM routers do not perform the RPF (Reverse Path Forwarding) check that would prevent traffic from being sent back out the interface on which it was received. Recall that in networks with redundant links, RPF is what normally avoids loops, by verifying that multicast traffic is received on the interface through which the multicast source is reachable.
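For reference, the RPF information a router computes for a given source can be displayed with the standard IOS command below (source address taken from this lab; output omitted):

show ip rpf 10.10.10.1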

Instead, Bidir-PIM uses the concept of an elected Designated Forwarder (DF) to establish a loop-free shared tree rooted at the RP.

– On each point-to-point link and on every network segment, one DF is elected for every RP of bidirectional groups; the DF is responsible for forwarding multicast traffic received on that network.

– The DF election is based on the best unicast routing metric of the path to the RP, and it considers only one path (there are no Assert messages). The comparison order is as follows:

– Best Administrative Distance.

– Best Metric.

– And then highest IP address.

This way, multiple routers can be elected DF on a segment, one for each RP, and one router can be elected DF on more than one interface.
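As an illustration, with hypothetical values not taken from this lab, consider two routers on the same segment offering these paths to the RP:

RouterA: route to the RP learned via EIGRP, AD 90,  metric 409600
RouterB: route to the RP learned via OSPF,  AD 110, metric 2
=> RouterA wins the DF election: administrative distance is compared
   first (90 < 110), so its larger metric value never comes into play;
   the interface IP address would break the tie only if both AD and
   metric were equal.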

Bidir-PIM and traditional PIM-SM can coexist perfectly in the same PIM domain, even using the same RPs.

Figure 1 illustrates the network topology used to configure and analyse Bidirectional PIM.

Figure 1: Network Topology
The lab is organized as follows:

Overview

Configuration

  • Traffic downstream along the shared tree
  • Traffic upstream along the shared tree
  • Add unidirectional PIM-SM group multicast traffic along with Bidirectional PIM traffic
  • Multicast another group, 239.255.1.3, not assigned in either PIM access list

Configuration

1) Enable multicast routing

ip multicast-routing

2) Enable Bidirectional PIM

ip pim bidir-enable

3) Enable PIM (sparse-dense mode) on all interfaces that can potentially handle multicast traffic:

interface <int>
ip pim sparse-dense-mode

4) Configure the routing protocol and make sure that connectivity works and that any loopback interfaces used are advertised and reachable.

5) Depending on your network, you can configure the RP and the mapping agent on different routers or on the same one. As in PIM-SM, the center of the multicast network is the RP, which must be configured to work as a bidirectional RP by adding the keyword “bidir”:

ip pim send-rp-announce <Loopback_int> scope <TTL> {group-list <ACL>} bidir

Do not use the same loopback interface for both the RP and the mapping agent: PIM routers expect different IP addresses for the two functions.

ip pim send-rp-discovery scope <TTL> {<Loopback_int>}
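A minimal sketch of that separation, reusing the RP address 10.4.4.4 and the mapping-agent address 10.4.4.40 that appear later in this lab (interface numbering assumed):

interface Loopback0
 ip address 10.4.4.4 255.255.255.255
!
interface Loopback1
 ip address 10.4.4.40 255.255.255.255
!
ip pim send-rp-announce Loopback0 scope 32 bidir
ip pim send-rp-discovery Loopback1 scope 32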

Traffic downstream along the shared tree

In this case the traffic flow is exactly the same as with PIM-SM, because of the placement of the client in the topology: it is not on the path between the source and the RP.

The configuration is exactly as described earlier, with the RP configured as follows:

ip pim bidir-enable
ip multicast-routing
ip pim send-rp-announce Loopback0 scope 32 bidir
ip pim send-rp-discovery scope 32
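Before reading the captures below, these standard IOS show commands are the quickest way to verify the result (output omitted here):

show ip pim rp mapping      ! RP-to-group mappings learned through Auto-RP
show ip pim interface df    ! per-interface DF election results
show ip mroute              ! the multicast routing table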

R4 (RP & mapping agent):

R4#
*Mar 1 03:36:59.263: Auto-RP(0): Build RP-Discovery packet
*Mar 1 03:36:59.267: Auto-RP: Build mapping (224.0.0.0/4[bidir], RP:10.4.4.4), PIMv2 v1,
*Mar 1 03:36:59.271: Auto-RP(0): Send RP-discovery packet on Ethernet0/0 (1 RP entries)
*Mar 1 03:36:59.275: Auto-RP(0): Send RP-discovery packet on Ethernet0/1 (1 RP entries)
R4#

Router R4 builds the RP-to-group mapping with itself as the RP and sends it out all its interfaces to R2 and R5.

R2#mrinfo
172.16.12.2 [version 12.3] [flags: PMSA]:
172.16.12.2 -> 172.16.12.1 [1/0/pim/querier]
172.16.24.2 -> 172.16.24.4 [1/0/pim]
172.16.23.2 -> 0.0.0.0 [1/0/pim/querier/leaf]
R2#
R4#mrinfo
172.16.24.4 [version 12.3] [flags: PMSA]:
172.16.24.4 -> 172.16.24.2 [1/0/pim/querier]
172.16.45.4 -> 172.16.45.5 [1/0/pim]
10.4.4.4 -> 0.0.0.0 [1/0/pim/querier/leaf]
R4#
R5#mrinfo
172.16.45.5 [version 12.3] [flags: PMSA]:
192.168.40.5 -> 0.0.0.0 [1/0/pim/querier/leaf]
172.16.45.5 -> 172.16.45.4 [1/0/pim/querier]
R5#

“mrinfo” gives information about PIM neighbor connectivity.

Because PIM is enabled on the loopback interfaces, each router shows its loopback as a leaf with an end-point connected to it (e.g. “10.4.4.4 -> 0.0.0.0 [1/0/pim/querier/leaf]”).

R2:

*Mar 1 03:25:50.555: Auto-RP(0): Received RP-discovery, from 172.16.24.4 , RP_cnt 1, ht 181
*Mar 1 03:25:50.559: Auto-RP(0): Update (224.0.0.0/4, RP:10.4.4.4), PIMv2 v1, bidi
R2#
R2#sh ip pim rp
Group: 239.255.1.1, RP: 10.4.4.4, v2, v1, uptime 00:02:23, expires 00:01:59
R2#

R5:

*Mar 1 03:25:52.223: Auto-RP(0): Received RP-discovery, from 172.16.45.4 , RP_cnt 1, ht 181
*Mar 1 03:25:52.227: Auto-RP(0): Update (224.0.0.0/4, RP:10.4.4.4), PIMv2 v1, bidir
R5#
R5#sh ip pim rp
Group: 239.255.1.1, RP: 10.4.4.4, v2, v1, uptime 00:02:13, expires 00:02:09
R5#

All PIM routers have already joined the Auto-RP groups 224.0.1.39 and 224.0.1.40, received the RP-Discovery from R4, and populated the RP-to-group record.

In our case the mapping agent is not tied to a particular loopback interface, so it uses the IP address of the outgoing interface; that is why R2 and R5 see different MA addresses.

R5#mtrace 10.10.10.1 192.168.40.104 239.255.1.1
Type escape sequence to abort.
Mtrace from 10.10.10.1 to 192.168.40.104 via group 239.255.1.1
From source (?) to destination (?)
Querying full reverse path...
 0  192.168.40.104
-1  192.168.40.5 PIM [10.10.10.0/24]
-2  172.16.45.4 PIM Reached RP/Core [10.10.10.0/24]
R5#

The output of “mtrace” shows the shared tree that was built, with the RP (R4) as its root.
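For readers unfamiliar with the command, its arguments are source, destination and group, and it walks the RPF path backwards from the destination, as run on R5 above:

mtrace <source> <destination> <group>
e.g.: mtrace 10.10.10.1 192.168.40.104 239.255.1.1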

R5#sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.1.1), 02:16:30/00:02:54, RP 10.4.4.4, flags: BC
  Bidir-Upstream: Ethernet0/0, RPF nbr 172.16.45.4
  Outgoing interface list:
    Ethernet0/1, Forward/Sparse-Dense, 00:21:22/00:02:25
    Ethernet0/0, Bidir-Upstream/Sparse-Dense, 00:21:22/00:00:00

(*, 224.0.1.39), 02:14:36/stopped, RP 0.0.0.0, flags: DC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/0, Forward/Sparse-Dense, 02:14:36/00:00:00

(10.4.4.4, 224.0.1.39), 00:01:36/00:01:23, flags: PTX
  Incoming interface: Ethernet0/0, RPF nbr 172.16.45.4
  Outgoing interface list: Null

(*, 224.0.1.40), 02:16:31/stopped, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/0, Forward/Sparse-Dense, 02:16:22/00:00:00
    Ethernet0/1, Forward/Sparse-Dense, 02:16:32/00:00:00

(172.16.45.4, 224.0.1.40), 00:20:58/00:02:54, flags: LT
  Incoming interface: Ethernet0/0, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/1, Forward/Sparse-Dense, 00:20:58/00:00:00

R5#

(*, 239.255.1.1) – This is the shared tree built by the PIM router with the receiver attached. Note that interface E0/0 is both the Bidir-Upstream (incoming) interface and an outgoing interface, which would not be possible with PIM-SM: Bidir-PIM can receive multicast traffic for group 239.255.1.1 on this interface and at the same time forward traffic for the same group out of it.

R1#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/0 10.4.4.4 10.10.10.2 435200 00:22:49
Ethernet0/1 10.4.4.4 172.16.12.2 409600 00:22:48
R1#
R2#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/0 10.4.4.4 172.16.12.2 409600 00:23:26
Ethernet0/1 10.4.4.4 172.16.24.4 0 00:23:27
Ethernet0/2 10.4.4.4 172.16.23.2 409600 00:23:26
R2#
R4#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/0 10.4.4.4 172.16.24.4 0 02:16:39
Ethernet0/1 10.4.4.4 172.16.45.4 0 01:00:05
Loopback0 10.4.4.4 10.4.4.4 0 02:16:39
R4#
R5#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/1 10.4.4.4 192.168.40.5 409600 00:24:41
Ethernet0/0 10.4.4.4 172.16.45.4 0 00:24:41
R5#

Figure 2 depicts the elected DFs according to the output of “show ip pim interface df” on R1, R2, R4 and R5.

Figure 2: elected DF on each segment

Traffic upstream along the shared tree
In this case a client connected to R3 requests the multicast traffic; R3 attaches to R2, which sits on the source's R2-R4 path toward the RP.

R1#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/0 10.4.4.4 10.10.10.2 435200 00:07:20
Ethernet0/1 10.4.4.4 172.16.12.2 409600 00:07:20
R1#
R2#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/0 10.4.4.4 172.16.12.2 409600 00:09:59
Ethernet0/1 10.4.4.4 172.16.24.4 0 00:09:59
Ethernet0/2 10.4.4.4 172.16.23.2 409600 00:09:59
R2#
R3#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/2 10.4.4.4 192.168.0.3 435200 00:02:50
Ethernet0/0 10.4.4.4 172.16.23.2 409600 00:02:51
R3#

Figure 2 depicts the elected DFs according to the output of “show ip pim interface df” on R1, R2 and R3.

Figure 2: elected DF on each segment

R1#sh ip pim rp
Group: 239.255.1.1, RP: 10.4.4.4, v2, v1, uptime 00:12:35, expires 00:02:17
R1#
R2#sh ip pim rp
Group: 239.255.1.1, RP: 10.4.4.4, v2, v1, uptime 00:12:56, expires 00:02:57
R2#
R3#sh ip pim rp
Group: 239.255.1.1, RP: 10.4.4.4, v2, v1, uptime 00:03:09, expires 00:02:48
R3#

All Bidir-PIM routers have dynamically learned the RP address through Auto-RP.

R3#mtrace 10.10.10.1 192.168.0.2 239.255.1.1
Type escape sequence to abort.
Mtrace from 10.10.10.1 to 192.168.0.2 via group 239.255.1.1
From source (?) to destination (?)
Querying full reverse path...
 0  192.168.0.2
-1  192.168.0.3 PIM [10.10.10.0/24]
-2  172.16.23.2 PIM [10.10.10.0/24]
-3  172.16.24.4 PIM Reached RP/Core [10.10.10.0/24]
R3#

Note that, as in any shared tree, the path from the receiver leads back up to the RP, which is the root of the RPT.

R3#sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.1.1), 00:13:53/00:02:54, RP 10.4.4.4, flags: BC
  Bidir-Upstream: Ethernet0/0, RPF nbr 172.16.23.2
  Outgoing interface list:
    Ethernet0/2, Forward/Sparse-Dense, 00:08:48/00:02:01
    Ethernet0/0, Bidir-Upstream/Sparse-Dense, 00:08:48/00:00:00

(*, 224.0.1.39), 00:08:48/stopped, RP 0.0.0.0, flags: D
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/0, Forward/Sparse-Dense, 00:08:48/00:00:00

(10.4.4.4, 224.0.1.39), 00:00:48/00:02:11, flags: PT
  Incoming interface: Ethernet0/0, RPF nbr 172.16.23.2
  Outgoing interface list: Null

(*, 224.0.1.40), 00:18:41/stopped, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/0, Forward/Sparse-Dense, 00:09:24/00:00:00
    Ethernet0/2, Forward/Sparse-Dense, 00:18:42/00:00:00

(10.4.4.40, 224.0.1.40), 00:08:49/00:02:14, flags: LT
  Incoming interface: Ethernet0/0, RPF nbr 172.16.23.2
  Outgoing interface list:
    Ethernet0/2, Forward/Sparse-Dense, 00:08:49/00:00:00

R3#

From the multicast routing table, note that (*, 239.255.1.1) is the shared tree used by R3 to receive traffic for group 239.255.1.1; there is no (10.10.10.1, 239.255.1.1) entry for the bidirectional group. All the remaining entries relate to the Auto-RP service groups 224.0.1.39 and 224.0.1.40.

R1:

R1#mtrace 10.10.10.1 192.168.0.2 239.255.1.1
Type escape sequence to abort.
Mtrace from 10.10.10.1 to 192.168.0.2 via group 239.255.1.1
From source (?) to destination (?)
Querying full reverse path...
 0  192.168.0.2
-1  172.16.23.3 PIM [10.10.10.0/24]
-2  172.16.23.2 PIM [10.10.10.0/24]
-3  172.16.24.4 PIM Reached RP/Core [10.10.10.0/24]
R1#

The result is the same from the point of view of both the source-side and receiver-side PIM routers: they both joined a shared tree that forwards traffic in both directions, upstream and downstream, hence “Bidirectional” (figure 3).

Figure 3: Shared trees (RPT)

R1#mrinfo
10.10.10.2 [version 12.3] [flags: PMSA]:
10.10.10.2 -> 0.0.0.0 [1/0/pim/querier/leaf]
172.16.12.1 -> 172.16.12.2 [1/0/pim]
R1#
R2#mrinfo
172.16.12.2 [version 12.3] [flags: PMSA]:
172.16.12.2 -> 172.16.12.1 [1/0/pim/querier]
172.16.24.2 -> 172.16.24.4 [1/0/pim]
172.16.23.2 -> 172.16.23.3 [1/0/pim]
R2#
R3#mrinfo
172.16.23.3 [version 12.3] [flags: PMSA]:
192.168.0.3 -> 0.0.0.0 [1/0/pim/querier/leaf]
172.16.23.3 -> 172.16.23.2 [1/0/pim/querier]
R3#

The “mrinfo” command is a good indicator of how PIM has located the routers directly attached to the source and receiver end points, as well as the forwarding routers in between.

Add unidirectional PIM-SM group multicast traffic along with Bidirectional PIM traffic

Any configuration statement about which group uses which PIM type has to be set on the RP:

A distinct loopback interface should be configured for each PIM type; in our case, Loopback0 for Bidirectional PIM, which will serve multicast group 239.255.1.1, and Loopback2 for PIM-SM, which will serve 239.255.1.2.

ip pim send-rp-announce Loopback2 scope 32 group-list UniPIM-Groups
ip pim send-rp-announce Loopback0 scope 32 group-list BidirPIM-Groups bidir
!
ip access-list standard UniPIM-Groups
 permit 239.255.1.2
!
ip access-list standard BidirPIM-Groups
 permit 239.255.1.1

The same client connected to R3 will receive both groups, 239.255.1.2 and 239.255.1.1.
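If the lab has no real end hosts, both streams can be emulated from IOS itself; a minimal sketch, assuming the receiver hangs off R3's Ethernet0/2 and the pings are issued on the source-side router:

! receiver side (R3): make the LAN interface behave as a group member
interface Ethernet0/2
 ip igmp join-group 239.255.1.1
 ip igmp join-group 239.255.1.2
! source side: generate traffic toward each group
ping 239.255.1.1 repeat 100
ping 239.255.1.2 repeat 100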

Let’s see how the RP (R4) handles the two types of PIM:

R4(config)#do sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.1.1), 00:40:57/00:02:50, RP 10.4.4.4, flags: B
  Bidir-Upstream: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/0, Forward/Sparse-Dense, 00:40:38/00:02:50

(*, 239.255.1.2), 00:07:11/00:03:25, RP 10.4.4.144, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/0, Forward/Sparse-Dense, 00:01:03/00:03:25

(10.10.10.1, 239.255.1.2), 00:07:11/00:02:00, flags: PT
  Incoming interface: Ethernet0/0, RPF nbr 172.16.24.2
  Outgoing interface list: Null

(*, 224.0.1.39), 01:08:28/stopped, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Loopback2, Forward/Sparse-Dense, 00:09:40/00:00:00
    Ethernet0/0, Forward/Sparse-Dense, 01:08:28/00:00:00
    Loopback1, Forward/Sparse-Dense, 01:08:48/00:00:00
    Ethernet0/1, Forward/Sparse-Dense, 01:08:48/00:00:00
    Loopback0, Forward/Sparse-Dense, 01:08:48/00:00:00

(10.4.4.4, 224.0.1.39), 01:07:47/00:02:25, flags: LT
  Incoming interface: Loopback0, RPF nbr 0.0.0.0
  Outgoing interface list:
    Loopback2, Forward/Sparse-Dense, 00:10:00/00:00:00
    Ethernet0/1, Forward/Sparse-Dense, 01:07:47/00:00:00
    Loopback1, Forward/Sparse-Dense, 01:07:47/00:00:00
    Ethernet0/0, Forward/Sparse-Dense, 01:07:47/00:00:00

(10.4.4.144, 224.0.1.39), 00:09:53/00:02:07, flags: LT
  Incoming interface: Loopback2, RPF nbr 0.0.0.0
  Outgoing interface list:
    Loopback0, Forward/Sparse-Dense, 00:09:53/00:00:00
    Ethernet0/1, Forward/Sparse-Dense, 00:09:53/00:00:00
    Loopback1, Forward/Sparse-Dense, 00:09:53/00:00:00
    Ethernet0/0, Forward/Sparse-Dense, 00:09:53/00:00:00

(*, 224.0.1.40), 01:08:52/stopped, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/0, Forward/Sparse-Dense, 01:08:38/00:00:00
    Loopback1, Forward/Sparse-Dense, 01:08:56/00:00:00
    Ethernet0/1, Forward/Sparse-Dense, 01:08:58/00:00:00

(10.4.4.40, 224.0.1.40), 01:07:57/00:02:55, flags: LT
  Incoming interface: Loopback1, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/1, Forward/Sparse-Dense, 01:07:57/00:00:00
    Ethernet0/0, Forward/Sparse-Dense, 01:07:57/00:00:00

R4(config)#

(*, 239.255.1.1) is the shared tree built by Bidirectional PIM (flag B), rooted at the RP itself (Bidir-Upstream: Null); “bidirectional” means that 239.255.1.1 traffic can be sent upstream to the RP and downstream from it at the same time.

(*, 239.255.1.2) is the shared tree built by unidirectional PIM sparse mode (flag S), also rooted at the RP (incoming interface Null); 239.255.1.2 traffic is sent only downstream, from the RP toward the receivers.

(10.10.10.1, 239.255.1.2): because 239.255.1.2 is handled by unidirectional PIM-SM, the source's PIM router registers with the RP, and a source-based tree (S,G) is built through which the source sends its multicast traffic ONLY downstream toward the RP. Note that the entry is flagged “PT”, meaning PIM-SM has switched over to the SPT and traffic is no longer forwarded through the RP, because there is a better path between the source and the destination.
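This switchover exists only for the sparse-mode group. On IOS it is controlled by the spt-threshold command; a sketch of how one would pin PIM-SM traffic to the shared tree instead (optional, not used in this lab):

ip pim spt-threshold infinity
! never switch to the SPT: PIM-SM traffic stays on the RPT
! (irrelevant to bidir groups, which have no SPT in the first place)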

R4#mtrace 10.10.10.1 192.168.0.2 239.255.1.2
Type escape sequence to abort.
Mtrace from 10.10.10.1 to 192.168.0.2 via group 239.255.1.2
From source (?) to destination (?)
Querying full reverse path...
 0  192.168.0.2
-1  172.16.23.3 PIM [10.10.10.0/24]
-2  172.16.23.2 PIM [10.10.10.0/24]
-3  172.16.12.1 PIM [10.10.10.0/24]
-4  10.10.10.1
R4#

Multicast traffic for group 239.255.1.2 flows from the source to the destination along the source tree, forwarded by the PIM-SM routers, which confirms that PIM-SM is a source-based protocol.

R4#mtrace 10.10.10.1 192.168.0.2 239.255.1.1
Type escape sequence to abort.
Mtrace from 10.10.10.1 to 192.168.0.2 via group 239.255.1.1
From source (?) to destination (?)
Querying full reverse path...
 0  192.168.0.2
-1  172.16.23.3 PIM [10.10.10.0/24]
-2  172.16.23.2 PIM [10.10.10.0/24]
-3  172.16.24.4 PIM Reached RP/Core [10.10.10.0/24]
R4#

Multicast traffic for group 239.255.1.1, in contrast, still flows through the RP, because only a shared tree is ever built for a bidirectional group; this illustrates the mechanism of Bidirectional PIM perfectly.

Multicast another group, 239.255.1.3, not assigned in either PIM access list

R4#sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.1.1), 01:09:26/00:02:52, RP 10.4.4.4, flags: B
  Bidir-Upstream: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/0, Forward/Sparse-Dense, 01:09:07/00:02:52

(*, 239.255.1.2), 00:35:41/00:02:31, RP 10.4.4.144, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/0, Forward/Sparse-Dense, 00:29:32/00:02:31

(10.10.10.1, 239.255.1.2), 00:35:41/00:01:14, flags: PT
  Incoming interface: Ethernet0/0, RPF nbr 172.16.24.2
  Outgoing interface list: Null

(*, 239.255.1.3), 00:02:58/stopped, RP 0.0.0.0, flags: D
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/1, Forward/Sparse-Dense, 00:02:58/00:00:00
    Ethernet0/0, Forward/Sparse-Dense, 00:02:58/00:00:00

(10.10.10.1, 239.255.1.3), 00:02:58/00:00:01, flags: PT
  Incoming interface: Ethernet0/0, RPF nbr 172.16.24.2
  Outgoing interface list:
    Ethernet0/1, Prune/Sparse-Dense, 00:02:58/00:00:01

(*, 224.0.1.39), 01:36:58/stopped, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Loopback2, Forward/Sparse-Dense, 00:38:10/00:00:00
    Ethernet0/0, Forward/Sparse-Dense, 01:36:58/00:00:00
    Loopback1, Forward/Sparse-Dense, 01:36:58/00:00:00
    Ethernet0/1, Forward/Sparse-Dense, 01:36:58/00:00:00
    Loopback0, Forward/Sparse-Dense, 01:36:58/00:00:00

(10.4.4.4, 224.0.1.39), 01:35:57/00:02:16, flags: LT
  Incoming interface: Loopback0, RPF nbr 0.0.0.0
  Outgoing interface list:
    Loopback2, Forward/Sparse-Dense, 00:38:10/00:00:00
    Ethernet0/1, Forward/Sparse-Dense, 01:35:57/00:00:00
    Loopback1, Forward/Sparse-Dense, 01:35:57/00:00:00
    Ethernet0/0, Forward/Sparse-Dense, 01:35:57/00:00:00

(10.4.4.144, 224.0.1.39), 00:38:03/00:02:56, flags: LT
  Incoming interface: Loopback2, RPF nbr 0.0.0.0
  Outgoing interface list:
    Loopback0, Forward/Sparse-Dense, 00:38:03/00:00:00
    Ethernet0/1, Forward/Sparse-Dense, 00:38:03/00:00:00
    Loopback1, Forward/Sparse-Dense, 00:38:03/00:00:00
    Ethernet0/0, Forward/Sparse-Dense, 00:38:03/00:00:00

(*, 224.0.1.40), 01:37:03/00:02:56, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/0, Forward/Sparse-Dense, 01:36:40/00:00:00
    Loopback1, Forward/Sparse-Dense, 01:36:58/00:00:00
    Ethernet0/1, Forward/Sparse-Dense, 01:37:01/00:00:00

R4#

Note that (*, 239.255.1.3) is flagged “D”: a group that is not matched by either the PIM-SM or the Bidir-PIM group list falls back to dense mode, because the interfaces are configured in sparse-dense mode.
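If this dense-mode fallback is undesirable, a common alternative, assuming an IOS release that supports these commands, is to run pure sparse mode with an Auto-RP listener and disable the fallback:

ip pim autorp listener    ! flood only the Auto-RP groups 224.0.1.39/40 in dense mode
no ip pim dm-fallback     ! groups without a matching RP do not fall back to dense mode
!
interface Ethernet0/0
 ip pim sparse-mode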

Conclusion

– Bidir-PIM is more efficient and more scalable than PIM-SM when there is a large number of multicast sources.

– A DF (Designated Forwarder) is elected on each segment and point-to-point link to establish a loop-free shared tree rooted at the RP.

– (S,G) Join messages are dropped on routers that support only Bidir-PIM.

– The RP never joins a path back to the source (there is no (S,G) tree) and never sends any Register-Stop.

– Bidir-PIM is capable of sending and receiving multicast traffic for the same group along the same shared tree.


PIM Assert message


The following topology (figure 1) is used to inspect ASSERT messages: the PIM mechanism for electing the router responsible for forwarding multicast traffic onto a multi-access environment (LAN) where multiple possible gateway routers coexist.

Figure 1: topology

The multicast source is sending traffic, and the client is already listening and receiving the multicast group 239.255.1.1 through R4.

The issue is that once PIM is enabled on both the R3 and R4 interfaces facing the LAN, only one of them must take responsibility for forwarding the multicast traffic of the SPT, which is a separate tree per (S,G) couple, in our case (10.10.10.1, 239.255.1.1).

PIM handles this by organizing an “election” between the two routers; the election criterion is the best path back to the source of the multicast traffic.

Let’s take a look first at the routing table of both R3 and R4:

R3:

R3(config-if)#do sh ip route
     1.0.0.0/24 is subnetted, 1 subnets
C       1.1.1.0 is directly connected, Serial1/0
     2.0.0.0/24 is subnetted, 1 subnets
D       2.2.2.0 [90/30720] via 192.168.40.2, 02:36:12, FastEthernet0/0
C    192.168.40.0/24 is directly connected, FastEthernet0/0
     10.0.0.0/24 is subnetted, 1 subnets
D       10.10.10.0 [90/33280] via 192.168.40.2, 02:36:12, FastEthernet0/0
R3(config-if)#

R4:

R4(config-if)#do sh ip route
     1.0.0.0/24 is subnetted, 1 subnets
D       1.1.1.0 [90/2172416] via 192.168.40.1, 02:36:33, FastEthernet0/0
                [90/2172416] via 2.2.2.1, 02:36:33, FastEthernet1/0
     2.0.0.0/24 is subnetted, 1 subnets
C       2.2.2.0 is directly connected, FastEthernet1/0
C    192.168.40.0/24 is directly connected, FastEthernet0/0
     10.0.0.0/24 is subnetted, 1 subnets
D       10.10.10.0 [90/30720] via 2.2.2.1, 02:36:32, FastEthernet1/0
R4(config-if)#

You can easily see that R4 has a better metric than R3 for the route to 10.10.10.0, 30720 against 33280; this is because R4 is connected to R1 through a FastEthernet link, as opposed to the point-to-point serial link between R3 and R1.
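As a side note, not part of the original lab write-up, these values follow from the classic EIGRP composite metric with default K-values, where bandwidth is the minimum along the path in kbit/s and delay is the path total in microseconds:

metric = 256 * ( 10^7 / min_bandwidth_kbps + total_delay_usec / 10 )

R4, two FastEthernet hops:   256 * (10^7/100000 + 200/10)  = 256 * 120  = 30720
R3, one extra FE hop via R4: 256 * (10^7/100000 + 300/10)  = 256 * 130  = 33280
Serial path through R1:      256 * (10^7/1544 + 20100/10)  = 256 * 8486 = 2172416

The serial figure is the one that reappears later in the assert debug as “our metric [90/2172416]”.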

We deliberately shut down R4's fa0/0 interface; once the down state of R4 fa0/0 was confirmed (hold timer expired), R3's fa0/0 became responsible for forwarding the multicast traffic and managing IGMP on the LAN:

R3(config-if)#ip pim dense
R3(config-if)#

*Mar 1 03:19:20.475: %PIM-5-NBRCHG: neighbor 192.168.40.2 UP on interface FastEthernet0/0 (vrf default)
*Mar 1 03:19:20.503: %PIM-5-DRCHG: DR change from neighbor 0.0.0.0 to 192.168.40.2 on interface FastEthernet0/0 (vrf default)
*Mar 1 03:21:32.791: %DUAL-5-NBRCHANGE: IP-EIGRP(0) 10: Neighbor 192.168.40.2 (FastEthernet0/0) is down: holding time expired
R3(config-if)#
*Mar 1 03:22:34.379: %PIM-5-NBRCHG: neighbor 192.168.40.2 DOWN on interface FastEthernet0/0 (vrf default) DR
*Mar 1 03:22:34.383: %PIM-5-DRCHG: DR change from neighbor 192.168.40.2 to 192.168.40.1 on interface FastEthernet0/0 (vrf default)
R3(config-if)#

We then brought fa0/0 on R4 back up; R3 immediately received an ASSERT message advertising better metrics (administrative distance and route cost) for the route to the multicast source 10.10.10.1. Consequently R3 lost the election, put its fa0/0 into prune state, sent a prune message to the upstream router R1, and stopped receiving the multicast traffic.

R3(config-if)#
*Mar 1 03:30:06.779: PIM(0): Received v2 Assert on FastEthernet0/0 from 192.168.40.2
*Mar 1 03:30:06.783: PIM(0): Assert metric to source 10.10.10.1 is [90/30720]
*Mar 1 03:30:06.783: PIM(0): We lose, our metric [90/2172416]
*Mar 1 03:30:06.787: PIM(0): Prune FastEthernet0/0/239.255.1.1 from (10.10.10.1/32, 239.255.1.1)
*Mar 1 03:30:06.791: PIM(0): Insert (10.10.10.1,239.255.1.1) prune in nbr 1.1.1.1's queue
*Mar 1 03:30:06.795: PIM(0): (10.10.10.1/32, 239.255.1.1) oif FastEthernet0/0 in Prune state
*Mar 1 03:30:06.799: PIM(0): Building Join/Prune packet for nbr 1.1.1.1
*Mar 1 03:30:06.803: PIM(0): Adding v2 (10.10.10.1/32, 239.255.1.1) Prune
*Mar 1 03:30:06.803: PIM(0): Send v2 join/prune to 1.1.1.1 (Serial1/0)
*Mar 1 03:30:07.715: %DUAL-5-NBRCHANGE: IP-EIGRP(0) 10: Neighbor 192.168.40.2 (FastEthernet0/0) is up: new adjacency
R3(config-if)#

The assert election criteria, in decreasing order of priority, are as follows:

1- Administrative distance of the route to the source S (10.10.10.1).

2- Cost of the route to S (10.10.10.1).

3- Highest multicast interface IP address.

In the debug above, both routers offer EIGRP internal routes (AD 90), so the metric decides: 30720 beats 2172416, and R4 wins.

– Let’s run the following experiment: we will change the cost of the route to 10.10.10.1 by changing the bandwidth of the upstream interface fa1/0 on R4; to force DUAL to recompute the EIGRP routing table, we also reset the EIGRP neighbor relationships.

First let’s check the multicast status of interfaces on both routers:

R3:

R3(config-if)#do sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.1.1), 00:50:00/stopped, RP 0.0.0.0, flags: DC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Dense, 00:47:41/00:00:00
    Serial1/0, Forward/Dense, 00:50:00/00:00:00

(10.10.10.1, 239.255.1.1), 00:09:46/00:02:32, flags: PT
  Incoming interface: FastEthernet0/0, RPF nbr 192.168.40.2
  Outgoing interface list:
    Serial1/0, Prune/Dense, 00:00:26/00:02:33

(*, 224.0.1.40), 01:53:58/stopped, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Dense, 00:36:55/00:00:00
    Serial1/0, Forward/Dense, 01:53:58/00:00:00

R3(config-if)#

R3 is not forwarding the multicast traffic to the LAN.

R4:

R4(config-if)#do sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.1.1), 00:49:47/stopped, RP 0.0.0.0, flags: DC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Dense, 00:36:41/00:00:00
    FastEthernet1/0, Forward/Dense, 00:49:47/00:00:00

(10.10.10.1, 239.255.1.1), 00:01:32/00:02:57, flags: T
  Incoming interface: FastEthernet1/0, RPF nbr 2.2.2.1
  Outgoing interface list:
    FastEthernet0/0, Forward/Dense, 00:00:13/00:00:00

(*, 224.0.1.40), 01:51:28/00:02:40, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Dense, 00:36:41/00:00:00
    FastEthernet1/0, Forward/Dense, 01:51:28/00:00:00

R4(config-if)#

R4 is responsible for forwarding the multicast traffic to the LAN.

Now we change the bandwidth and reset the EIGRP neighbor relationship:

R4(config-if)#int fa1/0
R4(config-if)#bandwidth 1000
R4(config-if)#do clear ip eigrp 10 neighbors
R4(config-if)#
*Mar 1 04:09:37.994: %DUAL-5-NBRCHANGE: IP-EIGRP(0) 10: Neighbor 2.2.2.1 (FastEthernet1/0) is down: manually cleared
*Mar 1 04:09:38.018: %DUAL-5-NBRCHANGE: IP-EIGRP(0) 10: Neighbor 192.168.40.1 (FastEthernet0/0) is down: manually cleared
*Mar 1 04:09:38.474: %DUAL-5-NBRCHANGE: IP-EIGRP(0) 10: Neighbor 2.2.2.1 (FastEthernet1/0) is up: new adjacency
*Mar 1 04:09:41.614: %DUAL-5-NBRCHANGE: IP-EIGRP(0) 10: Neighbor 192.168.40.1 (FastEthernet0/0) is up: new adjacency
R4(config-if)#

Let’s take a look at the new status of the interfaces:

R4:

R4(config-if)#do sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.1.1), 00:54:41/stopped, RP 0.0.0.0, flags: DC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Dense, 00:41:35/00:00:00
    FastEthernet1/0, Forward/Dense, 00:54:41/00:00:00

(10.10.10.1, 239.255.1.1), 00:06:26/00:00:44, flags: PTX
  Incoming interface: FastEthernet0/0, RPF nbr 192.168.40.1
  Outgoing interface list:
    FastEthernet1/0, Prune/Dense, 00:02:02/00:00:57

(*, 224.0.1.40), 01:56:21/00:02:46, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Dense, 00:41:35/00:00:00
    FastEthernet1/0, Forward/Dense, 01:56:21/00:00:00

R4(config-if)#

Now R4 is no longer forwarding the multicast traffic for group 239.255.1.1.

R3:

R3(config-if)#do sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.1.1), 00:55:10/stopped, RP 0.0.0.0, flags: DC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Dense, 00:52:51/00:00:00
    Serial1/0, Forward/Dense, 00:55:10/00:00:00

(10.10.10.1, 239.255.1.1), 00:02:37/00:02:53, flags: T
  Incoming interface: Serial1/0, RPF nbr 1.1.1.1
  Outgoing interface list:
    FastEthernet0/0, Forward/Dense, 00:02:36/00:00:00, A

(*, 224.0.1.40), 01:59:08/stopped, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Dense, 00:42:05/00:00:00
    Serial1/0, Forward/Dense, 01:59:08/00:00:00

R3(config-if)#

R3 is now forwarding the multicast traffic for group 239.255.1.1; note the “A” (Assert winner) flag on its fa0/0 outgoing entry.

This is confirmed by the routing tables of both R3 and R4:

R4:

R4(config-if)#do sh ip route 10.10.10.1
Routing entry for 10.10.10.0/24
  Known via "eigrp 10", distance 90, metric 2174976, type internal
  Redistributing via eigrp 10
  Last update from 192.168.40.1 on FastEthernet0/0, 00:02:07 ago
  Routing Descriptor Blocks:
  * 192.168.40.1, from 192.168.40.1, 00:02:07 ago, via FastEthernet0/0
      Route metric is 2174976, traffic share count is 1
      Total delay is 20200 microseconds, minimum bandwidth is 1544 Kbit
      Reliability 255/255, minimum MTU 1500 bytes
      Loading 1/255, Hops 2
R4(config-if)#

R3:

R3(config-if)#do sh ip route 10.10.10.1
Routing entry for 10.10.10.0/24
  Known via "eigrp 10", distance 90, metric 2172416, type internal
  Redistributing via eigrp 10
  Last update from 1.1.1.1 on Serial1/0, 00:01:37 ago
  Routing Descriptor Blocks:
  * 1.1.1.1, from 1.1.1.1, 00:01:37 ago, via Serial1/0
      Route metric is 2172416, traffic share count is 1
      Total delay is 20100 microseconds, minimum bandwidth is 1544 Kbit
      Reliability 255/255, minimum MTU 1500 bytes
      Loading 1/255, Hops 1
R3(config-if)#

CONCLUSION:

We have seen how the ASSERT message avoids contention between two gateway routers on the same LAN by holding an election with the following criteria:

  • Administrative distance of the route to the multicast source.
  • The cost of that route.
  • The highest IP address (as a tie-breaker).

PIM over multi-access, Prune override


Figure 1 illustrates the topology used to inspect the PIM prune override mechanism.

Figure 1: topology

Both Client1 and Client2 are receiving the multicast traffic, and both R3's e0/1 and R4's e0/1 interfaces are in PIM dense forwarding mode, as shown here:

R3:

R3#sh ip mroute
*Mar 1 00:11:30.911: %SYS-5-CONFIG_I: Configured from console by admin on console
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.1.1), 00:11:16/stopped, RP 0.0.0.0, flags: DC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/1, Forward/Dense, 00:11:16/00:00:00
    Ethernet0/0, Forward/Dense, 00:11:16/00:00:00

(10.10.10.1, 239.255.1.1), 00:08:08/00:02:52, flags: T
  Incoming interface: Ethernet0/1, RPF nbr 192.168.41.2
  Outgoing interface list:
    Ethernet0/0, Forward/Dense, 00:08:08/00:00:00

(*, 224.0.1.40), 00:11:20/00:02:32, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/1, Forward/Dense, 00:11:18/00:00:00
    Ethernet0/0, Forward/Dense, 00:11:20/00:00:00

R3#

R4:

R4#sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.1.1), 00:10:48/stopped, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/0, Forward/Dense, 00:06:46/00:00:00
    Ethernet0/1, Forward/Dense, 00:10:48/00:00:00

(10.10.10.1, 239.255.1.1), 00:07:34/00:02:59, flags: LT
  Incoming interface: Ethernet0/1, RPF nbr 192.168.41.2
  Outgoing interface list:
    Ethernet0/0, Forward/Dense, 00:06:46/00:00:00

(*, 224.0.1.40), 00:10:49/00:02:00, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/1, Forward/Dense, 00:10:43/00:00:00
    Ethernet0/0, Forward/Dense, 00:10:49/00:00:00

(*, 239.255.255.250), 00:10:46/00:02:09, RP 0.0.0.0, flags: DC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/1, Forward/Dense, 00:10:45/00:00:00
    Ethernet0/0, Forward/Dense, 00:10:48/00:00:00

R4#

Now suppose that one of the clients, Client1 for example, no longer wants to receive the multicast traffic; to inspect PIM's behavior we turned on debugging of PIM messages on R3, R4 and R2:
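The debugging used below is the standard PIM debug:

debug ip pim    ! prints the Join/Prune processing shown in the following captures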

R3#
*Mar 1 00:24:29.487: PIM(0): Insert (10.10.10.1,239.255.1.1) prune in nbr 192.168.41.2's queue
*Mar 1 00:24:29.495: PIM(0): Building Join/Prune packet for nbr 192.168.41.2
*Mar 1 00:24:29.499: PIM(0): Adding v2 (10.10.10.1/32, 239.255.1.1) Prune
*Mar 1 00:24:29.503: PIM(0): Send v2 join/prune to 192.168.41.2 (Ethernet0/1)
*Mar 1 00:24:32.479: PIM(0): Received v2 Join/Prune on Ethernet0/1 from 192.168.41.4, not to us
*Mar 1 00:24:32.483: PIM(0): Join-list: (10.10.10.1/32, 239.255.1.1)
R3#

R3 immediately builds a prune for (10.10.10.1, 239.255.1.1) and sends it out of e0/1 to the upstream router R2, removing itself from the SPT; the entry is now flagged PT.

The last two messages are R4's reaction to this prune: R4 sent a join message to R2 (overheard here by R3) stating that it still wants to receive the multicast traffic.

R3#sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.1.1), 00:26:20/stopped, RP 0.0.0.0, flags: DC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/1, Forward/Dense, 00:26:20/00:00:00

(10.10.10.1, 239.255.1.1), 00:23:12/00:00:50, flags: PT
  Incoming interface: Ethernet0/1, RPF nbr 192.168.41.2
  Outgoing interface list: Null

(*, 224.0.1.40), 00:26:23/00:02:35, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/1, Forward/Dense, 00:26:21/00:00:00
    Ethernet0/0, Forward/Dense, 00:26:23/00:00:00

R3#

So from R3's point of view the work is done; but the network connecting R2, R3 and R4 is multi-access (Ethernet), so R2, the upstream router, could stop forwarding the multicast traffic out of its e0/1 interface altogether and cut off R4 and its client, Client2.

From R4's point of view:

R4#
*Mar 1 00:24:33.315: PIM(0): Received v2 Join/Prune on Ethernet0/1 from 192.168.41.3, not to us
*Mar 1 00:24:33.319: PIM(0): Prune-list: (10.10.10.1/32, 239.255.1.1)
*Mar 1 00:24:33.323: PIM(0): Set join delay timer to 3000 msec for (10.10.10.1/32, 239.255.1.1) on Ethernet0/1
*Mar 1 00:24:36.231: PIM(0): Insert (10.10.10.1,239.255.1.1) join in nbr 192.168.41.2's queue
*Mar 1 00:24:36.235: PIM(0): Building Join/Prune packet for nbr 192.168.41.2
*Mar 1 00:24:36.239: PIM(0): Adding v2 (10.10.10.1/32, 239.255.1.1) Join
*Mar 1 00:24:36.243: PIM(0): Send v2 join/prune to 192.168.41.2 (Ethernet0/1)
R4#

R4 has overheard the prune that R3 sent to R2 (figure 2), as indicated by “not to us”, which is possible because the segment is Ethernet. It knows that if it does not react within the 3-second join delay timer (the prune override interval), R2 will stop sending the multicast traffic, so it sends a so-called prune override (figure 3), which is simply a PIM join sent to R2 stating that it still wants to be joined to the SPT (10.10.10.1, 239.255.1.1).

So R2 received a prune from R3 and a join from R4, and updated the forwarding state of e0/1:

R2#
*Mar 1 00:24:21.223: PIM(0): Received v2 Join/Prune on Ethernet0/1 from 192.168.41.3, to us
*Mar 1 00:24:21.227: PIM(0): Prune-list: (10.10.10.1/32, 239.255.1.1)
*Mar 1 00:24:24.203: PIM(0): Received v2 Join/Prune on Ethernet0/1 from 192.168.41.4, to us
*Mar 1 00:24:24.207: PIM(0): Join-list: (10.10.10.1/32, 239.255.1.1)
*Mar 1 00:24:24.211: PIM(0): Update Ethernet0/1/192.168.41.4 to (10.10.10.1, 239.255.1.1), Forward state, by PIM SG Join
R2#

Figure 2: prune from R3 to R2

Figure 3: prune override from R4 to R2

As a result, multicast forwarding keeps flowing across the multi-access segment between R2 and R4.

R2#mstat 10.10.10.1 192.168.44.4 239.255.1.1
Type escape sequence to abort.
Mtrace from 10.10.10.1 to 192.168.44.4 via group 239.255.1.1
From source (?) to destination (?)
Waiting to accumulate statistics......
Results after 10 seconds:

  Source          Response Dest   Packet Statistics For     Only For Traffic
  10.10.10.1      1.1.1.2         All Multicast Traffic     From 10.10.10.1
     |       __/  rtt 136 ms      Lost/Sent = Pct  Rate     To 239.255.1.1
     v      /     hop -11 s       ---------------------     --------------------
  10.10.10.4
  1.1.1.1         ?
     |     ^      ttl   0
     v     |      hop  11 s       -7/92 = --%   9 pps       -4/92 = --%  9 pps
  1.1.1.2
  192.168.41.2    ?
     |     ^      ttl   1
     v     |      hop -11 s       -6/96 = --%   9 pps        0/96 = 0%   9 pps
  192.168.41.4    ?
     |      \__   ttl   2
     v         \  hop  11 s            0        0 pps           96       9 pps
  192.168.44.4    1.1.1.2
  Receiver        Query Source

R2#

Here are the configuration commands needed for this lab:

R1:

ip multicast-routing
!
interface Ethernet0/0
 ip address 10.10.10.4 255.255.255.0
 ip pim dense-mode
!
interface Ethernet0/1
 ip address 1.1.1.1 255.255.255.0
 ip pim dense-mode
!
router eigrp 10
 network 1.1.1.0 0.0.0.255
 network 10.10.10.0 0.0.0.255
 no auto-summary

R2:

ip multicast-routing
!
interface Ethernet0/0
 ip address 1.1.1.2 255.255.255.0
 ip pim dense-mode
!
interface Ethernet0/1
 ip address 192.168.41.2 255.255.255.0
 ip pim dense-mode
!
router eigrp 10
 network 1.1.1.0 0.0.0.255
 network 192.168.41.0
 no auto-summary

R3:

ip multicast-routing
!
interface Ethernet0/0
 ip address 192.168.40.1 255.255.255.0
 ip pim dense-mode
!
interface Ethernet0/1
 ip address 192.168.41.3 255.255.255.0
 ip pim dense-mode
!
router eigrp 10
 network 192.168.40.0
 network 192.168.41.0
 no auto-summary

R4:

ip multicast-routing
!
interface Ethernet0/0
 ip address 192.168.44.44 255.255.255.0
 ip pim dense-mode
!
interface Ethernet0/1
 ip address 192.168.41.4 255.255.255.0
 ip pim dense-mode
!
router eigrp 10
 network 192.168.41.0
 network 192.168.44.0
 no auto-summary

PIM Sparse Mode with Auto-RP


The following flash animation shows the process of PIM Sparse Mode with automatic Rendezvous Point (Auto-RP):

Click the figure to open the swf animation.

IGMPv2