Bidirectional PIM (Bidir-PIM)


Overview

PIM-SM relies on an RP (Rendezvous Point) to manage the forwarding of a group's multicast traffic along a shared tree (*,G), the RPT, downstream from the RP to the receivers, and a Shortest Path Tree (SPT) between the RP and the PIM-SM router to which the multicast source is connected. The number of these SPTs grows with the number of multicast sources, which makes PIM-SM inefficient for networks with many sources.

Bidirectional PIM behaves a little differently from PIM-SM, particularly regarding the SPT: there are simply no source-based trees. Instead, the RP builds a shared tree through which source-side routers forward traffic upstream toward the RP, while at the same time the RP uses that same shared tree to send the received multicast traffic downstream to the receivers. This allows Bidir-PIM to scale to any number of sources without additional state overhead.

Because there is no SPT, there can be no SPT switchover, and traffic forwarding always passes through the RP.

Consequently, Bidir-PIM routers do not perform the RPF (Reverse Path Forwarding) check. As a reminder, in a network with redundant links RPF is used to avoid loops by verifying that multicast traffic is received on the interface from which the multicast source is reachable, and by preventing traffic from being sent back out the interface on which it arrived.

Instead, Bidir-PIM uses the concept of an elected Designated Forwarder (DF) to establish a loop-free shared tree rooted at the RP.

– On each point-to-point link and on every network segment, one DF is elected for every RP of a bidirectional group; the DF is responsible for forwarding multicast traffic received on that network.

– The DF election is based on the best unicast routing metric of the path to the RP, and it considers only one path (there are no Assert messages). The order of preference is as follows:

– Lowest administrative distance.

– Lowest (best) metric.

– And then the highest IP address.

This way, multiple routers can be elected DF on a segment (one for each RP), and one router can be elected DF on more than one interface.
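To make the election order concrete, here is a small worked example with hypothetical routers and values (not taken from the lab topology):

Router A: route to RP 10.4.4.4 via EIGRP, AD 90, metric 409600
Router B: route to RP 10.4.4.4 via OSPF, AD 110, metric 20
=> Router A wins the DF election: its administrative distance is lower, so the metric (and, on a further tie, the IP address) is never compared.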

Both Bidir-PIM and the traditional PIM-SM can perfectly coexist in the same PIM domain using the same RPs.

Figure 1 illustrates the network topology used to configure and analyse Bidirectional PIM.

Figure 1 Network Topology

topology1
The Lab is organized as follows:

Overview

Configuration

  • Traffic downstream along the shared tree
  • Traffic upstream along the shared tree
  • Add unidirectional PIM-SM group multicast traffic along with Bidirectional PIM traffic
  • Multicast another group 239.255.1.3 not assigned in either PIM access-list

Configuration

1) Enable multicast routing

ip multicast-routing

2) Enable Bidirectional PIM

ip pim bidir-enable

3) Enable PIM (sparse-dense mode) on all interfaces that can potentially handle multicast traffic

interface <int>
ip pim sparse-dense-mode

4) Configure the routing protocol and make sure that connectivity is successful and that any loopback interfaces used are advertised and reachable.

5) Depending on your network, you can configure the RP and the mapping agent on different routers or on the same router. As in PIM-SM, the center of the multicast network is the RP, which has to be configured as a bidirectional RP by adding the keyword “bidir”:

ip pim send-rp-announce <Loopback_int> scope <TTL> {group-list <ACL>} bidir

Do not use the same loopback interface for both the RP and the mapping agent; PIM routers expect different IP addresses for the two functions.

ip pim send-rp-discovery scope <TTL> {<Loopback_int>}
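Putting steps 1) to 5) together, a minimal configuration for a router acting as both RP and mapping agent could look like the following sketch (the interface names and scope value are assumptions to adapt to your topology):

ip multicast-routing
ip pim bidir-enable
!
interface Loopback0
 ip pim sparse-dense-mode
!
interface Loopback1
 ip pim sparse-dense-mode
!
ip pim send-rp-announce Loopback0 scope 32 bidir
ip pim send-rp-discovery Loopback1 scope 32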

Traffic downstream along the shared tree

In this case the traffic flow is exactly the same as with PIM-SM, because of the placement of the client in the topology: it is not on the path between the source and the RP.

The configuration is exactly the same as mentioned earlier, with the RP configured as follows:

ip multicast-routing
ip pim bidir-enable
ip pim send-rp-announce Loopback0 scope 32 bidir
ip pim send-rp-discovery scope 32

R4 (RP & mapping agent):

R4#
*Mar 1 03:36:59.263: Auto-RP(0): Build RP-Discovery packet
*Mar 1 03:36:59.267: Auto-RP: Build mapping (224.0.0.0/4[bidir], RP:10.4.4.4), PIMv2 v1,
*Mar 1 03:36:59.271: Auto-RP(0): Send RP-discovery packet on Ethernet0/0 (1 RP entries)
*Mar 1 03:36:59.275: Auto-RP(0): Send RP-discovery packet on Ethernet0/1 (1 RP entries)
R4#

Router R4 builds the RP-to-group mapping with itself as the RP and sends it out all interfaces toward R2 and R5.

R2#mrinfo
172.16.12.2 [version 12.3] [flags: PMSA]:
172.16.12.2 -> 172.16.12.1 [1/0/pim/querier]
172.16.24.2 -> 172.16.24.4 [1/0/pim]
172.16.23.2 -> 0.0.0.0 [1/0/pim/querier/leaf]
R2#
R4#mrinfo
172.16.24.4 [version 12.3] [flags: PMSA]:
172.16.24.4 -> 172.16.24.2 [1/0/pim/querier]
172.16.45.4 -> 172.16.45.5 [1/0/pim]
10.4.4.4 -> 0.0.0.0 [1/0/pim/querier/leaf]
R4#
R5#mrinfo
172.16.45.5 [version 12.3] [flags: PMSA]:
192.168.40.5 -> 0.0.0.0 [1/0/pim/querier/leaf]
172.16.45.5 -> 172.16.45.4 [1/0/pim/querier]
R5#

“mrinfo” gives information about PIM neighbor connectivity.

Because PIM is enabled on the loopback interface, the loopback shows up as a leaf end-point connected to the router.

R2:

*Mar 1 03:25:50.555: Auto-RP(0): Received RP-discovery, from 172.16.24.4 , RP_cnt 1, ht 181
*Mar 1 03:25:50.559: Auto-RP(0): Update (224.0.0.0/4, RP:10.4.4.4), PIMv2 v1, bidi
R2#
R2#sh ip pim rp
Group: 239.255.1.1, RP: 10.4.4.4, v2, v1, uptime 00:02:23, expires 00:01:59
R2#

R5:

*Mar 1 03:25:52.223: Auto-RP(0): Received RP-discovery, from 172.16.45.4 , RP_cnt 1, ht 181
*Mar 1 03:25:52.227: Auto-RP(0): Update (224.0.0.0/4, RP:10.4.4.4), PIMv2 v1, bidir
R5#
R5#sh ip pim rp
Group: 239.255.1.1, RP: 10.4.4.4, v2, v1, uptime 00:02:13, expires 00:02:09
R5#

All PIM routers have already joined 224.0.1.39 and 224.0.1.40, received the RP-Discovery from the RP (R4), and populated the RP-to-group record.

In our case the mapping agent is not configured with a particular loopback interface, so it uses the IP address of the outgoing interface; that is why R2 and R5 see different MA IP addresses.
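If a deterministic mapping-agent address is preferred, a dedicated loopback can be specified in the command's optional interface argument (Loopback1 here is an assumption; any PIM-enabled, IGP-advertised loopback would do):

ip pim send-rp-discovery Loopback1 scope 32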

R5#mtrace 10.10.10.1 192.168.40.104 239.255.1.1
Type escape sequence to abort.
Mtrace from 10.10.10.1 to 192.168.40.104 via group 239.255.1.1
From source (?) to destination (?)
Querying full reverse path…
0 192.168.40.104
-1 192.168.40.5 PIM [10.10.10.0/24]
-2 172.16.45.4 PIM Reached RP/Core [10.10.10.0/24]

R5#

The result of “mtrace” shows the shared tree built with the RP (R4) as root.

R5#sh ip mroute
IP Multicast Routing Table
Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,
L – Local, P – Pruned, R – RP-bit set, F – Register flag,
T – SPT-bit set, J – Join SPT, M – MSDP created entry,
X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,
U – URD, I – Received Source Specific Host Report,
Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.1.1), 02:16:30/00:02:54, RP 10.4.4.4, flags: BC

Bidir-Upstream: Ethernet0/0, RPF nbr 172.16.45.4

Outgoing interface list:

Ethernet0/1, Forward/Sparse-Dense, 00:21:22/00:02:25

Ethernet0/0, Bidir-Upstream/Sparse-Dense, 00:21:22/00:00:00

(*, 224.0.1.39), 02:14:36/stopped, RP 0.0.0.0, flags: DC

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 02:14:36/00:00:00

(10.4.4.4, 224.0.1.39), 00:01:36/00:01:23, flags: PTX

Incoming interface: Ethernet0/0, RPF nbr 172.16.45.4

Outgoing interface list: Null

(*, 224.0.1.40), 02:16:31/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 02:16:22/00:00:00

Ethernet0/1, Forward/Sparse-Dense, 02:16:32/00:00:00

(172.16.45.4, 224.0.1.40), 00:20:58/00:02:54, flags: LT

Incoming interface: Ethernet0/0, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/1, Forward/Sparse-Dense, 00:20:58/00:00:00

R5#

(*, 239.255.1.1) – This shared tree is built by the PIM receiver router. Note that interface E0/0 is both the bidirectional incoming interface and an outgoing interface, which would not be possible with PIM-SM; this means that Bidir-PIM can receive multicast traffic for the group 239.255.1.1 on this interface and, at the same time, forward traffic for the group out of that same interface.

R1#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/0 10.4.4.4 10.10.10.2 435200 00:22:49
Ethernet0/1 10.4.4.4 172.16.12.2 409600 00:22:48
R1#
R2#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/0 10.4.4.4 172.16.12.2 409600 00:23:26
Ethernet0/1 10.4.4.4 172.16.24.4 0 00:23:27
Ethernet0/2 10.4.4.4 172.16.23.2 409600 00:23:26
R2#
R4#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/0 10.4.4.4 172.16.24.4 0 02:16:39
Ethernet0/1 10.4.4.4 172.16.45.4 0 01:00:05
Loopback0 10.4.4.4 10.4.4.4 0 02:16:39
R4#
R5#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/1 10.4.4.4 192.168.40.5 409600 00:24:41
Ethernet0/0 10.4.4.4 172.16.45.4 0 00:24:41
R5#

Figure 2 depicts the elected DF according to the output of “show ip pim interface df” on R1, R2, R4 and R5.

Figure 2 elected DF on each segment:


Traffic upstream along the shared tree
In this case a client connected to R3 requests the multicast traffic; R3 attaches to R2, which sits on the path between the source and the RP (R2-R4).

R1#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/0 10.4.4.4 10.10.10.2 435200 00:07:20
Ethernet0/1 10.4.4.4 172.16.12.2 409600 00:07:20
R1#
R2#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/0 10.4.4.4 172.16.12.2 409600 00:09:59
Ethernet0/1 10.4.4.4 172.16.24.4 0 00:09:59
Ethernet0/2 10.4.4.4 172.16.23.2 409600 00:09:59
R2#
R3#sh ip pim int df
Interface RP DF Winner Metric Uptime
Ethernet0/2 10.4.4.4 192.168.0.3 435200 00:02:50
Ethernet0/0 10.4.4.4 172.16.23.2 409600 00:02:51
R3#

Figure 2 depicts the elected DF according to the output of “show ip pim interface df” on R1, R2 and R3.

Figure 2 elected DF on each segment:


R1#sh ip pim rp
Group: 239.255.1.1, RP: 10.4.4.4, v2, v1, uptime 00:12:35, expires 00:02:17
R1#
R2#sh ip pim rp
Group: 239.255.1.1, RP: 10.4.4.4, v2, v1, uptime 00:12:56, expires 00:02:57
R2#
R3#sh ip pim rp
Group: 239.255.1.1, RP: 10.4.4.4, v2, v1, uptime 00:03:09, expires 00:02:48
R3#

All Bidir-PIM routers have dynamically learned the RP IP address through Auto-RP.

R3#mtrace 10.10.10.1 192.168.0.2 239.255.1.1
Type escape sequence to abort.
Mtrace from 10.10.10.1 to 192.168.0.2 via group 239.255.1.1
From source (?) to destination (?)
Querying full reverse path…
0 192.168.0.2
-1 192.168.0.3 PIM [10.10.10.0/24]
-2 172.16.23.2 PIM [10.10.10.0/24]

-3 172.16.24.4 PIM Reached RP/Core [10.10.10.0/24]

R3#

Note that, as in any shared tree, the path from the receiver goes back up to the RP, which is the root of the RPT.

R3#sh ip mroute
IP Multicast Routing Table
Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,
L – Local, P – Pruned, R – RP-bit set, F – Register flag,
T – SPT-bit set, J – Join SPT, M – MSDP created entry,
X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,
U – URD, I – Received Source Specific Host Report,
Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.1.1), 00:13:53/00:02:54, RP 10.4.4.4, flags: BC

Bidir-Upstream: Ethernet0/0, RPF nbr 172.16.23.2

Outgoing interface list:

Ethernet0/2, Forward/Sparse-Dense, 00:08:48/00:02:01

Ethernet0/0, Bidir-Upstream/Sparse-Dense, 00:08:48/00:00:00

(*, 224.0.1.39), 00:08:48/stopped, RP 0.0.0.0, flags: D

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 00:08:48/00:00:00

(10.4.4.4, 224.0.1.39), 00:00:48/00:02:11, flags: PT

Incoming interface: Ethernet0/0, RPF nbr 172.16.23.2

Outgoing interface list: Null

(*, 224.0.1.40), 00:18:41/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 00:09:24/00:00:00

Ethernet0/2, Forward/Sparse-Dense, 00:18:42/00:00:00

(10.4.4.40, 224.0.1.40), 00:08:49/00:02:14, flags: LT

Incoming interface: Ethernet0/0, RPF nbr 172.16.23.2

Outgoing interface list:

Ethernet0/2, Forward/Sparse-Dense, 00:08:49/00:00:00

R3#

From the multicast routing table you can see that (*, 239.255.1.1) is the shared tree used by R3 to receive multicast traffic for the group 239.255.1.1; there is no (10.10.10.1, 239.255.1.1) entry for the bidirectional group. All the remaining trees are related to the multicast service groups 224.0.1.39 and 224.0.1.40 (Auto-RP).

R1:

R1#mtrace 10.10.10.1 192.168.0.2 239.255.1.1
Type escape sequence to abort.
Mtrace from 10.10.10.1 to 192.168.0.2 via group 239.255.1.1
From source (?) to destination (?)
Querying full reverse path…
0 192.168.0.2
-1 172.16.23.3 PIM [10.10.10.0/24]
-2 172.16.23.2 PIM [10.10.10.0/24]

-3 172.16.24.4 PIM Reached RP/Core [10.10.10.0/24]

R1#

The result is the same from both the source and receiver PIM routers' points of view: they both joined a shared tree that forwards traffic in both directions, upstream and downstream, hence “Bidirectional” (Figure 3).

Figure 3: Shared trees (RPT)


R1#mrinfo
10.10.10.2 [version 12.3] [flags: PMSA]:
10.10.10.2 -> 0.0.0.0 [1/0/pim/querier/leaf]
172.16.12.1 -> 172.16.12.2 [1/0/pim]
R1#
R2#mrinfo
172.16.12.2 [version 12.3] [flags: PMSA]:
172.16.12.2 -> 172.16.12.1 [1/0/pim/querier]
172.16.24.2 -> 172.16.24.4 [1/0/pim]
172.16.23.2 -> 172.16.23.3 [1/0/pim]
R2#
R3#mrinfo
172.16.23.3 [version 12.3] [flags: PMSA]:
192.168.0.3 -> 0.0.0.0 [1/0/pim/querier/leaf]
172.16.23.3 -> 172.16.23.2 [1/0/pim/querier]
R3#

The command “mrinfo” is a good indicator of which PIM routers are directly connected to the source or destination end points and which are forwarding routers.

Add unidirectional PIM-SM group multicast traffic along with Bidirectional PIM traffic

Any configuration statement about which group uses which PIM type has to be set on the RP:

Two distinct loopback interfaces should be configured, one for each PIM type; in our case, Loopback0 for Bidirectional PIM, which will serve the multicast group 239.255.1.1, and Loopback2 for the PIM-SM group 239.255.1.2.

ip pim send-rp-announce Loopback2 scope 32 group-list UniPIM-Groups
ip pim send-rp-announce Loopback0 scope 32 group-list BidirPIM-Groups bidir
!
ip access-list standard UniPIM-Groups
 permit 239.255.1.2
!
ip access-list standard BidirPIM-Groups
 permit 239.255.1.1

The same client connected to R3 will receive both groups, 239.255.1.2 and 239.255.1.1.
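To check on any router which mode each group range is mapped to, the RP mappings learned through Auto-RP can be inspected; bidirectional group-to-RP mappings carry the bidir attribute, while plain sparse-mode mappings do not:

show ip pim rp mapping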

Let’s see how the RP (R4) handles the two types of PIM:

R4(config)#do sh ip mroute
IP Multicast Routing Table
Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,
L – Local, P – Pruned, R – RP-bit set, F – Register flag,
T – SPT-bit set, J – Join SPT, M – MSDP created entry,
X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,
U – URD, I – Received Source Specific Host Report,
Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.1.1), 00:40:57/00:02:50, RP 10.4.4.4, flags: B

Bidir-Upstream: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 00:40:38/00:02:50

(*, 239.255.1.2), 00:07:11/00:03:25, RP 10.4.4.144, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 00:01:03/00:03:25

(10.10.10.1, 239.255.1.2), 00:07:11/00:02:00, flags: PT

Incoming interface: Ethernet0/0, RPF nbr 172.16.24.2

Outgoing interface list: Null

(*, 224.0.1.39), 01:08:28/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Loopback2, Forward/Sparse-Dense, 00:09:40/00:00:00

Ethernet0/0, Forward/Sparse-Dense, 01:08:28/00:00:00

Loopback1, Forward/Sparse-Dense, 01:08:48/00:00:00

Ethernet0/1, Forward/Sparse-Dense, 01:08:48/00:00:00

Loopback0, Forward/Sparse-Dense, 01:08:48/00:00:00

(10.4.4.4, 224.0.1.39), 01:07:47/00:02:25, flags: LT

Incoming interface: Loopback0, RPF nbr 0.0.0.0

Outgoing interface list:

Loopback2, Forward/Sparse-Dense, 00:10:00/00:00:00

Ethernet0/1, Forward/Sparse-Dense, 01:07:47/00:00:00

Loopback1, Forward/Sparse-Dense, 01:07:47/00:00:00

Ethernet0/0, Forward/Sparse-Dense, 01:07:47/00:00:00

(10.4.4.144, 224.0.1.39), 00:09:53/00:02:07, flags: LT

Incoming interface: Loopback2, RPF nbr 0.0.0.0

Outgoing interface list:

Loopback0, Forward/Sparse-Dense, 00:09:53/00:00:00

Ethernet0/1, Forward/Sparse-Dense, 00:09:53/00:00:00

Loopback1, Forward/Sparse-Dense, 00:09:53/00:00:00

Ethernet0/0, Forward/Sparse-Dense, 00:09:53/00:00:00

(*, 224.0.1.40), 01:08:52/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 01:08:38/00:00:00

Loopback1, Forward/Sparse-Dense, 01:08:56/00:00:00

Ethernet0/1, Forward/Sparse-Dense, 01:08:58/00:00:00

(10.4.4.40, 224.0.1.40), 01:07:57/00:02:55, flags: LT

Incoming interface: Loopback1, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/1, Forward/Sparse-Dense, 01:07:57/00:00:00

Ethernet0/0, Forward/Sparse-Dense, 01:07:57/00:00:00

R4(config)#

(*, 239.255.1.1) is the shared tree built by Bidirectional PIM (flagged B) and rooted at the RP itself (Bidir-Upstream interface = Null). “Bidirectional” means that 239.255.1.1 traffic can be sent upstream to the RP and downstream from it at the same time.

(*, 239.255.1.2) is the shared tree built by unidirectional PIM sparse mode (flagged S), also rooted at the RP (incoming interface = Null); 239.255.1.2 traffic is sent only downstream from the RP toward the receivers.

(10.10.10.1, 239.255.1.2): because 239.255.1.2 is routed by unidirectional PIM-SM, the source's first-hop router registers with the RP and a source-based tree (S,G) is built, through which the source sends its multicast traffic ONLY downstream toward the RP. Note that the entry is flagged “PT”, which means the last-hop router has switched to the SPT and traffic is no longer forwarded through the RP, because there is a better path between the source and the destination.
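As a side note, this switchover is tunable for PIM-SM groups: applying the following on the last-hop (receiver-side) routers keeps sparse-mode traffic on the shared tree, mimicking the always-through-RP forwarding of Bidir-PIM (a sketch; apply it where your receivers attach):

ip pim spt-threshold infinity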

R4#mtrace 10.10.10.1 192.168.0.2 239.255.1.2
Type escape sequence to abort.
Mtrace from 10.10.10.1 to 192.168.0.2 via group 239.255.1.2
From source (?) to destination (?)
Querying full reverse path…
0 192.168.0.2
-1 172.16.23.3 PIM [10.10.10.0/24]
-2 172.16.23.2 PIM [10.10.10.0/24]

-3 172.16.12.1 PIM [10.10.10.0/24]

-4 10.10.10.1

R4#

Multicast traffic for the group 239.255.1.2 flows from the source to the destination forwarded by PIM-SM routers along the source tree, which confirms that PIM-SM is a source-based protocol.

R4#
R4#mtrace 10.10.10.1 192.168.0.2 239.255.1.1
Type escape sequence to abort.
Mtrace from 10.10.10.1 to 192.168.0.2 via group 239.255.1.1
From source (?) to destination (?)
Querying full reverse path…
0 192.168.0.2
-1 172.16.23.3 PIM [10.10.10.0/24]

-2 172.16.23.2 PIM [10.10.10.0/24]

-3 172.16.24.4 PIM Reached RP/Core [10.10.10.0/24]

R4#

Multicast traffic for the group 239.255.1.1, in contrast, always flows through the RP, because Bidir-PIM builds only a shared tree; this perfectly illustrates the mechanism of Bidirectional PIM.

Multicast another group 239.255.1.3 not assigned in either PIM access-list

R4#sh ip mroute
IP Multicast Routing Table
Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,
L – Local, P – Pruned, R – RP-bit set, F – Register flag,
T – SPT-bit set, J – Join SPT, M – MSDP created entry,
X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,
U – URD, I – Received Source Specific Host Report,
Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.1.1), 01:09:26/00:02:52, RP 10.4.4.4, flags: B

Bidir-Upstream: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 01:09:07/00:02:52

(*, 239.255.1.2), 00:35:41/00:02:31, RP 10.4.4.144, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 00:29:32/00:02:31

(10.10.10.1, 239.255.1.2), 00:35:41/00:01:14, flags: PT

Incoming interface: Ethernet0/0, RPF nbr 172.16.24.2

Outgoing interface list: Null

(*, 239.255.1.3), 00:02:58/stopped, RP 0.0.0.0, flags: D

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/1, Forward/Sparse-Dense, 00:02:58/00:00:00

Ethernet0/0, Forward/Sparse-Dense, 00:02:58/00:00:00

(10.10.10.1, 239.255.1.3), 00:02:58/00:00:01, flags: PT

Incoming interface: Ethernet0/0, RPF nbr 172.16.24.2

Outgoing interface list:

Ethernet0/1, Prune/Sparse-Dense, 00:02:58/00:00:01

(*, 224.0.1.39), 01:36:58/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Loopback2, Forward/Sparse-Dense, 00:38:10/00:00:00

Ethernet0/0, Forward/Sparse-Dense, 01:36:58/00:00:00

Loopback1, Forward/Sparse-Dense, 01:36:58/00:00:00

Ethernet0/1, Forward/Sparse-Dense, 01:36:58/00:00:00

Loopback0, Forward/Sparse-Dense, 01:36:58/00:00:00

(10.4.4.4, 224.0.1.39), 01:35:57/00:02:16, flags: LT

Incoming interface: Loopback0, RPF nbr 0.0.0.0

Outgoing interface list:

Loopback2, Forward/Sparse-Dense, 00:38:10/00:00:00

Ethernet0/1, Forward/Sparse-Dense, 01:35:57/00:00:00

Loopback1, Forward/Sparse-Dense, 01:35:57/00:00:00

Ethernet0/0, Forward/Sparse-Dense, 01:35:57/00:00:00

(10.4.4.144, 224.0.1.39), 00:38:03/00:02:56, flags: LT

Incoming interface: Loopback2, RPF nbr 0.0.0.0

Outgoing interface list:

Loopback0, Forward/Sparse-Dense, 00:38:03/00:00:00

Ethernet0/1, Forward/Sparse-Dense, 00:38:03/00:00:00

Loopback1, Forward/Sparse-Dense, 00:38:03/00:00:00

Ethernet0/0, Forward/Sparse-Dense, 00:38:03/00:00:00

(*, 224.0.1.40), 01:37:03/00:02:56, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 01:36:40/00:00:00

Loopback1, Forward/Sparse-Dense, 01:36:58/00:00:00

Ethernet0/1, Forward/Sparse-Dense, 01:37:01/00:00:00

R4#

Note that (*, 239.255.1.3) is flagged “D”: this means that a group not assigned to either the PIM-SM or the Bidir-PIM group list falls back to dense mode (a consequence of running sparse-dense mode on the interfaces).
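If flooding unmapped groups in dense mode is undesirable, the fallback can be disabled globally (a sketch, assuming an IOS release that supports the command); unmapped groups are then treated as sparse-mode groups without an RP:

no ip pim dm-fallback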

Conclusion

– Bidir-PIM is more efficient and more scalable than PIM-SM when there is a large number of multicast sources.

– A DF (Designated Forwarder) is elected on each segment or point-to-point link to establish a loop-free shared tree (RPT) with the RP as the root.

– (S,G) Join messages are dropped on routers that support only Bidir-PIM.

– The RP never joins a path back to the source (there is no (S,G) tree) and never sends a Register-Stop.

– Bidir-PIM is capable of sending and receiving multicast traffic for the same group along the same shared tree.