PIM-SM, Auto-RP protocol


OVERVIEW

The two methods available for dynamically finding the RP in a multicast network are Auto-RP and BSR (Bootstrap Router). In this lab we will focus on the Cisco proprietary PIM-SM Auto-RP protocol, and we will also see how a certain "dense mode" is still haunting us : )

As we saw in a previous post (PIM-SM static RP), manually configuring a Rendezvous Point (RP) is simple and straightforward for small networks; in relatively large networks, however, static RP configuration is burdensome and error-prone.

With a static configuration, each PIM-SM router already knows the RP address and joins the multicast group 224.0.1.40 by building a shared tree (RPT) rooted at that RP.

The obvious idea behind Auto-RP is to make PIM-SM routers learn the RP address dynamically, which creates a chicken-and-egg problem:

PIM-SM routers have to join the multicast group 224.0.1.40 by building a shared tree (RPT) with the RP... in order to learn the address of that very RP!

a) One solution is to statically configure an initial RP so that PIM-SM routers can join the group 224.0.1.40 and possibly learn other RPs that override the statically configured one. This is again a manual intervention that breaks the logic behind "dynamic" learning, and what happens if that first (initial) RP fails?

ip pim rp-address <X.X.X.X> group-list <acl_nbr> [override]
!
access-list <acl_nbr> permit <Y.Y.Y.Y>
access-list <acl_nbr> deny any

The group-list serves as a protection mechanism against rogue RPs.
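For instance, filling the template above with hypothetical values (10.9.9.9 as the initial static RP and ACL 10 restricting it to the 239.1.1.0/24 range; note that on many IOS releases the ACL number follows the RP address directly, without a group-list keyword):

ip pim rp-address 10.9.9.9 group-list 10 override
!
access-list 10 permit 239.1.1.0 0.0.0.255
access-list 10 deny any

The override keyword makes the static mapping win over any conflicting Auto-RP-learned mapping for those groups.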

b) Ultimately, the root of the issue is how to join the first two multicast groups, 224.0.1.39 and 224.0.1.40, so what if we use PIM-DM for that purpose? Here comes the concept of PIM sparse-dense mode: the groups 224.0.1.39 and 224.0.1.40 are flooded throughout the network with PIM-DM, because all PIM-SM routers need to join them (which is consistent with the dense-mode concept), and all other groups then use the dynamically learned RP.

With this particular PIM mode it is also possible to control, from the RP itself, which groups will use sparse mode and which groups will use dense mode, as sketched below.
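A hypothetical sketch of that idea (section 3 of this lab demonstrates it for real): the candidate RP advertises, through a group-list ACL, the groups it is willing to serve in sparse mode; groups it denies end up with no RP mapping and, on sparse-dense interfaces, fall back to dense mode.

ip pim send-rp-announce Loopback0 scope 255 group-list 20
!
! groups permitted here will use sparse mode with this RP
access-list 20 permit 239.1.0.0 0.0.255.255
! everything else has no RP mapping and runs in dense mode on sparse-dense interfaces
access-list 20 deny any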

The post is organized as follows:

CONFIGURATION

1) CONFIGURATION ANALYSIS

a) no multicast traffic in the network

b) With multicast traffic

2) RP REDUNDANCY

a) Primary RP failure

b) Back from failure

3) GROUP AND RP FILTERING POLICY

a) RP-based filtering

 

CONFIGURATION

Figure 1 illustrates the topology used in this lab.

Figure 1: topology

A best practice is to enable PIM sparse-dense mode on all interfaces that are likely to forward multicast traffic.

R1:

ip multicast-routing
!
int e0/0
 ip pim sparse-dense-mode
int s1/1
 ip pim sparse-dense-mode
int s1/0
 ip pim sparse-dense-mode

R2:

ip multicast-routing
!
int s1/0
 ip pim sparse-dense-mode
int s1/1
 ip pim sparse-dense-mode
int s1/2
 ip pim sparse-dense-mode

R3:

ip multicast-routing
!
int s1/0
 ip pim sparse-dense-mode
int s1/1
 ip pim sparse-dense-mode
int loo0
 ip pim sparse-dense-mode

R5:

ip multicast-routing
!
int s1/0
 ip pim sparse-dense-mode
int s1/1
 ip pim sparse-dense-mode
int s1/2
 ip pim sparse-dense-mode
int loo0
 ip pim sparse-dense-mode

R4:

ip multicast-routing
!
int e0/0
 ip pim sparse-dense-mode
int s1/0
 ip pim sparse-dense-mode
int s1/1
 ip pim sparse-dense-mode

R5 (primary RP – 10.5.5.5):

ip pim send-rp-announce Loopback0 scope 255

R3 (secondary RP – 10.3.3.3):

ip pim send-rp-announce Loopback0 scope 255

R2 (mapping agent):

ip pim send-rp-discovery Loopback0 scope 255
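At this point a few standard IOS show commands can be used on any router to verify what is being learned (a generic sketch; the exact output fields vary slightly between releases):

! group-to-RP mappings learned via Auto-RP
show ip pim rp mapping
! Auto-RP announcement/discovery message counters
show ip pim autorp
! the RP currently used for a specific group
show ip pim rp 239.255.1.1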

 

 

1) CONFIGURATION ANALYSIS

a) no multicast traffic in the network

R5

R5(config-if)#

*Mar 1 03:46:01.407: Auto-RP(0): Build announce entry for (224.0.0.0/4)

*Mar 1 03:46:01.407: Auto-RP(0): Send RP-Announce packet on Serial1/0

*Mar 1 03:46:01.411: Auto-RP(0): Send RP-Announce packet on Serial1/1

*Mar 1 03:46:01.415: Auto-RP(0): Send RP-Announce packet on Serial1/2

*Mar 1 03:46:01.419: Auto-RP: Send RP-Announce packet on Loopback0

R5(config-if)#do un all 

R5 is announcing itself to the mapping agents on 224.0.1.39

R2(config)#

*Mar 1 03:46:02.587: Auto-RP(0): Build RP-Discovery packet

*Mar 1 03:46:02.587: Auto-RP: Build mapping (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1,

*Mar 1 03:46:02.591: Auto-RP(0): Send RP-discovery packet on Serial1/0 (1 RP entries)

*Mar 1 03:46:02.595: Auto-RP(0): Send RP-discovery packet on Serial1/1 (1 RP entries)

*Mar 1 03:46:02.599: Auto-RP(0): Send RP-discovery packet on Serial1/2 (1 RP entries)

*Mar 1 03:46:02.603: Auto-RP: Send RP-discovery packet on Loopback0 (1 RP entries)

*Mar 1 03:46:18.471: Auto-RP(0): Received RP-announce, from 10.5.5.5, RP_cnt 1, ht 181

*Mar 1 03:46:18.475: Auto-RP(0): Update (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

*Mar 1 03:46:18.479: Auto-RP(0): Received RP-announce, from 10.5.5.5, RP_cnt 1, ht 181

*Mar 1 03:46:18.483: Auto-RP(0): Update (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

*Mar 1 03:46:33.235: Auto-RP(0): Received RP-announce, from 10.3.3.3, RP_cnt 1, ht 181

*Mar 1 03:46:33.239: Auto-RP(0): Update (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

*Mar 1 03:46:33.243: Auto-RP(0): Received RP-announce, from 10.3.3.3, RP_cnt 1, ht 181

*Mar 1 03:46:33.247: Auto-RP(0): Update (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

R2(config)#do un all

R2, the mapping agent, is receiving RP-announces from both candidate RPs, R5 and R3; it updates its records and advertises them to all PIM-SM routers on 224.0.1.40.

R2(config)#do sh ip pim rp ma

PIM Group-to-RP Mappings

This system is an RP-mapping agent (Loopback0)

 

Group(s) 224.0.0.0/4


RP 10.5.5.5 (?), v2v1

Info source: 10.5.5.5 (?), elected via Auto-RP

Uptime: 00:07:18, expires: 00:02:51


RP 10.3.3.3 (?), v2v1

Info source: 10.3.3.3 (?), via Auto-RP

Uptime: 00:05:50, expires: 00:02:09

R2(config)# 

R2 holds records for both R5 and R3 but elected R5 as the network RP because it has the higher IP address.

R1

R1(config)#

*Mar 1 03:46:03.683: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 1, ht 181

*Mar 1 03:46:03.687: Auto-RP(0): Update (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

R1(config)#do un all 

R1, the PIM-SM router connected to the source, has already joined the multicast group 224.0.1.40, so it is now able to receive RP-discovery messages from the mapping agent R2; note that it learns only the elected RP, R5 (10.5.5.5).

R4

R4(config)#

*Mar 1 03:45:46.459: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 1, ht 181

*Mar 1 03:45:46.463: Auto-RP(0): Update (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

R4(config)# 

The same applies to R4, the PIM DR directly connected to the multicast client.

b) With multicast traffic

The multicast source connected to R1 starts sending traffic to the group 239.255.1.1, and the client connected to R4 receives it.

R1(config)#do sh ip pim rp

Group: 239.255.1.1, RP: 10.5.5.5, v2, uptime 00:17:25, expires 00:02:26

R1(config)# 

 

R4(config)#do sh ip pim rp

Group: 239.255.1.1, RP: 10.5.5.5, v2, v1, uptime 00:16:43, expires 00:02:09

R4(config)# 

Both R1 and R4 now have the correct information about the RP and the group it serves, 239.255.1.1.

R1 has registered the source with the RP, which joined a shortest-path tree (SPT) back toward it, while R4 has joined the shared tree (RPT) rooted at the RP and received the first multicast packets forwarded by the RP.

R4 can then switch to a direct SPT, since the path through the RP is not optimal, and start receiving multicast traffic directly from R1.
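As a side note, the moment a last-hop router switches from the RPT to the SPT is tunable; by default IOS switches immediately after the first packet. A minimal sketch (assuming the classic global spt-threshold command on the last-hop router):

! switch to the SPT only once the group rate exceeds 64 kbps
ip pim spt-threshold 64
! or stay on the shared tree through the RP forever
ip pim spt-threshold infinity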

Let's analyze the multicast routing tables on R4 and R5 (the RP), which can tell us a lot about the topology:

R4#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:03:33/stopped, RP 10.5.5.5, flags: SC

Incoming interface: Serial1/1, RPF nbr 192.168.54.5

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 00:03:33/00:02:42

 

(10.10.10.1, 239.255.1.1), 00:03:33/00:02:57, flags: T

Incoming interface: Serial1/0, RPF nbr 192.168.14.1

Outgoing interface list:

Ethernet0/0, Forward/Sparse-Dense, 00:03:33/00:02:42

 

(*, 224.0.1.39), 04:05:44/stopped, RP 0.0.0.0, flags: D

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial1/1, Forward/Sparse-Dense, 04:05:44/00:00:00

Serial1/0, Forward/Sparse-Dense, 04:05:44/00:00:00

 

(10.5.5.5, 224.0.1.39), 00:01:49/00:01:18, flags: PT

Incoming interface: Serial1/1, RPF nbr 192.168.54.5

Outgoing interface list:

Serial1/0, Prune/Sparse-Dense, 00:02:14/00:00:45

 

(*, 224.0.1.40), 04:06:11/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial1/1, Forward/Sparse-Dense, 04:06:10/00:00:00

Serial1/0, Forward/Sparse-Dense, 04:06:10/00:00:00

Ethernet0/0, Forward/Sparse-Dense, 04:06:11/00:00:00

 

(10.2.2.2, 224.0.1.40), 04:05:29/00:02:24, flags: LT

Incoming interface: Serial1/0, RPF nbr 192.168.14.1

Outgoing interface list:

Serial1/1, Prune/Sparse-Dense, 00:00:40/00:02:22, A

Ethernet0/0, Forward/Sparse-Dense, 04:05:29/00:00:00

 

R4# 

– (10.10.10.1, 239.255.1.1): PIM-SM has switched to the shortest-path tree (flag T: SPT-bit set) between the multicast source 10.10.10.1 and R4, with Serial1/0 as the incoming interface and Ethernet0/0 as the outgoing interface.

– (*, 224.0.1.39) and (*, 224.0.1.40) are flagged with "D", meaning these two groups were joined using dense mode so that R4 could dynamically learn about the RP through 224.0.1.40.

– (*, 239.255.1.1): this is the RPT, the shared tree used to reach the RP in the first phase before switching to the SPT, hence the "stopped" timer.

– (10.2.2.2, 224.0.1.40): this SPT between R4 and R2 (the mapping agent) forwards the 224.0.1.40 traffic that carries the RP information.

R4#sh ip pim rp

Group: 239.255.1.1, RP: 10.5.5.5, v2, v1, uptime 04:10:28, expires 00:02:17

R4# 

On R5 (the RP):

R5(config-router)#do sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:46:31/00:03:29, RP 10.5.5.5, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial1/1, Forward/Sparse-Dense, 00:46:21/00:03:29

 

(10.10.10.1, 239.255.1.1), 00:46:31/00:02:35, flags: PT

Incoming interface: Serial1/1, RPF nbr 192.168.54.4

Outgoing interface list: Null

R5(config-router)# 

– (*, 239.255.1.1): this is the shared tree between the RP and all PIM-SM routers that want to join the group 239.255.1.1.

– (10.10.10.1, 239.255.1.1) is the SPT built between R1 and R5.

 

2) RP REDUNDANCY

a) Primary RP failure

To simulate an RP failure in this topology, the R5 loopback interface that carries the RP address is shut down, so R5 no longer acts as the RP but still forwards traffic.
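On R5, the failure is introduced simply by shutting down the loopback:

R5(config)#interface Loopback0
R5(config-if)#shutdown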

Let’s analyze the debug outputs from various routers in the network:

R2 (the mapping agent):

Before primary RP (R5) failure:

R2(config)#do sh ip pim rp mapp

PIM Group-to-RP Mappings

This system is an RP-mapping agent (Loopback0)

 

Group(s) 224.0.0.0/4


RP 10.5.5.5 (?), v2v1


Info source: 10.5.5.5 (?), elected via Auto-RP

Uptime: 05:42:21, expires: 00:01:39


RP 10.3.3.3 (?), v2v1


Info source: 10.3.3.3 (?), via Auto-RP

Uptime: 05:40:53, expires: 00:02:47

R2(config)# 

After shutting down R5 loopback0:

*Mar 1 09:25:14.685: Auto-RP(0): Received RP-announce, from 10.3.3.3, RP_cnt 1, ht 181

*Mar 1 09:25:14.689: Auto-RP(0): Update (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

*Mar 1 09:25:14.693: Auto-RP(0): Received RP-announce, from 10.3.3.3, RP_cnt 1, ht 181

*Mar 1 09:25:14.697: Auto-RP(0): Update (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

*Mar 1 09:26:07.821: Auto-RP(0): Mapping (224.0.0.0/4, RP:10.5.5.5) expired,

*Mar 1 09:26:07.857: Auto-RP(0): Build RP-Discovery packet

*Mar 1 09:26:07.857: Auto-RP: Build mapping (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1,

*Mar 1 09:26:07.861: Auto-RP(0): Send RP-discovery packet on Serial1/0 (1 RP entries)

*Mar 1 09:26:07.865: Auto-RP(0): Send RP-discovery packet on Serial1/1 (1 RP entries)

*Mar 1 09:26:07.869: Auto-RP(0): Send RP-discovery packet on Serial1/2 (1 RP entries)

R2 keeps receiving R3's RP-announce messages but has not received R5's RP-announces for 3 x (announce interval) = 180 seconds, so the mapping (224.0.0.0/4, RP:10.5.5.5) expires; consequently R3 is elected as the RP for the group range and advertised to all PIM-SM routers through RP-discovery messages.
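Failover time is therefore bounded by the announce interval and its three-times holdtime. If faster convergence were needed, the interval could be lowered on the candidate RPs and on the mapping agent (a hedged sketch; the interval keyword is available on most IOS releases, default 60 seconds):

! on the candidate RPs: announce every 10 seconds (holdtime becomes roughly 30 seconds)
ip pim send-rp-announce Loopback0 scope 255 interval 10
! on the mapping agent
ip pim send-rp-discovery Loopback0 scope 255 interval 10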

R1:

R1(config)#

*Mar 1 09:27:09.157: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 1, ht 181

*Mar 1 09:27:09.161: Auto-RP(0): Update (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

R1(config)#

 

R1(config)#do sh ip pim rp

Group: 239.255.1.1, RP: 10.3.3.3, v2, uptime 00:15:45, expires 00:02:06

R1(config)# 

R1 has received the RP-discovery message and updated its RP information.

R4:

R4#

*Mar 1 09:27:01.177: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 1, ht 181

*Mar 1 09:27:01.181: Auto-RP(0): Update (224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

R4# 

 

R4#sh ip pim rp

Group: 239.255.1.1, RP: 10.3.3.3, v2, v1, uptime 00:16:58, expires 00:02:56

R4# 

The same for R4.

b) Back from failure
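The R5 loopback is brought back up:

R5(config)#interface Loopback0
R5(config-if)#no shutdown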

R2:

*Mar 1 09:51:04.429: Auto-RP(0): Received RP-announce, from 10.5.5.5, RP_cnt 1, ht 181

*Mar 1 09:51:04.437: Auto-RP(0): Added with (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

*Mar 1 09:51:04.465: Auto-RP(0): Build RP-Discovery packet

*Mar 1 09:51:04.469: Auto-RP: Build mapping (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1,

*Mar 1 09:51:04.473: Auto-RP(0): Send RP-discovery packet on Serial1/0 (1 RP entries)

*Mar 1 09:51:04.477: Auto-RP(0): Send RP-discovery packet on Serial1/1 (1 RP entries)

*Mar 1 09:51:04.481: Auto-RP(0): Send RP-discovery packet on Serial1/2 (1 RP entries)

R2(config)#do sh ip pim rp mapp

PIM Group-to-RP Mappings

This system is an RP-mapping agent (Loopback0)

 

Group(s) 224.0.0.0/4


RP 10.5.5.5 (?), v2v1


Info source: 10.5.5.5 (?), elected via Auto-RP

Uptime: 00:00:53, expires: 00:02:02


RP 10.3.3.3 (?), v2v1


Info source: 10.3.3.3 (?), via Auto-RP

Uptime: 06:08:25, expires: 00:02:15

R2(config)# 

The mapping agent is once again receiving RP-announces from R5; because 10.5.5.5 is higher than R3's RP address 10.3.3.3, it elects R5 and advertises it to all PIM-SM routers through RP-discovery messages.

R1:

R1(config)#

*Mar 1 09:57:01.997: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 1, ht 181

*Mar 1 09:57:02.001: Auto-RP(0): Update (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

R1(config)# 

 

R1(config)#do sh ip pim rp mapp

PIM Group-to-RP Mappings

 

Group(s) 224.0.0.0/4


RP 10.5.5.5 (?), v2v1


Info source: 10.2.2.2 (?), elected via Auto-RP

Uptime: 00:06:58, expires: 00:02:55

R1(config)# 

R4:

R4#

*Mar 1 09:59:53.349: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 1, ht 181

*Mar 1 09:59:53.353: Auto-RP(0): Update (224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

R4#

R4#

R4# sh ip pim rp mapp

PIM Group-to-RP Mappings

 

Group(s) 224.0.0.0/4

RP 10.5.5.5 (?), v2v1


Info source: 10.2.2.2 (?), elected via Auto-RP

Uptime: 00:09:04, expires: 00:02:46

R4# 

R1 and R4 update their group-to-RP mapping information.

 

3) GROUP AND RP FILTERING POLICY

a) RP-based filtering

Each RP will be responsible for a particular group: R5 for 239.255.1.1 and R3 for 239.255.1.2.

R3:

ip pim send-rp-announce Loopback0 scope 255 group-list RP_GROUP_ACL
!
ip access-list standard RP_GROUP_ACL
 permit 239.255.1.2
 deny any

R5:

ip pim send-rp-announce Loopback0 scope 255 group-list RP_GROUP_ACL
!
ip access-list standard RP_GROUP_ACL
 permit 239.255.1.1
 deny any

R2 (mapping agent):

*Mar 1 10:15:29.565: Auto-RP(0): Received RP-announce, from 10.3.3.3, RP_cnt 1, ht 181

*Mar 1 10:15:29.569: Auto-RP(0): Update (239.255.1.2/32, RP:10.3.3.3), PIMv2 v1

*Mar 1 10:15:29.573: Auto-RP(0): Update (-224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

*Mar 1 10:15:29.577: Auto-RP(0): Received RP-announce, from 10.3.3.3, RP_cnt 1, ht 181

*Mar 1 10:15:29.581: Auto-RP(0): Update (239.255.1.2/32, RP:10.3.3.3), PIMv2 v1

*Mar 1 10:15:29.585: Auto-RP(0): Update (-224.0.0.0/4, RP:10.3.3.3), PIMv2 v1

*Mar 1 10:15:30.269: Auto-RP(0): Build RP-Discovery packet

*Mar 1 10:15:30.269: Auto-RP: Build mapping (-224.0.0.0/4, RP:10.5.5.5), PIMv2 v1,

*Mar 1 10:15:30.273: Auto-RP: Build mapping (239.255.1.1/32, RP:10.5.5.5), PIMv2 v1.

*Mar 1 10:15:30.277: Auto-RP: Build mapping (239.255.1.2/32, RP:10.3.3.3), PIMv2 v1.

*Mar 1 10:15:30.285: Auto-RP(0): Send RP-discovery packet on Serial1/0 (2 RP entries)

*Mar 1 10:15:30.289: Auto-RP(0): Send RP-discovery packet on Serial1/1 (2 RP entries)

*Mar 1 10:15:30.293: Auto-RP(0): Send RP-discovery packet on Serial1/2 (2 RP entries)

R2 has received the RP-announces (on 224.0.1.39) from both RPs with their respective groups; the (-224.0.0.0/4) entry means the RP explicitly refuses to serve the rest of the multicast range.

After populating its group-to-RP mapping table, R2 sends it to all PIM-SM routers on 224.0.1.40.

R2(config)#do sh ip pim rp mapp

PIM Group-to-RP Mappings

This system is an RP-mapping agent (Loopback0)

 

Group(s) (-)224.0.0.0/4


RP 10.5.5.5 (?), v2v1

Info source: 10.5.5.5 (?), elected via Auto-RP

Uptime: 00:28:22, expires: 00:02:03

RP 10.3.3.3 (?), v2v1

Info source: 10.3.3.3 (?), via Auto-RP

Uptime: 06:35:53, expires: 00:02:01

Group(s) 239.255.1.1/32


RP 10.5.5.5 (?), v2v1

Info source: 10.5.5.5 (?), elected via Auto-RP

Uptime: 00:05:55, expires: 00:02:02

Group(s) 239.255.1.2/32


RP 10.3.3.3 (?), v2v1

Info source: 10.3.3.3 (?), elected via Auto-RP

Uptime: 00:05:56, expires: 00:01:58

R2(config)#

R4:

R4#

*Mar 1 10:16:22.505: Auto-RP(0): Received RP-discovery, from 10.2.2.2, RP_cnt 2, ht 181

*Mar 1 10:16:22.509: Auto-RP(0): Update (-224.0.0.0/4, RP:10.5.5.5), PIMv2 v1

*Mar 1 10:16:22.513: Auto-RP(0): Update (239.255.1.1/32, RP:10.5.5.5), PIMv2 v1

*Mar 1 10:16:22.517: Auto-RP(0): Update (239.255.1.2/32, RP:10.3.3.3), PIMv2 v1

R4# 

R4 has received the RP-discovery messages from R2, the mapping agent.

R4# sh ip pim rp mapp

PIM Group-to-RP Mappings

 

Group(s) (-)224.0.0.0/4


RP 10.5.5.5 (?), v2v1

Info source: 10.2.2.2 (?), elected via Auto-RP

Uptime: 00:28:52, expires: 00:02:29

Group(s) 239.255.1.1/32


RP 10.5.5.5 (?), v2v1

Info source: 10.2.2.2 (?), elected via Auto-RP

Uptime: 00:06:25, expires: 00:02:27

Group(s) 239.255.1.2/32


RP 10.3.3.3 (?), v2v1

Info source: 10.2.2.2 (?), elected via Auto-RP

Uptime: 00:06:26, expires: 00:02:29

R4# 

Now R4 has updated its group-to-RP mapping information as configured on the RPs:

R5 will be responsible for the multicast group 239.255.1.1

R3 will be responsible for the multicast group 239.255.1.2

and any other multicast group has no RP mapping; note that, because the interfaces run in sparse-dense mode, such groups fall back to dense mode rather than simply being dropped.

CONCLUSION

PIM sparse-dense mode uses dense mode for the 224.0.1.39 and 224.0.1.40 groups so that PIM-SM routers can join them and dynamically learn about the RPs, which they then use to join all other multicast groups, this time using the regular PIM-SM mechanism.
