Multicast over FR NBMA part3 – (PIM-NBMA mode and Auto-RP)


In this third part of the "Multicast over FR NBMA" series we will see how placing the RP in one spoke network and the mapping agent (MA) at the central site overcomes the dense-mode issue of forwarding the Auto-RP service groups 224.0.1.39 and 224.0.1.40 from one spoke to another.

Placing the MA at the central site, where it acts as a proxy, ensures communication between the RP and all spokes over their separate PVCs.

In this LAB the RP is configured in SpokeBnet (SpokeB site) and the mapping agent in Hubnet (central site).

Figure1: Lab topology


CONFIGURATION

!! PIM on all interfaces of all routers is set to sparse-dense mode

ip pim sparse-dense-mode

Configure SpokeBnet to become an RP

SpokeBnet(config)#ip pim send-rp-announce Loopback0 scope 32

Configure Hubnet to become a mapping agent

Hubnet(config)#ip pim send-rp-discovery loo0 scope 32
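Once both commands are in place, the resulting Auto-RP state can be verified on any router with the standard IOS show commands (outputs vary per platform and are omitted here):

!! Group-to-RP mappings learned through Auto-RP
show ip pim rp mapping

!! Auto-RP packet counters (announces/discoveries sent and received)
show ip pim autorp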

And the result across the FR cloud is as follows:

SpokeBnet:

*Mar 1 01:02:03.083: Auto-RP(0): Build RP-Announce for 192.168.38.1, PIMv2/v1, ttl 32, ht 181

*Mar 1 01:02:03.087: Auto-RP(0): Build announce entry for (224.0.0.0/4)

*Mar 1 01:02:03.091: Auto-RP(0): Send RP-Announce packet on FastEthernet0/0

*Mar 1 01:02:03.091: Auto-RP(0): Send RP-Announce packet on FastEthernet1/0

The RP announces itself to all mapping agents in the network (multicast group 224.0.1.39).

HUBnet:

Hubnet(config)#

*Mar 1 01:01:01.487: Auto-RP(0): Received RP-announce, from 192.168.38.1, RP_cnt 1, ht 181

*Mar 1 01:01:01.491: Auto-RP(0): Added with (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

*Mar 1 01:01:01.523: Auto-RP(0): Build RP-Discovery packet

*Mar 1 01:01:01.523: Auto-RP: Build mapping (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1,

*Mar 1 01:01:01.527: Auto-RP(0): Send RP-discovery packet on FastEthernet0/0 (1 RP entries)

*Mar 1 01:01:01.535: Auto-RP(0): Send RP-discovery packet on FastEthernet1/0 (1 RP entries)

*Mar 1 01:01:01.539: Auto-RP: Send RP-discovery packet on Loopback0 (1 RP entries)

Hubnet(config)#

The mapping agent has received the RP-Announce from the RP, updated its records, and sent the group-to-RP mapping information (in this case: the RP is responsible for all multicast groups) to the destination multicast group 224.0.1.40 (all PIM routers).

SpokeA#

*Mar 1 01:01:53.379: Auto-RP(0): Received RP-discovery, from 10.0.0.1, RP_cnt 1, ht 181

*Mar 1 01:01:53.383: Auto-RP(0): Update (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

SpokeA#

 

SpokeA#sh ip pim rp

Group: 239.255.1.1, RP: 192.168.38.1, v2, v1, uptime 00:18:18, expires 00:02:32

SpokeA#

As an example, SpokeA, reached across the HUB and the FR cloud, has received the group-to-RP mapping information and updated its records.

SpokeBnet#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:35:35/00:03:25, RP 192.168.38.1, flags: SJC

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:06:59/00:02:39

FastEthernet0/0, Forward/Sparse-Dense, 00:35:35/00:03:25

 

(10.10.10.1, 239.255.1.1), 00:03:43/00:02:47, flags: T

Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:03:43/00:02:39

 

(*, 224.0.1.39), 00:41:36/stopped, RP 0.0.0.0, flags: D

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 00:41:36/00:00:00

 

(192.168.38.1, 224.0.1.39), 00:41:36/00:02:23, flags: T

Incoming interface: Loopback0, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 00:41:37/00:00:00

 

(*, 224.0.1.40), 01:03:55/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:03:50/00:00:00

FastEthernet1/0, Forward/Sparse-Dense, 01:03:55/00:00:00

 

(10.0.0.1, 224.0.1.40), 00:35:36/00:02:07, flags: LT

Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:35:36/00:00:00

 

SpokeBnet#

(*, 239.255.1.1) – the shared tree between the receiver and the RP, rooted at the RP (incoming interface = Null); the "J" flag indicates that traffic will be switched from the RPT to the SPT.

(10.10.10.1, 239.255.1.1) – the SPT in question, receiving traffic on Fa0/0 and forwarding it out Fa1/0.

Both (*, 224.0.1.39) and (*, 224.0.1.40) are flagged "D", which means they are forwarded in dense mode; this is the preliminary phase of Auto-RP.

SpokeBnet#mtrace 10.10.10.1 192.168.40.104 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 192.168.40.104 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 192.168.40.104

-1 192.168.40.1 PIM Reached RP/Core [10.10.10.0/24]

-2 192.168.39.1 PIM [10.10.10.0/24]

-3 192.168.100.1 PIM [10.10.10.0/24]

-4 10.10.20.3 PIM [10.10.10.0/24]

SpokeBnet#

ClientB is receiving the multicast traffic as needed.

The following outputs show that the SpokeA networks also receive the multicast traffic for 239.255.1.1 without problems:

SpokeA# mtrace 10.10.10.1 20.20.20.20 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 20.20.20.20 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 20.20.20.20

-1 20.20.20.20 PIM [10.10.10.0/24]

-2 192.168.100.1 PIM [10.10.10.0/24]

-3 10.10.20.3 PIM [10.10.10.0/24]

-4 10.10.10.1

SpokeA#

 

SpokeA#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:59:33/stopped, RP 192.168.38.1, flags: SJCL

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse-Dense, 00:59:33/00:02:01

 

(10.10.10.1, 239.255.1.1), 00:03:23/00:02:59, flags: LJT

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse-Dense, 00:03:23/00:02:01

 

(*, 224.0.1.40), 00:59:33/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial0/0.201, Forward/Sparse-Dense, 00:59:33/00:00:00

 

(10.0.0.1, 224.0.1.40), 00:44:18/00:02:15, flags: PLTX

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list: Null

 

SpokeA#

Auto-RP in action one more time: PIM has switched from the RPT (*, 239.255.1.1), flagged "J", to the SPT (10.10.10.1, 239.255.1.1), flagged "JT".


Multicast over FR NBMA part2 – (PIM NBMA mode and static RP)


This is the second part of the "Multicast over FR NBMA" series; this lab focuses on deploying PIM NBMA mode in a hub-and-spoke FR NBMA network with a static rendezvous point (RP) configuration.

Figure 1 illustrates the lab topology used: the HUB is connected to the FR cloud through its main physical interface, while SpokeA and SpokeB connect through multipoint sub-interfaces. The FR network is a partial mesh with two PVCs, one from each spoke to the HUB's main physical interface.

Figure1 : Lab topology

I – SCENARIO I

PIM sparse mode is used, with the static RP role played by the HUB.

1 – CONFIGURATION

HUB:

Frame Relay configuration:

interface Serial0/0

ip address 192.168.100.1 255.255.255.0

encapsulation frame-relay

!! Inverse ARP is disabled

no frame-relay inverse-arp

!! The OSPF network type has to be consistent with the Frame Relay
!! topology; here it is hub-and-spoke, so traffic destined to a spoke
!! is forwarded to the HUB and from there to the spoke.


ip ospf network point-to-multipoint

!! With inverse ARP disabled you have to configure the FR mappings;
!! do not forget the "broadcast" keyword at the end of each map to
!! enable pseudo broadcasting

frame-relay map ip 192.168.100.2 101 broadcast

frame-relay map ip 192.168.100.3 103 broadcast

Multicast configuration:

interface Loopback0

ip address 192.168.101.1 255.255.255.255

!! Enable Sparse mode on all interfaces

ip pim sparse-mode

interface Serial0/0

!! Enable PIM NBMA mode


ip pim nbma-mode

ip pim sparse-mode

!! Enable multicast routing; IOS will warn you if you try to use
!! multicast commands while it is disabled

ip multicast-routing

ip pim rp-address 192.168.101.1

Routing configuration:

router ospf 10

network 10.10.10.0 0.0.0.255 area 100


network 192.168.100.0 0.0.0.255 area 0

SpokeA:

Frame Relay configuration:

interface Serial0/0

no ip address

encapsulation frame-relay

no frame-relay inverse-arp

interface Serial0/0.201 multipoint

ip address 192.168.100.2 255.255.255.0

frame-relay map ip 192.168.100.1 201 broadcast 

Multicast configuration:

interface Serial0/0.201 multipoint

ip pim nbma-mode

ip pim sparse-mode

ip pim rp-address 192.168.101.1

ip multicast-routing 

Routing protocol configuration:

router ospf 10

network 20.20.20.0 0.0.0.255 area 200

network 192.168.100.0 0.0.0.255 area 0

interface Serial0/0.201 multipoint

ip ospf network point-to-multipoint 

SpokeB:

Frame Relay configuration:

interface Serial0/0

no ip address

encapsulation frame-relay

serial restart-delay 0

no frame-relay inverse-arp

interface Serial0/0.301 multipoint

ip address 192.168.100.3 255.255.255.0

frame-relay map ip 192.168.100.1 301 broadcast 

Multicast configuration:

ip pim rp-address 192.168.101.1

ip multicast-routing

interface Serial0/0.301 multipoint

ip pim sparse-mode

ip pim nbma-mode 

Routing Protocol configuration:

router ospf 10

network 192.168.40.0 0.0.0.255 area 300

network 192.168.100.0 0.0.0.255 area 0

interface Serial0/0.301 multipoint

ip ospf network point-to-multipoint

 

2 – ANALYSIS

SpokeB:

SpokeB#sh ip route

Codes: C – connected, S – static, R – RIP, M – mobile, B – BGP

D – EIGRP, EX – EIGRP external, O – OSPF, IA – OSPF inter area

N1 – OSPF NSSA external type 1, N2 – OSPF NSSA external type 2

E1 – OSPF external type 1, E2 – OSPF external type 2

i – IS-IS, su – IS-IS summary, L1 – IS-IS level-1, L2 – IS-IS level-2

ia – IS-IS inter area, * – candidate default, U – per-user static route

o – ODR, P – periodic downloaded static route

 

Gateway of last resort is not set

 

20.0.0.0/32 is subnetted, 1 subnets

O IA 20.20.20.20 [110/129] via 192.168.100.1, 00:01:16, Serial0/0.301

C 192.168.40.0/24 is directly connected, FastEthernet1/0

10.0.0.0/8 is variably subnetted, 3 subnets, 2 masks

O IA 10.0.0.2/32 [110/66] via 192.168.100.1, 00:01:16, Serial0/0.301

O IA 10.10.10.0/24 [110/65] via 192.168.100.1, 00:01:16, Serial0/0.301

O IA 10.0.0.1/32 [110/66] via 192.168.100.1, 00:01:16, Serial0/0.301

192.168.102.0/32 is subnetted, 1 subnets

O 192.168.102.1 [110/65] via 192.168.100.1, 00:01:16, Serial0/0.301

192.168.100.0/24 is variably subnetted, 3 subnets, 2 masks

C 192.168.100.0/24 is directly connected, Serial0/0.301

O 192.168.100.1/32 [110/64] via 192.168.100.1, 00:01:16, Serial0/0.301

O 192.168.100.2/32 [110/128] via 192.168.100.1, 00:01:16, Serial0/0.301

192.168.101.0/32 is subnetted, 1 subnets

O 192.168.101.1 [110/65] via 192.168.100.1, 00:01:17, Serial0/0.301

SpokeB#

Connectivity is successful and SpokeB has received correct routing information about the network.

SpokeB#sh ip pim rp

Group: 239.255.1.1, RP: 192.168.101.1, uptime 00:17:31, expires never

Group: 224.0.1.40, RP: 192.168.101.1, uptime 00:17:31, expires never

SpokeB# 

Because we use PIM-SM with a static RP, the group-to-RP mappings never expire.

SpokeB#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:46:09/stopped, RP 192.168.101.1, flags: SJC

Incoming interface: Serial0/0.301, RPF nbr 192.168.100.1

Outgoing interface list:

FastEthernet1/0, Forward/Sparse, 00:46:09/00:02:38

 

(10.10.10.1, 239.255.1.1), 00:22:22/00:02:50, flags: JT

Incoming interface: Serial0/0.301, RPF nbr 192.168.100.1

Outgoing interface list:

FastEthernet1/0, Forward/Sparse, 00:22:22/00:02:38

 

(*, 224.0.1.40), 00:18:10/00:02:24, RP 192.168.101.1, flags: SJCL

Incoming interface: Serial0/0.301, RPF nbr 192.168.100.1

Outgoing interface list:

Serial0/0.301, 192.168.100.3, Forward/Sparse, 00:14:36/00:00:21

 

SpokeB# 

 

SpokeB#mtrace 10.10.10.1 192.168.40.104 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 192.168.40.104 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 192.168.40.104

-1 192.168.40.1 PIM [10.10.10.0/24]

-2 192.168.100.1 PIM Reached RP/Core [10.10.10.0/24]

SpokeB# 

According to the two previous outputs, SpokeB (the PIM DR) has joined the shared tree (RPT) rooted at the RP/core (the HUB); traffic on this shared tree enters the FR interface and is forwarded out the LAN interface Fa1/0.

Because the RP is statically configured, there is only one multicast service group, 224.0.1.40, through which a shared tree is built for PIM-SM routers to communicate with the RP.

HUB:

HUB#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:48:20/00:03:22, RP 192.168.101.1, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0


Outgoing interface list:


Serial0/0, 192.168.100.3, Forward/Sparse, 00:48:20/00:03:22


Serial0/0, 192.168.100.2, Forward/Sparse, 00:48:20/00:03:09

 

(10.10.10.1, 239.255.1.1), 00:25:33/00:03:26, flags: T

Incoming interface: FastEthernet1/0, RPF nbr 0.0.0.0


Outgoing interface list:


Serial0/0, 192.168.100.2, Forward/Sparse, 00:25:33/00:03:08


Serial0/0, 192.168.100.3, Forward/Sparse, 00:25:33/00:03:22

 

(10.10.10.3, 239.255.1.1), 00:00:22/00:02:37, flags: T

Incoming interface: FastEthernet1/0, RPF nbr 0.0.0.0


Outgoing interface list:


Serial0/0, 192.168.100.2, Forward/Sparse, 00:00:22/00:03:08


Serial0/0, 192.168.100.3, Forward/Sparse, 00:00:22/00:03:22

 

(192.168.100.1, 239.255.1.1), 00:00:23/00:02:36, flags:

Incoming interface: Serial0/0, RPF nbr 0.0.0.0


Outgoing interface list:


Serial0/0, 192.168.100.2, Forward/Sparse, 00:00:23/00:03:07


Serial0/0, 192.168.100.3, Forward/Sparse, 00:00:23/00:03:20

 

(*, 224.0.1.40), 00:32:02/00:03:09, RP 192.168.101.1, flags: SJCL

Incoming interface: Null, RPF nbr 0.0.0.0


Outgoing interface list:


Serial0/0, 192.168.100.2, Forward/Sparse, 00:32:00/00:03:09


Serial0/0, 192.168.100.3, Forward/Sparse, 00:32:02/00:02:58

 

HUB# 

Note that for every RPT and SPT entry the HUB tracks two "next hops" out of the same Serial0/0 interface. This is PIM NBMA mode at work: it makes PIM at layer 3 aware of the real layer 2 topology, which is not broadcast but separate point-to-point PVCs; plain pseudo broadcast, by contrast, gives layer 3 the illusion of a multi-access network.

Because the shared tree (*, 239.255.1.1) is rooted locally, its incoming interface is Null.

A source-rooted tree (10.10.10.1, 239.255.1.1) is also built between the source and the RP.

SpokeA:

SpokeA#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 01:07:17/stopped, RP 192.168.101.1, flags: SJCLF

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 01:07:17/00:02:39

 

(10.10.10.1, 239.255.1.1), 00:43:08/00:02:59, flags: LJT

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 00:43:08/00:02:39

 

(*, 224.0.1.40), 00:38:39/00:02:42, RP 192.168.101.1, flags: SJCL

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 00:38:39/00:02:42

 

SpokeA# 

 

SpokeA#mtrace 10.10.10.1 20.20.20.20 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 20.20.20.20 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 20.20.20.20

-1 20.20.20.20 PIM [10.10.10.0/24]

-2 192.168.100.1 PIM Reached RP/Core [10.10.10.0/24]

SpokeA# 

SpokeA received the first packets through the RP, then switched to the SPT (10.10.10.1, 239.255.1.1).

 

II – Scenario II

In this scenario SpokeB plays the role of the RP.

1 – CONFIGURATION

Multicast configuration:

The new static RP must be configured on all routers; 192.168.103.1 is a loopback interface on SpokeB, advertised and reachable from everywhere.

SpokeB:

ip pim rp-address 192.168.103.1

 

SpokeB(config)#do mtrace 10.10.10.1 192.168.40.104 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 192.168.40.104 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 192.168.40.104

-1 192.168.40.1 PIM Reached RP/Core [10.10.10.0/24]

-2 192.168.100.1 PIM [10.10.10.0/24]

SpokeB(config)# 

Now the multicast traffic received by SpokeB's client is forwarded through the new RP (SpokeB), which lies on the best path from the source to the receiver.

SpokeA:

SpokeA#mtrace 10.10.10.1 20.20.20.20 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 20.20.20.20 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 20.20.20.20

-1 20.20.20.20 PIM [10.10.10.0/24]

-2 192.168.100.1 PIM [10.10.10.0/24]

-3 10.10.10.1

SpokeA# 

 

SpokeA#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 01:24:27/stopped, RP 192.168.103.1, flags: SJCLF

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Loopback0, Forward/Sparse, 01:24:27/00:02:29

 

(10.10.10.1, 239.255.1.1), 01:00:18/00:02:59, flags: LJT

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 01:00:18/00:02:29

 

(*, 224.0.1.40), 00:06:18/00:02:37, RP 192.168.103.1, flags: SJCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial0/0.201, 192.168.100.1, Forward/Sparse, 00:06:02/00:00:37

Loopback0, Forward/Sparse, 00:06:18/00:02:37

 

SpokeA# 

SpokeA has switched from the RPT to the SPT (10.10.10.1, 239.255.1.1); the (*, G) entry is flagged SJCLF (Sparse mode, Join SPT, Connected, Local, and Register flag).

Now let's see what happens if we disable SPT switchover with the following command on SpokeA:

ip pim spt-threshold infinity

 

SpokeA(config)#do sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 01:44:47/00:02:59, RP 192.168.103.1, flags: SCLF

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 01:44:47/00:02:17

 

(*, 224.0.1.40), 00:26:39/00:02:11, RP 192.168.103.1, flags: SCL

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 00:26:39/00:02:11

 

SpokeA(config)# 

No more SPT! Only the shared tree (*, 239.255.1.1) rooted at the RP (SpokeB) remains.

Multicast over FR NBMA part1 – (pseudo broadcast, PIM NBMA mode, mGRE and DMVPN)


This lab focuses on Frame Relay NBMA, Non-Broadcast Multiple Access… the confusion already begins with the name : ) shouldn't a multiple-access network support broadcast?

In this first part I will describe the problem of forwarding multicast over FR NBMA as well as the different solutions that remediate it; subsequent parts will focus on deploying those solutions.

Well, the issue results from the difference between how layer 2 actually operates and how layer 3 perceives it.

Let's begin with layer 2. In Frame Relay it consists of Permanent Virtual Circuits (PVCs), each delimited by a physical interface at both ends, with a local Data Link Connection Identifier (DLCI) mapped to each end; so there is no multi-access network at all at layer 2. Only the fact that multiple PVCs terminate on a single physical interface or sub-interface gives the illusion of a multiple-access network (figure1).

Figure1 : layer2 and layer3 views of the network

With such a configuration of multipoint interfaces or sub-interfaces, a single layer 3 subnet is shared by all participants of the NBMA, which makes layer 3 think it is a multiple-access network.

Note that we are talking here about a partially meshed NBMA.

All this misunderstanding between layer 2 and layer 3 about the nature of the medium is why broadcast and multicast do not work.

Many solutions are proposed:

1 – "Pseudo broadcast" – an IOS feature that emulates multicast.

Figure2 : Interface HW queues

In fact a physical interface has two hardware queues (figure2), one for unicast and one (strict priority) for broadcast traffic. Because layer 3 thinks it is a multi-access network, it sends only one packet over the interface and hopes all participants on the other side will receive it, which of course will not happen.

Do you remember the word “broadcast” added at the end of a static frame relay mapping?

Frame-relay map ip X.X.X.X <DLCI> broadcast

This activates pseudo broadcasting: before sending a broadcast packet, the router makes "n" copies and sends one over each of the "n" PVCs attached to the NBMA, whether the spokes requested it or not. Such a mechanism defeats the whole idea of multicasting, so pseudo broadcast can lead to serious performance issues in large multicast networks.

Another issue with pseudo broadcast in an NBMA hub-and-spoke topology arises when one spoke forwards multicast data or control traffic that the other spokes are supposed to receive, particularly PIM prune and prune-override messages.

The concept is based on the fact that if one downstream PIM router sends a prune message to the upstream router, the other downstream routers connected to the same multi-access network will hear the prune, and those that still want the multicast traffic will send a prune-override (join) message to the upstream router.

As mentioned before, the HUB can emulate multicast by making multiple copies of a packet, but only for traffic forwarded from its internal interfaces to the NBMA interface, or generated by the HUB itself; not for traffic received on the NBMA interface.

Conclusion: do not use PIM-DM or PIM sparse-dense mode (Auto-RP) with pseudo broadcasting; you can, however, use PIM-SM with a static RP.


2 – PIM NBMA mode – clarifies the relationship between layer 2 and layer 3: NBMA mode tells layer 3 "the truth", that the medium is in reality not multi-access but a partially meshed NBMA, so PIM can act accordingly.

The router now tracks the IP addresses of the neighbors from which it received join messages, and builds the outgoing interface list from those neighbors for both the RPT (*, G) and the SPT (S, G).

PIM NBMA mode still does not support dense mode; nevertheless it is possible to use Auto-RP with PIM sparse-dense mode, ONLY if you make sure the mapping agent is located on the HUB network so it can communicate with all spokes.

3 – Point-to-point sub-interfaces – If you want to avoid all the previously mentioned complications of multipoint interfaces in an NBMA hub-and-spoke, you can deploy point-to-point sub-interfaces: spokes are treated as if they were connected to the HUB through separate physical interfaces with separate IP subnets, and layer 2 is no longer considered multi-access.
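As a sketch of this alternative (the sub-interface numbers, DLCIs, and /30 subnets here are hypothetical), the HUB side could look like this:

interface Serial0/0
 no ip address
 encapsulation frame-relay
!! One point-to-point sub-interface per spoke, each DLCI in its own subnet
interface Serial0/0.101 point-to-point
 ip address 192.168.201.1 255.255.255.252
 frame-relay interface-dlci 101
 ip pim sparse-mode
interface Serial0/0.103 point-to-point
 ip address 192.168.202.1 255.255.255.252
 frame-relay interface-dlci 103
 ip pim sparse-mode

With "frame-relay interface-dlci" there is no need for map statements or the broadcast keyword: each sub-interface has a single peer, so broadcast and multicast are simply sent down its only PVC.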

4 – GRE tunneling – Another alternative keeps the multipoint FR NBMA interface with its advantages, such as saving IP addresses, and simply makes it transparent by deploying multipoint GRE (mGRE) tunnel technology.

With an mGRE logical topology built over layer 3, multicast is encapsulated inside GRE packets, forwarded as unicast, and decapsulated at layer 3 on the other side of the tunnel, without ever dealing with layer 2.

What is DMVPN?


The complexity of DMVPN resides in the multitude of concepts involved in this technology: NHRP, mGRE, and IPSec. So to demystify the beast it is crucial to enumerate the advantages, disadvantages, and conditions related to the different NBMA topologies and their evolution.

Spokes with permanent public addresses

Hub and Spoke topology

Pro:

– Ease of configuration on spokes: only the HUB parameters are configured, and the HUB routes all traffic between spokes.

Con:

– Memory and CPU resource consumption on the HUB.

– Static configuration is burdensome, error-prone, and hard to maintain in very large networks.

– Lack of scalability and flexibility.

– No security, network traffic is not protected.

Full/partial mesh topology

Pros:

– Each spoke is able to communicate with other spokes directly.

Cons:

– Static configuration is burdensome, error-prone, and hard to maintain in very large networks.

– Lack of scalability and flexibility.

– Additional memory and CPU resource requirements on branch routers for just occasional and non-permanent spoke-to-spoke communications.

– No security, network traffic is not protected.

Point-to-point GRE

Pros:

– GRE supports IP broadcast and multicast to the other end of the tunnel.

– GRE is a unicast protocol, so it can be encapsulated in IPSec, providing routing and multicast in a protected environment.

Cons:

– Lack of scalability: static configuration is needed between each spoke and the HUB in a hub-and-spoke topology, and between each pair of spokes in a full/partial mesh topology.

– Lack of security.

Point-to-multipoint GRE

Pros:

– A single tunnel interface can terminate all GRE tunnels from all spokes.

– Low configuration complexity; resolves the memory allocation issue of per-spoke tunnel interfaces.

Cons:

– Lack of security.

Full/partial mesh topology + IPSec

Pros:

– Each spoke is able to communicate with other spokes directly.

– Security.

Cons:

– Static configuration is burdensome, error-prone, and hard to maintain.

– Lack of scalability and flexibility.

– Additional memory and CPU resource requirements on branch routers for merely occasional, non-permanent spoke-to-spoke communications.

– IPSec doesn't support multicast/broadcast, so routing protocols cannot be deployed.

– A pre-configured access list is needed for the interesting traffic that triggers IPSec establishment, so manual intervention is required when applications change.

– IPSec establishment takes 1–10 seconds, hence packet drops at the beginning.

Hub and Spoke topology + IPSec

Pro:

– Ease of configuration on spokes: only the HUB parameters are configured, and the HUB routes all traffic between spokes.

Con:

– Memory and CPU resource consumption on the HUB.

– Static configuration is burdensome, error-prone, and hard to maintain in very large networks.

– Lack of scalability and flexibility.

– IPSec doesn't support multicast/broadcast, so routing protocols cannot be deployed.

– IPSec needs a pre-configured access list for the interesting traffic that triggers its establishment.

– IPSec establishment takes 1–10 seconds, so packets are dropped.

Point-to-point GRE + IPSec

Pros:

– GRE supports IP broadcast and multicast to the other end of the tunnel.

– GRE is a unicast protocol, so it can be encapsulated in IPSec, providing routing and multicast in a protected environment.

– Security.

Cons:

– Lack of scalability: static configuration is needed between each spoke and the HUB in a hub-and-spoke topology, and between each pair of spokes in a full/partial mesh topology.

– A pre-configured access list is needed for the interesting traffic that triggers IPSec establishment, so manual intervention is required when applications change.

– IPSec establishment takes 1–10 seconds, hence packet drops at the beginning.

Point-to-multipoint GRE + IPSec

Pros:

– A single tunnel interface can terminate all GRE tunnels from all spokes.

– Low configuration complexity; resolves the memory allocation issue of per-spoke tunnel interfaces.

Cons:

– A pre-configured access list is needed for the interesting traffic that triggers IPSec establishment, so manual intervention is required when applications change.

– IPSec establishment takes 1–10 seconds, hence packet drops at the beginning.

Spokes with dynamic public addresses

Issue:

Whether it is GRE or mGRE, hub-and-spoke or full mesh, on the HUB or on the spokes, tunnel establishment requires a pre-configured tunnel source and destination.

Here comes NHRP (Next-Hop Resolution Protocol).

At startup, each spoke uses NHRP to register its dynamic public IP address and the associated tunnel IP address with the HUB.

The HUB uses NHRP to answer spoke requests for each other's public IP addresses.

So the overall solution is Point-to-multipoint GRE + IPSec + NHRP, which is called DMVPN (Dynamic Multipoint VPN).
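A minimal sketch of how these pieces fit together (the tunnel addressing, NHRP network-id, source interfaces, and public addresses are hypothetical, and the IPSec protection profile is omitted for brevity):

!! HUB: one mGRE tunnel interface terminates all spokes
interface Tunnel0
 ip address 10.0.100.1 255.255.255.0
 tunnel source Serial0/0
 tunnel mode gre multipoint
 ip nhrp network-id 1
 !! Build multicast replication entries for every spoke that registers
 ip nhrp map multicast dynamic

!! Spoke: registers its dynamic public address with the HUB (the NHS)
interface Tunnel0
 ip address 10.0.100.2 255.255.255.0
 tunnel source FastEthernet0/0
 tunnel mode gre multipoint
 ip nhrp network-id 1
 ip nhrp nhs 10.0.100.1
 ip nhrp map 10.0.100.1 <hub-public-ip>
 ip nhrp map multicast <hub-public-ip>

The spoke needs only the static map to the hub's public address to reach the NHS; every other spoke's address is then resolved dynamically through NHRP.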

You will find the previously mentioned topologies in the "DMVPN" sub-category of the parent category "Security".
