IPv6 Embedded RP


This tutorial covers the IPv6 embedded-RP method: how it works and how it can simplify the deployment of IPv6 multicast.
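As a quick taste of the mechanism, here is a minimal worked example of how an embedded-RP group address encodes its RP address (per RFC 3956); the prefix 2001:DB8:CAFE::/48, the RIID value 1 and the group ID 0x1234 are illustrative assumptions, not taken from a lab:

Group address: FF7E:0130:2001:DB8:CAFE::1234

- FF7E: flags = 7 (R, P and T bits set, i.e. embedded RP), scope = E (global)
- 0130: 4 reserved bits (0), RIID = 1, plen = 0x30 = 48
- 2001:0DB8:CAFE:0000: the 64-bit prefix field, of which only the first plen = 48 bits are significant
- ::1234: the 32-bit group ID

Derived RP address = the first 48 bits of the prefix, padded with zeros, with an interface ID equal to the RIID: 2001:DB8:CAFE::1. Any embedded-RP-capable router can therefore derive the RP directly from the group address, without Auto-RP or BSR distributing the group-to-RP mapping.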


Multicast mtrace command


This short post illustrates how the Cisco IOS command “mtrace” works.

Let’s consider the following topology:

Multicast topology with successful RPF check.

With the multicast source at 10.0.0.1 and a multicast member at 20.0.0.1 requesting multicast content from 10.0.0.1.

10.0.0.1 and 20.0.0.1 can reach each other ONLY through R5-R2-R4, which are also enabled for PIM.
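For reference, in this kind of lab the “member” side is usually simulated on the last-hop router itself; a minimal sketch, assuming R4 is the last-hop router and FastEthernet0/0 is its interface toward 20.0.0.1 (the interface name and the PIM mode are assumptions, not taken from the lab configuration):

ip multicast-routing
!
interface FastEthernet0/0
 ip pim sparse-dense-mode
 ip igmp join-group 239.1.1.1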

According to : http://www.cisco.com/en/US/docs/ios/12_3/ipmulti/command/reference/ip3_m1g.html#wp1068645

Usage Guidelines

The trace request generated by the mtrace command is multicast to the multicast group to find the last hop router to the specified destination. The trace then follows the multicast path from destination to source by passing the mtrace request packet via unicast to each hop. Responses are unicast to the querying router by the first hop router to the source. This command allows you to isolate multicast routing failures.

mtrace

Let’s execute the mtrace command from R1:

mtrace 10.0.0.1 20.0.0.1 239.1.1.1

This traces the path that multicast traffic from 10.0.0.1 (the multicast source) would take toward 20.0.0.1 (the multicast member), by querying hop by hop back toward the source, starting from the last-hop router (the one directly connected to the member).
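As a reminder, the arguments map onto the trace as follows (general form of the command):

mtrace <source> [<destination> [<group>]]

- source: the multicast source whose reverse path is traced (10.0.0.1 here)
- destination: the receiver from which the reverse path is queried (20.0.0.1 here)
- group: the multicast group used for the trace (239.1.1.1 here); when the group is omitted, the path is traced via plain RPF and the output reads “via RPF” instead of “via group …”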

R1#mtrace 10.0.0.1 20.0.0.1 239.1.1.1
Type escape sequence to abort.
Mtrace from 10.0.0.1 to 20.0.0.1 via group 239.1.1.1
From source (?) to destination (?)
Querying full reverse path…

0  20.0.0.1  ————> where mtrace started tracing back toward the multicast traffic source 10.0.0.1

-1  192.168.24.5 PIM  [10.0.0.0/24]  ————> 1st hop toward the multicast traffic source 10.0.0.1

-2  192.168.24.6 PIM  [10.0.0.0/24]  ————> 2nd hop toward the multicast traffic source 10.0.0.1

-3  192.168.52.5 PIM  [10.0.0.0/24]  ————> 3rd hop toward the multicast traffic source 10.0.0.1

-4  192.168.15.6 PIM  [10.0.0.0/24]  ————> last hop to which the multicast traffic source is connected

-5  10.0.0.1 ————> multicast traffic source itself

R1#

R4->R2->R5->R1 is the unicast traffic path

R1->R5->R2->R4 is the multicast traffic path.

This is an indication that the RPF check succeeds!
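A quicker way to confirm the same thing on any router along the path is the show ip rpf command; a minimal sketch, run here on R4 (the last-hop router) as an example:

R4#show ip rpf 10.0.0.1

The output reports the RPF interface and RPF neighbor the router would use toward 10.0.0.1; if that interface is not the one on which the multicast actually arrives, the RPF check fails.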

We can also use ping <mgroup>, which generates multicast traffic toward the multicast member; the replies confirm that the multicast traffic is being received.

R1#ping 239.1.1.1 source 10.0.0.1 repeat 9999999
Type escape sequence to abort.
Sending 9999999, 100-byte ICMP Echos to 239.1.1.1, timeout is 2 seconds:
Packet sent with a source address of 10.0.0.1
Reply to request 0 from 192.168.24.5, 88 ms   ——-> last hop Multicast router is confirming the reception

Reply to request 0 from 192.168.24.5, 92 ms  –——> last hop Multicast router is confirming the reception

Reply to request 1 from 192.168.24.5, 108 ms  ——-> last hop Multicast router  is confirming the reception

Reply to request 1 from 192.168.24.5, 108 ms  ——-> last hop Multicast router  is confirming the reception

Here is the result of mtrace in both cases

Here are two ad-hoc explanatory videos illustrating both cases of a successful and a failed RPF check:

Case 1: unicast path = multicast path (successful RPF)

R1#mtrace 10.0.0.1 20.0.0.1 239.1.1.1
Type escape sequence to abort.
Mtrace from 10.0.0.1 to 20.0.0.1 via group 239.1.1.1
From source (?) to destination (?)
Querying full reverse path…

0  20.0.0.1

-1  192.168.24.5 PIM  [10.0.0.0/24]

-2  192.168.24.6 PIM  [10.0.0.0/24]

-3  192.168.52.5 PIM  [10.0.0.0/24]

-4  192.168.15.6 PIM  [10.0.0.0/24]

-5  10.0.0.1

R1#

Case 2: unicast path ≠ multicast path (RPF failure)

R1#mtrace 10.0.0.1 20.0.0.1 239.1.1.1
Type escape sequence to abort.
Mtrace from 10.0.0.1 to 20.0.0.1 via group 239.1.1.1
From source (?) to destination (?)

Querying full reverse path…

0  20.0.0.1

-1  192.168.24.5 None No route

R1#

The resulting trace request reports all RPF successes along the path back toward the multicast source and stops at the hop where an RPF failure occurs.
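One common workaround for such an RPF failure, when the unicast routing cannot (or should not) be changed, is a static mroute on the failing router; a minimal sketch, assuming the router should perform its RPF check toward 10.0.0.0/24 via the PIM neighbor 192.168.24.6 (the neighbor address is reused from the successful trace above, purely for illustration):

ip mroute 10.0.0.0 255.255.255.0 192.168.24.6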

Load balancing GRE tunneled multicast traffic


This lab is part of a bundle aimed at illustrating different methods used to load balance multicast traffic across multiple equal-cost paths. For each individual lab the topology may be slightly modified to match the appropriate conditions:

  • Direct multicast Load Balancing through ECMP.
  • Load balancing GRE tunneled multicast traffic.
  • Load Balancing Multicast traffic through EtherChannel.
  • Load Balancing Multicast traffic through MLP.

Instead of load balancing the multicast directly with ECMP (Equal-Cost Multipath), it is possible to offload the load sharing to unicast traffic processing by encapsulating the multicast into GRE tunnels, in such a way that the multipath topology (ramification nodes and parallel paths) is aware only of the unicast tunnel sessions.

Outline

  • Configuration
      • Routing
      • CEF
      • Tunneling and RPF check
  • Configuration check
  • Layer3 Load balancing
  • Testing unicast CEF Load sharing
      • Simulate a path failure with each of the three paths
      • Increasing the number of sessions
  • Conclusion

Picture1 illustrates the topology used here; note the two ramification nodes R10 and R1 delimiting three parallel paths.

Picture1: Lab topology

Configuration

Routing

The routing protocol deployed is EIGRP; by default it allows four equal-cost paths (configurable up to six); if needed, use the “maximum-paths <max>” command to allow more.

R2#sh ip pro

Routing Protocol is “eigrp 10”


Automatic network summarization is not in effect

Maximum path: 4

Routing for Networks:

The same EIGRP autonomous system (10) is used everywhere, with auto-summarization disabled and the directly connected segments advertised.
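For completeness, a minimal sketch of the kind of EIGRP configuration used on each router; the network statement is a placeholder to adapt to the connected subnets of each device, and maximum-paths is only needed if you want more than the default of four equal-cost paths:

router eigrp 10
 no auto-summary
 network 192.168.0.0 0.0.255.255
 maximum-paths 6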

CEF

CEF is a key feature here, because load balancing at the data plane uses the FIB, which is derived from the control-plane RIB, together with the adjacency table.

CEF allows:

Per-destination load-sharing

  • More appropriate for a large number of (src, dst) sessions
  • Load balance individual sessions (src, dst) over multiple paths
  • Default for CEF

Per-Packet (Round-Robin distribution) load-sharing

  • Load balance individual packets for a given session over multiple paths
  • Not recommended for VoIP because of packet re-ordering.
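On IOS the load-sharing mode is selected per interface, per-destination being the default; a minimal sketch, using Vlan102 from this topology as the example interface:

ip cef
!
interface Vlan102
 ip load-sharing per-packet
!
! to return to the default behaviour:
! interface Vlan102
!  ip load-sharing per-destination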

Tunneling and RPF check:

– The unicast routing protocol (EIGRP) processes the tunnel outer header, whose IPs belong to the interfaces used as tunnel source and destination; that is why only these subnets are advertised into EIGRP.

– The multicast routing protocol (PIM) processes the tunnel inner header; for that reason PIM must be enabled on the tunnel interface itself, not on the tunnel source/destination interfaces.

– These two routing levels and their corresponding interfaces must be strictly separated to avoid RPF failure.

Tunneling:

On the router “vhost_member”, the single physical interface fa0/0 cannot be used as the source of all the tunnels, because incoming traffic is de-multiplexed to the appropriate tunnel using the tunnel source; that is why three loopbacks are created, one to be used as the source of each tunnel interface.
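A minimal sketch of those loopbacks on vhost_member (the /32 masks are an assumption; the addresses are the ones the SRC routers point to as their tunnel destinations, so each loopback must match the tunnel destination configured on the corresponding SRC router):

interface Loopback1
 ip address 6.6.6.10 255.255.255.255
!
interface Loopback2
 ip address 6.6.6.20 255.255.255.255
!
interface Loopback3
 ip address 6.6.6.30 255.255.255.255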

GRE Tunnels are formed between multicast source routers (SRC1, SRC2 and SRC3) and Last-hop PIM router “vhost_member”

First tunnel:

                       SRC1 router     vhost_member router
tunnel interface       Tunnel1         Tunnel1
ip                     1.1.10.1/24     1.1.10.6/24
tunnel source          fa0/0           Loopback1
tunnel destination     6.6.6.10        10.0.1.1
mode                   GRE             GRE

Second tunnel:

                       SRC2 router     vhost_member router
tunnel interface       Tunnel1         Tunnel2
ip                     1.1.20.2/24     1.1.20.6/24
tunnel source          fa0/0           Loopback2
tunnel destination     6.6.6.20        10.0.2.2
mode                   GRE             GRE

Third tunnel:

                       SRC3 router     vhost_member router
tunnel interface       Tunnel1         Tunnel3
ip                     1.1.30.3/24     1.1.30.6/24
tunnel source          fa0/0           Loopback3
tunnel destination     6.6.6.30        10.0.3.3
mode                   GRE             GRE

Picture2: Tunneling

SRC1 router :

interface Tunnel1

ip address 1.1.10.1 255.255.255.0

tunnel source FastEthernet0/0

tunnel destination 6.6.6.10

tunnel mode gre ip

SRC2 router :

interface Tunnel1

ip address 1.1.20.2 255.255.255.0

tunnel source FastEthernet0/0

tunnel destination 6.6.6.20

tunnel mode gre ip

SRC3 router :

interface Tunnel1

ip address 1.1.30.3 255.255.255.0

tunnel source FastEthernet0/0

tunnel destination 6.6.6.30

tunnel mode gre ip

vhost_member (Multicast last hop):

interface Tunnel1

ip address 1.1.10.6 255.255.255.0

tunnel source Loopback1

tunnel destination 10.0.1.1

interface Tunnel2

ip address 1.1.20.6 255.255.255.0

tunnel source Loopback2

tunnel destination 10.0.2.2

!

interface Tunnel3

ip address 1.1.30.6 255.255.255.0

tunnel source Loopback3

tunnel destination 10.0.3.3

Multicast source routers

Enable pim ONLY on tunnel interfaces:

SRC3:

ip multicast-routing

!

interface Tunnel1

ip pim dense-mode

SRC2:

ip multicast-routing

!

interface Tunnel1

ip pim dense-mode

SRC1:

ip multicast-routing

!

interface Tunnel1

ip pim dense-mode

Router “vhost_member”:

ip multicast-routing

interface Tunnel1

ip pim dense-mode

ip igmp join-group 239.1.1.1

!

interface Tunnel2

ip pim dense-mode

ip igmp join-group 239.2.2.2

!

interface Tunnel3

ip pim dense-mode

ip igmp join-group 239.3.3.3

PIM Check:

Gmembers#sh ip pim interface

Address          Interface                Ver/   Nbr    Query  DR     DR

Mode   Count  Intvl  Prior

1.1.10.6         Tunnel1 v2/D   1      30     1      0.0.0.0

1.1.20.6         Tunnel2 v2/D   1      30     1      0.0.0.0

1.1.30.6         Tunnel3 v2/D   1      30     1      0.0.0.0

Gmembers#

The multicast sources are reachable through the tunnel interfaces (the unicast routing outgoing interfaces), which are the same interfaces on which the multicast is received:

Gmembers#sh ip route


Gateway of last resort is not set

1.0.0.0/8 is variably subnetted, 4 subnets, 2 masks

D       1.1.1.1/32 [90/156160] via 10.0.0.21, 00:37:18, FastEthernet0/0

C       1.1.10.0/24 is directly connected, Tunnel1

C       1.1.20.0/24 is directly connected, Tunnel2

C       1.1.30.0/24 is directly connected, Tunnel3


Gmembers#sh ip mroute

IP Multicast Routing Table


(1.1.10.1, 239.1.1.1), 00:02:29/00:02:58, flags: LT

Incoming interface: Tunnel1, RPF nbr 0.0.0.0

Outgoing interface list:

Tunnel2, Forward/Dense, 00:02:29/00:00:00

Tunnel3, Forward/Dense, 00:02:29/00:00:00


(1.1.20.2, 239.2.2.2), 00:02:11/00:02:58, flags: LT

Incoming interface: Tunnel2, RPF nbr 0.0.0.0

Outgoing interface list:

Tunnel1, Forward/Dense, 00:02:11/00:00:00

Tunnel3, Forward/Dense, 00:02:11/00:00:00


(1.1.30.3, 239.3.3.3), 00:01:54/00:02:59, flags: LT

Incoming interface: Tunnel3, RPF nbr 0.0.0.0

Outgoing interface list:

Tunnel1, Forward/Dense, 00:01:54/00:00:00

Tunnel2, Forward/Dense, 00:01:54/00:00:00

Gmembers#

Depending on the complexity of your topology, you may need to route some subnets statically or dynamically through the tunnel.
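For example, if a LAN behind SRC1 also had to be reachable from the member side through the first tunnel, a static route on vhost_member could look like this (172.20.1.0/24 is a hypothetical subnet, not part of this lab):

ip route 172.20.1.0 255.255.255.0 Tunnel1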

Configuration check:

First let’s start multicast advertisement:

SRC3#p 239.3.3.3 repeat 1000

Type escape sequence to abort.

Sending 1000, 100-byte ICMP Echos to 239.3.3.3, timeout is 2 seconds:

Reply to request 0 from 1.1.30.6, 84 ms

Reply to request 1 from 1.1.30.6, 84 ms

SRC2#ping 239.2.2.2 repeat 1000

Type escape sequence to abort.

Sending 1000, 100-byte ICMP Echos to 239.2.2.2, timeout is 2 seconds:

Reply to request 0 from 1.1.20.6, 128 ms

Reply to request 1 from 1.1.20.6, 104 ms

SRC1#p 239.1.1.1 repeat 1000

Type escape sequence to abort.

Sending 1000, 100-byte ICMP Echos to 239.1.1.1, timeout is 2 seconds:

Reply to request 0 from 1.1.10.6, 120 ms

Reply to request 1 from 1.1.10.6, 140 ms

From the multicast point of view (multicast source and members), traffic is forwarded through distinct point-to-point links (the tunnels).

Picture3: Logical multicast topology

Note that the multicast path is not aware of any topology outside the tunnels through which it is forwarded:

Gmembers#mtrace 1.1.10.1 1.1.10.6

Type escape sequence to abort.

Mtrace from 1.1.10.1 to 1.1.10.6 via RPF

From source (?) to destination (?)

Querying full reverse path…

0  1.1.10.6

-1  1.1.10.6 PIM  [1.1.10.0/24]

-2  1.1.10.1

Gmembers#

Gmembers#mtrace 1.1.20.2 1.1.20.6

Type escape sequence to abort.

Mtrace from 1.1.20.2 to 1.1.20.6 via RPF

From source (?) to destination (?)

Querying full reverse path…

0  1.1.20.6

-1  1.1.20.6 PIM  [1.1.20.0/24]

-2  1.1.20.2

Gmembers#

Gmembers#mtrace 1.1.30.6 1.1.30.3

Type escape sequence to abort.

Mtrace from 1.1.30.6 to 1.1.30.3 via RPF

From source (?) to destination (?)

Querying full reverse path…

0  1.1.30.3

-1  1.1.30.3 PIM  [1.1.30.0/24]

-2  1.1.30.6

Gmembers#

The following Picture4 illustrates how the intermediate routers (forming the parallel paths) see only unicast sessions:

Picture4: unicast sessions


Layer3 Load balancing

R10#sh ip route


6.0.0.0/32 is subnetted, 3 subnets

D       6.6.6.10 [90/158976] via 10.0.40.4, 02:13:11, Vlan104

[90/158976] via 10.0.30.3, 02:13:11, Vlan103

[90/158976] via 10.0.20.2, 02:13:11, Vlan102

D       6.6.6.20 [90/158976] via 10.0.40.4, 02:13:16, Vlan104

[90/158976] via 10.0.30.3, 02:13:16, Vlan103

[90/158976] via 10.0.20.2, 02:13:16, Vlan102

D       6.6.6.30 [90/158976] via 10.0.40.4, 02:13:17, Vlan104

[90/158976] via 10.0.30.3, 02:13:17, Vlan103

[90/158976] via 10.0.20.2, 02:13:17, Vlan102

At the CEF level (data plane), load is balanced according to the FIB (built from the RIB):

R10#

R10#

R10#sh ip cef 6.6.6.20 internal

6.6.6.20/32, version 57, epoch 0, per-destination sharing

0 packets, 0 bytes

via 10.0.40.4, Vlan104, 0 dependencies

traffic share 1

next hop 10.0.40.4, Vlan104

valid adjacency

via 10.0.30.3, Vlan103, 0 dependencies

traffic share 1

next hop 10.0.30.3, Vlan103

valid adjacency

via 10.0.20.2, Vlan102, 0 dependencies

traffic share 1

next hop 10.0.20.2, Vlan102

valid adjacency

0 packets, 0 bytes switched through the prefix

tmstats: external 0 packets, 0 bytes

internal 0 packets, 0 bytes

Load distribution: 0 1 2 0 1 2 0 1 2 0 1 2 0 1 2 (refcount 1)

Hash  OK  Interface                 Address         Packets

1     Y   Vlan104                   10.0.40.4             0

2     Y   Vlan103                   10.0.30.3             0

3     Y   Vlan102                   10.0.20.2             0

4     Y   Vlan104                   10.0.40.4             0

5     Y   Vlan103                   10.0.30.3             0

6     Y   Vlan102                   10.0.20.2             0

7     Y   Vlan104                   10.0.40.4             0

8     Y   Vlan103                   10.0.30.3             0

9     Y   Vlan102                   10.0.20.2             0

10    Y   Vlan104                   10.0.40.4             0

11    Y   Vlan103                   10.0.30.3             0

12    Y   Vlan102                   10.0.20.2             0

13    Y   Vlan104                   10.0.40.4             0

14    Y   Vlan103                   10.0.30.3             0

15    Y   Vlan102                   10.0.20.2             0

R10#

So far everything seems to work as expected, but the two ramification routers, R10 and R1, show that the per-destination distribution is not exactly even (Picture5).

Observe the unicast path using traceroute:

Gmembers#trace 10.0.1.1 source 6.6.6.10

Type escape sequence to abort.

Tracing the route to 10.0.1.1

10.0.0.21 44 msec 64 msec 24 msec

10.0.0.41 16 msec 24 msec 20 msec

10.0.30.30 20 msec 32 msec 20 msec

10.0.1.1 64 msec *  72 msec

Gmembers#

Gmembers#trace 10.0.2.2 source 6.6.6.20

Type escape sequence to abort.

Tracing the route to 10.0.2.2

10.0.0.21 40 msec 32 msec 16 msec

10.0.0.41 8 msec 88 msec 20 msec

10.0.30.30 80 msec 48 msec 12 msec

10.0.2.2 140 msec *  68 msec

Gmembers#

Gmembers#trace 10.0.3.3 source 6.6.6.30

Type escape sequence to abort.

Tracing the route to 10.0.3.3

10.0.0.21 56 msec 32 msec 24 msec

10.0.0.37 32 msec 120 msec 16 msec

10.0.40.40 56 msec 16 msec 100 msec

10.0.3.3 48 msec *  56 msec

Gmembers#

Picture5: traffic distribution for the three sessions


R10#sh ip cef exact-route 10.0.1.1 6.6.6.10 internal

10.0.1.1        -> 6.6.6.10       : Vlan102 (next hop 10.0.20.2)

Bucket 5 from 15, total 3 paths

R10#sh ip cef exact-route 10.0.2.2 6.6.6.20 internal

10.0.2.2        -> 6.6.6.20       : Vlan102 (next hop 10.0.20.2)

Bucket 5 from 15, total 3 paths

R10#sh ip cef exact-route 10.0.3.3 6.6.6.30 internal

10.0.3.3        -> 6.6.6.30       : Vlan104 (next hop 10.0.40.4)

Bucket 6 from 15, total 3 paths

R10#

R1#sh ip cef exact-route 6.6.6.10 10.0.1.1

6.6.6.10        -> 10.0.1.1       : FastEthernet0/0 (next hop 10.0.0.41)

R1#sh ip cef exact-route 6.6.6.20 10.0.2.2

6.6.6.20        -> 10.0.2.2       : FastEthernet0/0 (next hop 10.0.0.41)

R1#sh ip cef exact-route 6.6.6.30 10.0.3.3

6.6.6.30        -> 10.0.3.3       : FastEthernet2/0 (next hop 10.0.0.37)

R1#

According to the Cisco documentation, per-destination load balancing depends on the statistical distribution of traffic and is more appropriate for a large number of sessions.

Resisting the confirmation bias of simply accepting these results, I decided to conduct a series of tests and see where they would lead:

1) – Simulate a path failure with each of the three paths.

2) – Progressively increase the number of sessions.

Three paths are available for each destination prefix (used in tunneling)

Testing unicast CEF Load sharing:

1) – simulate a path failure with each of the three paths.

Normal situation with no failures:

R10#sh ip cef exact-route 10.0.1.1 6.6.6.10

10.0.1.1        -> 6.6.6.10       : Vlan102 (next hop 10.0.20.2)

R10#sh ip cef exact-route 10.0.2.2 6.6.6.20

10.0.2.2        -> 6.6.6.20       : Vlan102 (next hop 10.0.20.2)

R10#sh ip cef exact-route 10.0.3.3 6.6.6.30

10.0.3.3        -> 6.6.6.30       : Vlan104 (next hop 10.0.40.4)

R10#

Picture6: NO failure


Situation with R3 failure:

R10#sh ip cef exact-route 10.0.1.1 6.6.6.10

10.0.1.1        -> 6.6.6.10       : Vlan104 (next hop 10.0.40.4)

R10#sh ip cef exact-route 10.0.2.2 6.6.6.20

10.0.2.2        -> 6.6.6.20       : Vlan104 (next hop 10.0.40.4)

R10#sh ip cef exact-route 10.0.3.3 6.6.6.30

10.0.3.3        -> 6.6.6.30       : Vlan102 (next hop 10.0.20.2)

R10#

Picture7: failure of R3 path


Situation with R2 failure:

R10#sh ip cef exact-route 10.0.1.1 6.6.6.10

10.0.1.1        -> 6.6.6.10       : Vlan104 (next hop 10.0.40.4)

R10#sh ip cef exact-route 10.0.2.2 6.6.6.20

10.0.2.2        -> 6.6.6.20       : Vlan104 (next hop 10.0.40.4)

R10#sh ip cef exact-route 10.0.3.3 6.6.6.30

10.0.3.3        -> 6.6.6.30       : Vlan103 (next hop 10.0.30.3)

R10#

Picture8: failure of R2 path


Situation with R4 failure:

R10#sh ip cef exact-route 10.0.1.1 6.6.6.10

10.0.1.1        -> 6.6.6.10       : Vlan103 (next hop 10.0.30.3)

R10#sh ip cef exact-route 10.0.2.2 6.6.6.20

10.0.2.2        -> 6.6.6.20       : Vlan103 (next hop 10.0.30.3)

R10#sh ip cef exact-route 10.0.3.3 6.6.6.30

10.0.3.3        -> 6.6.6.30       : Vlan102 (next hop 10.0.20.2)

R10#

Picture9: failure of R4 path


2) – Increasing the number of sessions:

With 4 sessions:

R10#sh ip cef exact-route 10.0.1.1 6.6.6.10

10.0.1.1        -> 6.6.6.10       : Vlan102 (next hop 10.0.20.2)

R10#sh ip cef exact-route 10.0.2.2 6.6.6.20

10.0.2.2        -> 6.6.6.20       : Vlan102 (next hop 10.0.20.2)

R10#sh ip cef exact-route 10.0.3.3 6.6.6.30

10.0.3.3        -> 6.6.6.30       : Vlan104 (next hop 10.0.40.4)

R10#sh ip cef exact-route 10.0.2.2 6.6.6.30

10.0.2.2        -> 6.6.6.30       : Vlan102 (next hop 10.0.20.2)

R10#

Picture10: Distribution with  a 4th session between 10.0.2.2 and 6.6.6.30


With 5 sessions:

R10#sh ip cef exact-route 10.0.1.1 6.6.6.10

10.0.1.1        -> 6.6.6.10       : Vlan102 (next hop 10.0.20.2)

R10#sh ip cef exact-route 10.0.2.2 6.6.6.20

10.0.2.2        -> 6.6.6.20       : Vlan102 (next hop 10.0.20.2)

R10#sh ip cef exact-route 10.0.3.3 6.6.6.30

10.0.3.3        -> 6.6.6.30       : Vlan104 (next hop 10.0.40.4)

R10#sh ip cef exact-route 10.0.2.2 6.6.6.30

10.0.2.2        -> 6.6.6.30       : Vlan102 (next hop 10.0.20.2)

R10#sh ip cef exact-route 10.0.3.3 6.6.6.10

10.0.3.3        -> 6.6.6.10       : Vlan103 (next hop 10.0.30.3)

R10#

Picture11: Distribution with a 5th session between 10.0.3.3 and 6.6.6.10


With 6 sessions:

R10#sh ip cef exact-route 10.0.1.1 6.6.6.10

10.0.1.1        -> 6.6.6.10       : Vlan102 (next hop 10.0.20.2)

R10#sh ip cef exact-route 10.0.2.2 6.6.6.20

10.0.2.2        -> 6.6.6.20       : Vlan102 (next hop 10.0.20.2)

R10#sh ip cef exact-route 10.0.3.3 6.6.6.30

10.0.3.3        -> 6.6.6.30       : Vlan104 (next hop 10.0.40.4)

R10#sh ip cef exact-route 10.0.2.2 6.6.6.30

10.0.2.2        -> 6.6.6.30       : Vlan102 (next hop 10.0.20.2)

R10#sh ip cef exact-route 10.0.3.3 6.6.6.10

10.0.3.3        -> 6.6.6.10       : Vlan103 (next hop 10.0.30.3)

R10#sh ip cef exact-route 10.0.1.1 6.6.6.20

10.0.1.1        -> 6.6.6.20       : Vlan103 (next hop 10.0.30.3)

R10#

Picture12: Distribution with a 6th session between 10.0.1.1 and 6.6.6.20

With 7 sessions:

R10#sh ip cef exact-route 10.0.1.1 6.6.6.10

10.0.1.1        -> 6.6.6.10       : Vlan102 (next hop 10.0.20.2)

R10#sh ip cef exact-route 10.0.2.2 6.6.6.20

10.0.2.2        -> 6.6.6.20       : Vlan102 (next hop 10.0.20.2)

R10#sh ip cef exact-route 10.0.3.3 6.6.6.30

10.0.3.3        -> 6.6.6.30       : Vlan104 (next hop 10.0.40.4)

R10#sh ip cef exact-route 10.0.2.2 6.6.6.30

10.0.2.2        -> 6.6.6.30       : Vlan102 (next hop 10.0.20.2)

R10#sh ip cef exact-route 10.0.3.3 6.6.6.10

10.0.3.3        -> 6.6.6.10       : Vlan103 (next hop 10.0.30.3)

R10#sh ip cef exact-route 10.0.1.1 6.6.6.20

10.0.1.1        -> 6.6.6.20       : Vlan103 (next hop 10.0.30.3)

R10#sh ip cef exact-route 10.0.1.1 6.6.6.30

10.0.1.1        -> 6.6.6.30       : Vlan104 (next hop 10.0.40.4)

R10#

Picture13: Distribution with a 7th session between 10.0.1.1 and 6.6.6.30


Conclusion

The results of the tests confirm that for destination-based CEF load sharing, the more sessions there are, the better the load distribution.

MSDP and Inter-domain multicast


So far we have seen that PIM (Protocol Independent Multicast) can perfectly satisfy the need for multicast forwarding inside a single domain or autonomous system. This is not the case for multicast applications intended to provide services beyond the boundary of a single autonomous system; here comes the role of protocols such as MSDP (Multicast Source Discovery Protocol).

With PIM-SM, a typical multicast framework inside a single domain is composed of one or more Rendezvous Points (RPs), multicast sources and multicast receivers. Now imagine a company specialized in Video-on-Demand content with receivers across the Internet, deploying such a multicast framework inside each AS to be as close as possible to the receivers. Suppose the multicast sources in one AS are no longer available; we know that the RP is responsible for registering sources and linking them to receivers, so what if an RP in one AS could communicate with the RP in another AS and learn about its sources?

Well, that is exactly what MSDP is intended for: making multicast sources available to receivers in other ASes by exchanging source information between RPs in different autonomous systems.

In this lab, two simplified autonomous systems are considered, AS27011 and AS27022, each with an RP and a multicast source serving the same group (Figure1).

Figure1: Topology

The scenario is as follows:

  • First, R22, the multicast source, sends multicast traffic for group 224.1.1.1 to R2, with RP2 as the rendezvous point inside AS27022.
  • Second, R22 stops sending the multicast traffic and R1 in AS27011 starts sending to the same multicast group 224.1.1.1.

To successfully deploy MSDP (or any other technology or protocol), it is crucial to split the work into several steps and make sure that each step works perfectly; this dramatically reduces the time that would otherwise be spent troubleshooting issues accumulated at each layer.

  • Basic connectivity & reachability
  • Routing protocol: BGP configuration
  • multicast configuration
  • MSDP configuration
  1. Basic connectivity & reachability

Let’s start by configuring the IGP (EIGRP) to ensure connectivity between devices:

R1:

router eigrp 10
network 192.168.111.0

no auto-summary

RP1:

router eigrp 10

passive-interface Ethernet0/1

network 1.1.1.3 0.0.0.0

network 172.16.0.1 0.0.0.0

network 192.168.111.0

network 192.168.221.0

no auto-summary

RP2:

router eigrp 10

passive-interface Ethernet0/1

network 1.1.1.2 0.0.0.0

network 172.16.0.2 0.0.0.0

network 192.168.221.0

network 192.168.222.0

network 192.168.223.0

no auto-summary

IGP routing information should not leak between ASes, hence the need to set the inter-AS interfaces as passive, so that only networks carried by BGP are reachable between AS27022 and AS27011.

R22:

router eigrp 10
network 22.0.0.0

network 192.168.223.0

no auto-summary

R2:

router eigrp 10
network 192.168.222.0

no auto-summary

  2. Routing protocol: BGP configuration

Do not forget to advertise multicast end-point subnets through BGP (10.10.10.0/24, 22.0.0.0/24 and 192.168.40.0/24) so they can be reachable between the two autonomous systems.

R1:

router bgp 27011
no synchronization

bgp log-neighbor-changes

network 10.10.10.0 mask 255.255.255.0

neighbor 192.168.111.11 remote-as 27011

no auto-summary

RP1:

router bgp 27011
no synchronization

bgp log-neighbor-changes

neighbor 1.1.1.2 remote-as 27022

neighbor 1.1.1.2 ebgp-multihop 2

neighbor 1.1.1.2 update-source Loopback0

neighbor 192.168.111.1 remote-as 27011

no auto-summary

ip route 1.1.1.2 255.255.255.255 192.168.221.22

RP2:

router bgp 27022
no synchronization

bgp log-neighbor-changes

neighbor 1.1.1.3 remote-as 27011

neighbor 1.1.1.3 ebgp-multihop 2

neighbor 1.1.1.3 update-source Loopback0

neighbor 192.168.222.2 remote-as 27022

neighbor 192.168.222.2 route-reflector-client

neighbor 192.168.223.33 remote-as 27022

neighbor 192.168.223.33 route-reflector-client

no auto-summary

ip route 1.1.1.3 255.255.255.255 192.168.221.11

eBGP is configured between loopback interfaces to match the MSDP peer relationship; therefore static routes are added on both sides to reach those loopback interfaces.

R22:

router bgp 27022
no synchronization

network 22.0.0.0 mask 255.255.255.0

neighbor 192.168.223.22 remote-as 27022

no auto-summary

R2:

router bgp 27022
no synchronization

network 192.168.40.0

neighbor 192.168.222.22 remote-as 27022

no auto-summary

There are three methods to configure iBGP in AS 27022:

– enable BGP only on the border router RP2 and redistribute needed subnets into BGP (not straightforward).

– configure full-mesh iBGP (not consistent with the physical topology, which is linear).

– configure RP2 as a route reflector.

Let’s retain the last option, the most optimal in the current situation. Whenever this option is possible, you had better start by considering it, to allow more flexibility for the future growth of your network; otherwise, when things become more complicated, you will have to reconfigure BGP from scratch to use a route reflector.

Monitoring:

R2:

R2#sh ip bgp
BGP table version is 10, local router ID is 1.1.1.4

Status codes: s suppressed, d damped, h history, * valid, > best, i – internal,

r RIB-failure, S Stale

Origin codes: i – IGP, e – EGP, ? – incomplete

Network Next Hop Metric LocPrf Weight Path

*>i10.10.10.0/24 192.168.221.11 0 100 0 27011 i

*>i22.0.0.0/24 192.168.223.33 0 100 0 i

*> 192.168.40.0 0.0.0.0 0 32768 i

R2#

R22:

R22(config-router)#do sh ip bgp
BGP table version is 8, local router ID is 22.0.0.1

Status codes: s suppressed, d damped, h history, * valid, > best, i – internal,

r RIB-failure, S Stale

Origin codes: i – IGP, e – EGP, ? – incomplete

Network Next Hop Metric LocPrf Weight Path

*>i10.10.10.0/24 192.168.221.11 0 100 0 27011 i

*> 22.0.0.0/24 0.0.0.0 0 32768 i

*>i192.168.40.0 192.168.222.2 0 100 0 i

R22(config-router)#

R1:

R1#sh ip bgp
BGP table version is 10, local router ID is 192.168.111.1

Status codes: s suppressed, d damped, h history, * valid, > best, i – internal,

r RIB-failure, S Stale

Origin codes: i – IGP, e – EGP, ? – incomplete

Network Next Hop Metric LocPrf Weight Path

*> 10.10.10.0/24 0.0.0.0 0 32768 i

*>i22.0.0.0/24 192.168.221.22 0 100 0 27022 i

*>i192.168.40.0 192.168.221.22 0 100 0 27022 i

R1#

  3. Multicast configuration

R1:

ip multicast-routing
interface Ethernet0/0

ip pim sparse-dense-mode

interface Serial1/0

ip pim sparse-dense-mode

RP1:

ip multicast-routing
interface Loopback1

ip pim sparse-dense-mode

interface Ethernet0/1

ip pim sparse-dense-mode

ip pim send-rp-announce Loopback1 scope 64

ip pim send-rp-discovery scope 64

The RP1 router is configured as the RP and mapping agent for AS27011.

RP2:

interface Loopback0
ip pim sparse-dense-mode

interface Ethernet0/1

ip pim sparse-dense-mode

interface Ethernet0/2

ip pim sparse-dense-mode

ip pim send-rp-announce Loopback1 scope 64

ip pim send-rp-discovery scope 64

The RP2 router is configured as the RP and mapping agent for AS27022.

R2:

interface Ethernet0/0
ip pim sparse-dense-mode


ip igmp join-group 224.1.1.1

interface Serial1/0

ip pim sparse-dense-mode

Auto-RP is chosen to advertise RP information through all interfaces where PIM sparse-dense mode is enabled… including through the link between the autonomous systems. This means that group-to-RP mapping information is advertised to the PIM routers of the other AS, which can lead to confusion: a PIM router in one AS receives information from its local RP claiming responsibility for a range of groups, as well as announcements from the external RP:

R2#
*Mar 1 02:58:01.315: Auto-RP(0): Received RP-discovery, from 192.168.221.11, RP_cnt 1, ht 181

*Mar 1 02:58:01.319: Auto-RP(0): Update (224.0.0.0/4, RP:172.16.0.1), PIMv2 v1

R2#

RP2(config-if)#

*Mar 1 02:44:05.615: %PIM-6-INVALID_RP_JOIN: Received (*, 224.1.1.1) Join from 0.0.0.0 for invalid RP 172.16.0.1

RP2(config-if)#

This is not the kind of cooperation intended by MSDP: MSDP allows an RP in one AS to learn about multicast sources in another AS, while each RP remains responsible for multicast forwarding inside its own AS.

The solution is to block the service groups 224.0.1.39 and 224.0.1.40 between the two ASes using multicast boundary filtering:

RP1:

access-list 10 deny 224.0.1.39
access-list 10 deny 224.0.1.40
access-list 10 permit any
!
interface Ethernet0/1
 ip multicast boundary 10

RP2:

access-list 10 deny 224.0.1.39
access-list 10 deny 224.0.1.40
access-list 10 permit any
!
interface Ethernet0/1
 ip multicast boundary 10

Multicast monitoring inside AS:

RP2(config)#
*Mar 1 03:26:17.731: Auto-RP(0): Build RP-Announce for 172.16.0.2, PIMv2/v1, ttl 64, ht 181

*Mar 1 03:26:17.735: Auto-RP(0): Build announce entry for (224.0.0.0/4)

*Mar 1 03:26:17.739: Auto-RP(0): Send RP-Announce packet on Ethernet0/2

*Mar 1 03:26:17.743: Auto-RP(0): Send RP-Announce packet on Serial1/0

*Mar 1 03:26:17.747: Auto-RP: Send RP-Announce packet on Loopback1

*Mar 1 03:26:17.747: Auto-RP(0): Received RP-announce, from 172.16.0.2, RP_cnt 1, ht 181

*Mar 1 03:26:17.751: Auto-RP(0): Added with (224.0.0.0/4, RP:172.16.0.2), PIMv2 v1

*Mar 1 03:26:17.783: Auto-RP(0): Build RP-Discovery packet

RP2(config)#

RP2(config)#

*Mar 1 03:26:17.783: Auto-RP: Build mapping (224.0.0.0/4, RP:172.16.0.2), PIMv2 v1,

*Mar 1 03:26:17.791: Auto-RP(0): Send RP-discovery packet on Ethernet0/2 (1 RP entries)

*Mar 1 03:26:17.799: Auto-RP(0): Send RP-discovery packet on Serial1/0 (1 RP entries)

RP2(config)#

RP2, the RP for AS27022, is properly announcing itself to the mapping agent (the same router), which in turn properly announces the RP to the local PIM routers R22 and R2:

R22:
R22#

*Mar 1 03:26:24.339: Auto-RP(0): Received RP-discovery, from 192.168.223.22, RP_cnt 1, ht 181

*Mar 1 03:26:24.347: Auto-RP(0): Added with (224.0.0.0/4, RP:172.16.0.2), PIMv2 v1

R22#

R22#sh ip pim rp mapp

PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4


RP 172.16.0.2 (?), v2v1

Info source: 192.168.223.22 (?), elected via Auto-RP

Uptime: 00:04:16, expires: 00:02:41

R22#

R2:
R2#

*Mar 1 03:27:14.175: Auto-RP(0): Received RP-discovery, from 192.168.222.22, RP_cnt 1, ht 181

*Mar 1 03:27:14.179: Auto-RP(0): Update (224.0.0.0/4, RP:172.16.0.2), PIMv2 v1

R2#

R2#sh ip pim rp mapp

PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4


RP 172.16.0.2 (?), v2v1

Info source: 192.168.222.22 (?), elected via Auto-RP

Uptime: 00:09:22, expires: 00:02:37

R2#

  4. MSDP configuration

RP1:

ip msdp peer 1.1.1.2 connect-source Loopback0

RP2:

ip msdp peer 1.1.1.3 connect-source Loopback0

Interface Loopback0 does not need PIM enabled on it.

The MSDP peer address has to match the eBGP peer address.

RP1:

RP1#sh ip msdp summ
MSDP Peer Status Summary

Peer Address AS State Uptime/ Reset SA Peer Name

Downtime Count Count

1.1.1.2 27022 Up 01:14:58 0 0 ?

RP1#

RP1(config)#do sh ip bgp summ

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd

1.1.1.2 4 27022 79 79 4 0 0 01:15:46 2

192.168.111.1 4 27011 80 80 4 0 0 01:16:03 1

RP1#

RP2:

RP2#sh ip msdp sum
MSDP Peer Status Summary

Peer Address AS State Uptime/ Reset SA Peer Name

Downtime Count Count

1.1.1.3 27011 Up 01:15:48 0 2 ?

RP2#

RP2(config)#

RP2(config)#do sh ip bgp sum

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd

1.1.1.3 4 27011 80 80 4 0 0 01:16:33 1

192.168.222.2 4 27022 81 82 4 0 0 01:16:43 1

192.168.223.33 4 27022 81 82 4 0 0 01:16:44 1

RP2#

R22#ping
Protocol [ip]:

Target IP address: 224.1.1.1

Repeat count [1]: 1000000

Datagram size [100]:

Timeout in seconds [2]:

Extended commands [n]: y

Interface [All]: Ethernet0/0

Time to live [255]:

Source address: Loopback2

Type of service [0]:

Set DF bit in IP header? [no]:

Validate reply data? [no]:

Data pattern [0xABCD]:

Loose, Strict, Record, Timestamp, Verbose[none]:

Sweep range of sizes [n]:

Type escape sequence to abort.

Sending 1000000, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:

Packet sent with a source address of 22.0.0.1

Reply to request 0 from 192.168.222.2, 216 ms

Reply to request 1 from 192.168.222.2, 200 ms

Reply to request 2 from 192.168.222.2, 152 ms

Reply to request 3 from 192.168.222.2, 136 ms

Reply to request 4 from 192.168.222.2, 200 ms

Reply to request 5 from 192.168.222.2, 216 ms

After generating multicast traffic from R22’s Loopback2 interface to the group 224.1.1.1, you can see from the result of the previous extended ping that R2 is responding to it.

RP2#sh ip mroute 224.1.1.1
IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 00:17:35/00:02:38, RP 172.16.0.2, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:


Serial1/0, Forward/Sparse-Dense, 00:17:35/00:02:38

(22.0.0.1, 224.1.1.1), 00:05:57/00:03:28, flags: T


Incoming interface: Ethernet0/2, RPF nbr 192.168.223.33

Outgoing interface list:


Serial1/0, Forward/Sparse-Dense, 00:05:57/00:03:28

RP2#

The RP in AS27022 has already built the shared tree toward the receiver (*, 224.1.1.1) and has registered the source tree with the sending router’s PIM DR (22.0.0.1, 224.1.1.1).

Note that the RPF interface used to reach the source 22.0.0.1 is Ethernet0/2.

Now let’s suppose that for some reason the source R22 stops sending to the multicast group 224.1.1.1, and in the neighboring AS27011 a source begins to send multicast traffic to the same group.

RP2#
*Mar 1 01:32:42.051: MSDP(0): Received 32-byte TCP segment from 1.1.1.3

*Mar 1 01:32:42.055: MSDP(0): Append 32 bytes to 0-byte msg 102 from 1.1.1.3, qs 1

RP2#

MSDP on RP2 has received an SA message from its MSDP peer RP1, informing it about RP1’s local sources, the group, and the originating RP, as shown in the following output:

RP2#sh ip msdp sa
MSDP Source-Active Cache – 2 entries

(10.10.10.3, 224.1.1.1), RP 172.16.0.1, BGP/AS 0, 00:26:31/00:05:46, Peer 1.1.1.3

(192.168.111.1, 224.1.1.1), RP 172.16.0.1, BGP/AS 0, 00:26:31/00:05:46, Peer 1.1.1.3

RP2#

RP2#sh ip mroute 224.1.1.1
IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 01:34:23/00:03:25, RP 172.16.0.2, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial1/0, Forward/Sparse-Dense, 01:34:23/00:03:25

(10.10.10.3, 224.1.1.1), 00:30:13/00:03:21, flags: MT


Incoming interface: Ethernet0/1, RPF nbr 192.168.221.11

Outgoing interface list:


Serial1/0, Forward/Sparse-Dense, 00:30:13/00:03:25

(192.168.111.1, 224.1.1.1), 00:30:13/00:03:21, flags:

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial1/0, Forward/Sparse-Dense, 00:30:13/00:03:24

RP2#

Note the new entry for the group 224.1.1.1, (10.10.10.3, 224.1.1.1), flagged with “M” (MSDP-created entry) and “T” (packets have been received on this SPT).

The incoming interface connects to the RPF neighbor RP1 (toward the source 10.10.10.3) and the outgoing interface sends the traffic to R2.
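This can be double-checked directly on RP2 with show ip rpf, which reports the interface and neighbor used for the RPF check toward the source; based on the mroute entry above, it should point to Ethernet0/1 and 192.168.221.11, i.e. RP1:

RP2#show ip rpf 10.10.10.3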

RP1#sh ip mroute 224.1.1.1
IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 00:34:29/stopped, RP 172.16.0.1, flags: SP

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list: Null

(10.10.10.3, 224.1.1.1), 00:31:27/00:02:25, flags: TA


Incoming interface: Serial1/0, RPF nbr 192.168.111.1

Outgoing interface list:


Ethernet0/1, Forward/Sparse-Dense, 00:30:31/00:02:50

(192.168.111.1, 224.1.1.1), 00:34:29/00:02:55, flags: PTA

Incoming interface: Serial1/0, RPF nbr 0.0.0.0

Outgoing interface list: Null

RP1#

RP1#

The entry (10.10.10.3, 224.1.1.1) has Serial1/0 as the incoming interface, connected to the RPF neighbor R1 (the router behind which the source sits), and forwards the traffic toward RP2 in AS27022.

The “T” flag means packets have been received on this entry, and the “A” flag means the RP (MSDP) considers this SPT a candidate for MSDP advertisement to the other AS.

Multicast over FR NBMA part4 – (multipoint GRE and DMVPN)


This is the fourth part of the document “Multicast over FR NBMA”; this lab focuses on deploying multicast over multipoint GRE and DMVPN.

The main advantage of GRE tunneling is its transport capability: non-IP, broadcast and multicast traffic can be encapsulated inside unicast GRE, which is easily carried over Layer 2 technologies such as Frame Relay and ATM.

Because HUB, SpokeA and SpokeB FR interfaces are in multipoint, we will use multipoint GRE.

Figure1 : lab topology


CONFIGURATION

mGRE configuration:

HUB:

interface Tunnel0
ip address 172.16.0.1 255.255.0.0
no ip redirects

!!PIM sparse-dense mode is enabled on the tunnel not on the physical interface


ip pim sparse-dense-mode

!! a shared key is used for tunnel authentication


ip nhrp authentication cisco

!! The HUB must send all multicast traffic to all spokes that have registered with it


ip nhrp map multicast dynamic

!! Enable NHRP on the interface, must be the same for all participants


ip nhrp network-id 1

!! Because the OSPF network type is broadcast, a DR will be elected, so the HUB is assigned the highest priority to make sure it becomes the DR


ip ospf network broadcast


ip ospf priority 10

!! With small hub-and-spoke networks it is possible to use static tunnels by pre-configuring the tunnel destination, but in that case the tunnel mode cannot be set to multipoint


tunnel source Serial0/0


tunnel mode gre multipoint

!! Set the tunnel identification key; it must be identical on all routers of the mGRE cloud (here it matches the NHRP network-id configured above)


tunnel key 1

FR configuration:

interface Serial0/0
ip address 192.168.100.1 255.255.255.0
encapsulation frame-relay

serial restart-delay 0

frame-relay map ip 192.168.100.2 101 broadcast

frame-relay map ip 192.168.100.3 103 broadcast

no frame-relay inverse-arp

Routing configuration:

router ospf 10
router-id 1.1.1.1
network 10.10.20.0 0.0.0.255 area 100


network 172.16.0.0 0.0.255.255 area 0

SpokeA:

mGRE configuration:

interface Tunnel0
ip address 172.16.0.2 255.255.0.0
ip nhrp authentication cisco

!!All multicast traffic will be forwarded to the NBMA next hop IP (HUB).

ip nhrp map multicast 192.168.100.1

!!All spokes know in advance the HUB NBMA and tunnel IP addresses which are static.

ip nhrp map 172.16.0.1 192.168.100.1

ip nhrp network-id 1

ip nhrp nhs 172.16.0.1

ip ospf network point-to-multipoint

tunnel source Serial0/0.201

tunnel destination 192.168.100.1

tunnel key 1

FR configuration:

interface Serial0/0
no ip address
encapsulation frame-relay

serial restart-delay 0

no frame-relay inverse-arp

 

interface Serial0/0.201 multipoint

ip address 192.168.100.2 255.255.255.0

frame-relay map ip 192.168.100.1 201 broadcast

Routing configuration:

router ospf 10
router-id 200.200.200.200
network 20.20.20.0 0.0.0.255 area 200

network 172.16.0.0 0.0.255.255 area 0

SpokeB:

mGRE configuration:

interface Tunnel0
ip address 172.16.0.3 255.255.0.0
no ip redirects

ip pim sparse-dense-mode

ip nhrp authentication cisco

ip nhrp map multicast 192.168.100.1

ip nhrp map 172.16.0.1 192.168.100.1

ip nhrp network-id 1

ip nhrp nhs 172.16.0.1

ip ospf network broadcast

ip ospf priority 0

tunnel source Serial0/0.301

tunnel mode gre multipoint

tunnel key 1

FR configuration:

interface Serial0/0
no ip address
encapsulation frame-relay

serial restart-delay 0

no frame-relay inverse-arp

 

interface Serial0/0.301 multipoint

ip address 192.168.100.3 255.255.255.0

frame-relay map ip 192.168.100.1 301 broadcast

Routing configuration:

router ospf 10
router-id 3.3.3.3

network 172.16.0.0 0.0.255.255 area 0

network 192.168.39.0 0.0.0.255 area 300

RP (SpokeBnet):

interface Loopback0
ip address 192.168.38.1 255.255.255.255

ip pim sparse-dense-mode

router ospf 10

network 192.168.38.1 0.0.0.0 area 300

ip pim send-rp-announce Loopback0 scope 32

Mapping Agent (HUBnet):

interface Loopback0
ip address 10.0.0.1 255.255.255.255

ip pim sparse-dense-mode

router ospf 10

network 10.0.0.1 0.0.0.0 area 100

ip pim send-rp-discovery Loopback0 scope 32

Here is the result:

HUB:

HUB# sh ip nhrp
172.16.0.2/32 via 172.16.0.2, Tunnel0 created 01:06:52, expire 01:34:23
Type: dynamic, Flags: authoritative unique registered

NBMA address: 192.168.100.2

172.16.0.3/32 via 172.16.0.3, Tunnel0 created 01:06:35, expire 01:34:10

Type: dynamic, Flags: authoritative unique registered

NBMA address: 192.168.100.3

HUB#

The HUB has dynamically learned the spokes’ NBMA addresses and the corresponding tunnel IP addresses.

HUB#sh ip route
Codes: C – connected, S – static, R – RIP, M – mobile, B – BGP
D – EIGRP, EX – EIGRP external, O – OSPF, IA – OSPF inter area

N1 – OSPF NSSA external type 1, N2 – OSPF NSSA external type 2

E1 – OSPF external type 1, E2 – OSPF external type 2

i – IS-IS, su – IS-IS summary, L1 – IS-IS level-1, L2 – IS-IS level-2

ia – IS-IS inter area, * – candidate default, U – per-user static route

o – ODR, P – periodic downloaded static route

 

Gateway of last resort is not set

 

1.0.0.0/32 is subnetted, 1 subnets

C 1.1.1.1 is directly connected, Loopback0

20.0.0.0/32 is subnetted, 1 subnets

O IA 20.20.20.20 [110/11112] via 172.16.0.2, 01:08:26, Tunnel0

O IA 192.168.40.0/24 [110/11113] via 172.16.0.3, 01:08:26, Tunnel0

C 172.16.0.0/16 is directly connected, Tunnel0

192.168.38.0/32 is subnetted, 1 subnets

O IA 192.168.38.1 [110/11113] via 172.16.0.3, 01:08:26, Tunnel0

O IA 192.168.39.0/24 [110/11112] via 172.16.0.3, 01:08:26, Tunnel0

10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks

O 10.0.0.2/32 [110/2] via 10.10.20.3, 01:09:06, FastEthernet1/0

O 10.10.10.0/24 [110/2] via 10.10.20.3, 01:09:06, FastEthernet1/0

O 10.0.0.1/32 [110/2] via 10.10.20.3, 01:09:06, FastEthernet1/0

C 10.10.20.0/24 is directly connected, FastEthernet1/0

C 192.168.100.0/24 is directly connected, Serial0/0

HUB#

The HUB has learned all the spokes’ local network prefixes; note that all learned routes point to tunnel IP addresses, because the routing protocol runs on top of the logical topology, not the physical one (Figure2).

Figure2 : Logical topology

HUB#sh ip pim neighbors
PIM Neighbor Table
Neighbor Interface Uptime/Expires Ver DR

Address Prio/Mode

172.16.0.3 Tunnel0 01:06:17/00:01:21 v2 1 / DR S

172.16.0.2 Tunnel0 01:06:03/00:01:40 v2 1 / S

10.10.20.3 FastEthernet1/0 01:07:24/00:01:15 v2 1 / DR S

HUB#

PIM neighbor relationships are established after enabling PIM-Sparse-dense mode on tunnel interfaces.

SpokeBnet#
*Mar 1 01:16:22.055: Auto-RP(0): Build RP-Announce for 192.168.38.1, PIMv2/v1, ttl 32, ht 181
*Mar 1 01:16:22.059: Auto-RP(0): Build announce entry for (224.0.0.0/4)

*Mar 1 01:16:22.063: Auto-RP(0): Send RP-Announce packet on FastEthernet0/0

*Mar 1 01:16:22.063: Auto-RP(0): Send RP-Announce packet on FastEthernet1/0

*Mar 1 01:16:22.067: Auto-RP: Send RP-Announce packet on Loopback0

SpokeBnet#

The RP (SpokeBnet) sends RP-Announce messages to all mapping agents listening to 224.0.1.39.

Hubnet#
*Mar 1 01:16:17.039: Auto-RP(0): Received RP-announce, from 192.168.38.1, RP_cnt 1, ht 181
*Mar 1 01:16:17.043: Auto-RP(0): Update (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

Hubnet#

*Mar 1 01:16:49.267: Auto-RP(0): Build RP-Discovery packet

*Mar 1 01:16:49.271: Auto-RP: Build mapping (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1,

*Mar 1 01:16:49.275: Auto-RP(0): Send RP-discovery packet on FastEthernet0/0 (1 RP entries)

*Mar 1 01:16:49.275: Auto-RP(0): Send RP-discovery packet on FastEthernet1/0 (1 RP entries)

Hubnet#

HUBnet, the mapping agent (MA) listening to 224.0.1.39, has received the RP-Announce from the RP (SpokeBnet), updated its records and sent an RP-Discovery to all PIM-SM routers at 224.0.1.40.

HUB#
*Mar 1 01:16:47.059: Auto-RP(0): Received RP-discovery, from 10.0.0.1, RP_cnt 1, ht 181
*Mar 1 01:16:47.063: Auto-RP(0): Update (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

HUB#

 

HUB#sh ip pim rp
Group: 239.255.1.1, RP: 192.168.38.1, v2, v1, uptime 01:11:49, expires 00:02:44
HUB#

The HUB, as an example, has received the RP-to-group mapping information from the mapping agent and now knows the RP IP address.

Now let’s take a look at the multicast routing table of the RP:

SpokeBnet#sh ip mroute
IP Multicast Routing Table
Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:39:00/stopped, RP 192.168.38.1, flags: SJC

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:38:22/00:02:25

 

(10.10.10.1, 239.255.1.1), 00:39:00/00:02:58, flags: T


Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1


Outgoing interface list:


FastEthernet1/0, Forward/Sparse-Dense, 00:38:22/00:02:25

 

(*, 224.0.1.39), 01:24:31/stopped, RP 0.0.0.0, flags: D

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:24:31/00:00:00

 

(192.168.38.1, 224.0.1.39), 01:24:31/00:02:28, flags: T

Incoming interface: Loopback0, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:24:31/00:00:00

 

(*, 224.0.1.40), 01:25:42/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:25:40/00:00:00

Loopback0, Forward/Sparse-Dense, 01:25:42/00:00:00

 

(10.0.0.1, 224.0.1.40), 01:23:39/00:02:51, flags: LT

Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1

Outgoing interface list:

Loopback0, Forward/Sparse-Dense, 01:23:39/00:00:00

 

SpokeBnet#

(*, 239.255.1.1) – The shared tree, rooted at the RP, used to push multicast traffic to receivers; the “J” flag indicates that traffic has switched from the RPT to the SPT.

(10.10.10.1, 239.255.1.1) – The SPT used to forward traffic from the source to the receiver; it receives traffic on Fa0/0 and forwards it out of Fa1/0.

(*, 224.0.1.39) and (*, 224.0.1.40) – the Auto-RP service groups; because PIM sparse-dense mode is used, traffic for these groups is forwarded to all PIM routers using dense mode, hence the “D” flag.
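As a side note, if you prefer to keep the interfaces in pure sparse mode, most IOS releases can flood only these two Auto-RP groups in dense mode thanks to the autorp listener feature; a minimal sketch of that alternative (not what is configured in this lab):

ip pim autorp listener
!
interface Tunnel0
 ip pim sparse-mode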

This way we have configured multicast over NBMA using mGRE, with no Layer 2 concerns and no restrictions.

By the way, we are just one step away from DMVPN 🙂 all we have to do is configure an IPsec VPN to protect our mGRE tunnel, so let’s do it!

!! IKE phase I parameters
crypto isakmp policy 1
!! 3des as the encryption algorithm

encryption 3des

!! authentication type: simple preshared keys

authentication pre-share

!! Diffie-Hellman group 2 for the exchange of the secret key

group 2

!! ISAKMP peers are not set statically because the HUB does not know them yet; they are learned dynamically by NHRP within mGRE

crypto isakmp key cisco address 0.0.0.0 0.0.0.0

crypto ipsec transform-set MyESP-3DES-SHA esp-3des esp-sha-hmac

mode transport

crypto ipsec profile My_profile

set transform-set MyESP-3DES-SHA

 

int tunnel 0

tunnel protection ipsec profile My_profile

 

HUB#sh crypto isakmp sa
dst src state conn-id slot status
192.168.100.1 192.168.100.2 QM_IDLE 2 0 ACTIVE

192.168.100.1 192.168.100.3 QM_IDLE 1 0 ACTIVE

 

HUB#

 

HUB#sh crypto ipsec sa
 interface: Tunnel0

Crypto map tag: Tunnel0-head-0, local addr 192.168.100.1

 

protected vrf: (none)

local ident (addr/mask/prot/port): (192.168.100.1/255.255.255.255/47/0)

remote ident (addr/mask/prot/port): (192.168.100.2/255.255.255.255/47/0)


current_peer 192.168.100.2 port 500

PERMIT, flags={origin_is_acl,}

#pkts encaps: 1248, #pkts encrypt: 1248, #pkts digest: 1248

#pkts decaps: 129, #pkts decrypt: 129, #pkts verify: 129

#pkts compressed: 0, #pkts decompressed: 0

#pkts not compressed: 0, #pkts compr. failed: 0

#pkts not decompressed: 0, #pkts decompress failed: 0

#send errors 52, #recv errors 0

 


local crypto endpt.: 192.168.100.1, remote crypto endpt.: 192.168.100.2

path mtu 1500, ip mtu 1500

current outbound spi: 0xCEFE3AC2(3472767682)

 

inbound esp sas:


spi: 0x852C8AE0(2234288864)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2003, flow_id: SW:3, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4448676/3482)

IV size: 8 bytes

replay detection support: Y


Status: ACTIVE

 

inbound ah sas:

 

inbound pcp sas:

 

outbound esp sas:


spi: 0xCEFE3AC2(3472767682)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2004, flow_id: SW:4, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4447841/3479)

IV size: 8 bytes

replay detection support: Y

Status: ACTIVE

 

outbound ah sas:

 

outbound pcp sas:

 

protected vrf: (none)

local ident (addr/mask/prot/port): (192.168.100.1/255.255.255.255/47/0)

remote ident (addr/mask/prot/port): (192.168.100.3/255.255.255.255/47/0)

current_peer 192.168.100.3 port 500

PERMIT, flags={origin_is_acl,}

#pkts encaps: 1309, #pkts encrypt: 1309, #pkts digest: 1309

#pkts decaps: 23, #pkts decrypt: 23, #pkts verify: 23

#pkts compressed: 0, #pkts decompressed: 0

#pkts not compressed: 0, #pkts compr. failed: 0

#pkts not decompressed: 0, #pkts decompress failed: 0

#send errors 26, #recv errors 0

 


local crypto endpt.: 192.168.100.1, remote crypto endpt.: 192.168.100.3

path mtu 1500, ip mtu 1500

current outbound spi: 0xD5D509D2(3587508690)

 

inbound esp sas:


spi: 0x4507681A(1158113306)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2001, flow_id: SW:1, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4588768/3477)

IV size: 8 bytes

replay detection support: Y


Status: ACTIVE

 

inbound ah sas:

 

inbound pcp sas:

 

outbound esp sas:


spi: 0xD5D509D2(3587508690)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2002, flow_id: SW:2, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4587889/3476)

IV size: 8 bytes

replay detection support: Y


Status: ACTIVE

 

outbound ah sas:

 

outbound pcp sas:

HUB#

ISAKMP and IPSec phases are successfully established and security associations are formed.

Multicast over DMVPN works perfectly! That’s it!

Multicast over FR NBMA part3 – (PIM-NBMA mode and Auto-RP)


In this third part of the document “Multicast over FR NBMA” we will see how, with an RP in one spoke network and the MA in the central site, we can overcome the issue of dense mode and of forwarding the service groups 224.0.1.39 and 224.0.1.40 from one spoke to another.

Such placement of the MA in the central site, acting as a proxy, is ideal to ensure communication between the RP and all spokes through the separate PVCs.

In this LAB the RP is configured in SpokeBnet (SpokeB site) and the mapping agent in Hubnet (central site).

Figure1: Lab topology


CONFIGURATION

!! The PIM mode on all interfaces of all routers is set to “sparse-dense”

ip pim sparse-dense-mode

Configure SpokeBnet to become an RP

SpokeBnet(config)#ip pim send-rp-announce Loopback0 scope 32

Configure Hubnet to become a mapping agent

Hubnet(config)#ip pim send-rp-discovery loo0 scope 32

And the result across the FR cloud is as follows:

SpokeBnet:

*Mar 1 01:02:03.083: Auto-RP(0): Build RP-Announce for 192.168.38.1, PIMv2/v1, ttl 32, ht 181

*Mar 1 01:02:03.087: Auto-RP(0): Build announce entry for (224.0.0.0/4)

*Mar 1 01:02:03.091: Auto-RP(0): Send RP-Announce packet on FastEthernet0/0

*Mar 1 01:02:03.091: Auto-RP(0): Send RP-Announce packet on FastEthernet1/0

The RP announces itself to all mapping agents in the network (the multicast group 224.0.1.39).

HUBnet:

Hubnet(config)#

*Mar 1 01:01:01.487: Auto-RP(0): Received RP-announce, from 192.168.38.1, RP_cnt 1, ht 181

*Mar 1 01:01:01.491: Auto-RP(0): Added with (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

*Mar 1 01:01:01.523: Auto-RP(0): Build RP-Discovery packet

*Mar 1 01:01:01.523: Auto-RP: Build mapping (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1,

*Mar 1 01:01:01.527: Auto-RP(0): Send RP-discovery packet on FastEthernet0/0 (1 RP entries)

*Mar 1 01:01:01.535: Auto-RP(0): Send RP-discovery packet on FastEthernet1/0 (1 RP entries)

*Mar 1 01:01:01.539: Auto-RP: Send RP-discovery packet on Loopback0 (1 RP entries)

Hubnet(config)#

The mapping agent has received the RP-Announce from the RP, updated its records and sent the group-to-RP information (in this case: the RP is responsible for all multicast groups) to the multicast destination 224.0.1.40 (all PIM routers).

SpokeA#

*Mar 1 01:01:53.379: Auto-RP(0): Received RP-discovery, from 10.0.0.1, RP_cnt 1, ht 181

*Mar 1 01:01:53.383: Auto-RP(0): Update (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

SpokeA#

 

SpokeA#sh ip pim rp

Group: 239.255.1.1, RP: 192.168.38.1, v2, v1, uptime 00:18:18, expires 00:02:32

SpokeA#

As an example, SpokeA, across the HUB and the FR cloud, has received the group-to-RP mapping information and updated its records.

SpokeBnet#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:35:35/00:03:25, RP 192.168.38.1, flags: SJC

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:06:59/00:02:39

FastEthernet0/0, Forward/Sparse-Dense, 00:35:35/00:03:25

 

(10.10.10.1, 239.255.1.1), 00:03:43/00:02:47, flags: T

Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:03:43/00:02:39

 

(*, 224.0.1.39), 00:41:36/stopped, RP 0.0.0.0, flags: D

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 00:41:36/00:00:00

 

(192.168.38.1, 224.0.1.39), 00:41:36/00:02:23, flags: T

Incoming interface: Loopback0, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 00:41:37/00:00:00

 

(*, 224.0.1.40), 01:03:55/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:03:50/00:00:00

FastEthernet1/0, Forward/Sparse-Dense, 01:03:55/00:00:00

 

(10.0.0.1, 224.0.1.40), 00:35:36/00:02:07, flags: LT

Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:35:36/00:00:00

 

SpokeBnet#

(*, 239.255.1.1) – This is the shared tree rooted at the RP, which is SpokeBnet itself (hence the Null incoming interface); the “J” flag indicates that the traffic was switched from the RPT to the SPT.

(10.10.10.1, 239.255.1.1) – This is the SPT in question, receiving the traffic through Fa0/0 and forwarding it out Fa1/0.

Both (*, 224.0.1.39) and (*, 224.0.1.40) are flagged with “D”, which means they are forwarded in dense mode; this is the bootstrap phase of Auto-RP and the reason why sparse-dense mode is used on the interfaces.
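An alternative, not used in this lab, is to keep all interfaces in pure sparse mode and let only the two Auto-RP groups be flooded in dense mode, using the following global command (a minimal sketch):

!! Floods 224.0.1.39 and 224.0.1.40 in dense mode while the
!! interfaces themselves run “ip pim sparse-mode”
ip pim autorp listener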

SpokeBnet#mtrace 10.10.10.1 192.168.40.104 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 192.168.40.104 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 192.168.40.104

-1 192.168.40.1 PIM Reached RP/Core [10.10.10.0/24]

-2 192.168.39.1 PIM [10.10.10.0/24]

-3 192.168.100.1 PIM [10.10.10.0/24]

-4 10.10.20.3 PIM [10.10.10.0/24]

SpokeBnet#

ClientB is receiving the multicast traffic as needed.

The following outputs show that the SpokeA networks also receive the multicast traffic 239.255.1.1 without any problem:

SpokeA# mtrace 10.10.10.1 20.20.20.20 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 20.20.20.20 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 20.20.20.20

-1 20.20.20.20 PIM [10.10.10.0/24]

-2 192.168.100.1 PIM [10.10.10.0/24]

-3 10.10.20.3 PIM [10.10.10.0/24]

-4 10.10.10.1

SpokeA#

 

SpokeA#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:59:33/stopped, RP 192.168.38.1, flags: SJCL

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse-Dense, 00:59:33/00:02:01

 

(10.10.10.1, 239.255.1.1), 00:03:23/00:02:59, flags: LJT

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse-Dense, 00:03:23/00:02:01

 

(*, 224.0.1.40), 00:59:33/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial0/0.201, Forward/Sparse-Dense, 00:59:33/00:00:00

 

(10.0.0.1, 224.0.1.40), 00:44:18/00:02:15, flags: PLTX

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list: Null

 

SpokeA#

Once again, Auto-RP in action: PIM has switched from the RPT (*, 239.255.1.1), flagged “J”, to the SPT (10.10.10.1, 239.255.1.1), flagged “JT”.

Multicast over FR NBMA part2 – (PIM NBMA mode and static RP)


This is the second part of the document “Multicast over FR NBMA”; this lab focuses on the deployment of PIM NBMA mode in a hub-and-spoke FR NBMA network with a static rendezvous point (RP) configuration.

Figure1 illustrates the lab topology used: the HUB is connected to the FR NBMA cloud through its main physical interface, SpokeA and SpokeB are connected through multipoint sub-interfaces, and the FR NBMA is a partial-mesh topology with two PVCs, one from each spoke to the HUB main physical interface.

Figure1 : Lab topology

I – SCENARIO I

PIM-Sparse mode is used with Static RP role played by the HUB.

1 – CONFIGURATION

HUB:

Frame Relay configuration:

interface Serial0/0

ip address 192.168.100.1 255.255.255.0

encapsulation frame-relay

!! Inverse ARP is disabled

no frame-relay inverse-arp

!! The OSPF network type has to be consistent with the Frame Relay !! topology; in this case it is hub-and-spoke, where traffic destined !! to a spoke is forwarded to the HUB and from there to the spoke.


ip ospf network point-to-multipoint

!! With inverse ARP disabled you have to configure static FR mappings; do !! not forget to enable “pseudo broadcasting” with the broadcast keyword at !! the end of the frame-relay map statements

frame-relay map ip 192.168.100.2 101 broadcast

frame-relay map ip 192.168.100.3 103 broadcast

Multicast configuration:

interface Loopback0

ip address 192.168.101.1 255.255.255.255

!! Enable Sparse mode on all interfaces

ip pim sparse-mode

interface Serial0/0

!! Enable PIM NBMA mode


ip pim nbma-mode

ip pim sparse-mode

!! Enable multicast routing; IOS will warn you if you try to use !! multicast commands while it is disabled

ip multicast-routing

ip pim rp-address 192.168.101.1

Routing configuration:

router ospf 10

network 10.10.10.0 0.0.0.255 area 100


network 192.168.100.0 0.0.0.255 area 0

SpokeA:

Frame Relay configuration:

interface Serial0/0

no ip address

encapsulation frame-relay

no frame-relay inverse-arp

interface Serial0/0.201 multipoint

ip address 192.168.100.2 255.255.255.0

frame-relay map ip 192.168.100.1 201 broadcast 

Multicast configuration:

interface Serial0/0.201 multipoint

ip pim nbma-mode

ip pim sparse-mode

ip pim rp-address 192.168.101.1

ip multicast-routing 

Routing protocol configuration:

router ospf 10

network 20.20.20.0 0.0.0.255 area 200

network 192.168.100.0 0.0.0.255 area 0

interface Serial0/0.201 multipoint

ip ospf network point-to-multipoint 

SpokeB:

Frame Relay configuration:

interface Serial0/0

no ip address

encapsulation frame-relay

serial restart-delay 0

no frame-relay inverse-arp

interface Serial0/0.301 multipoint

ip address 192.168.100.3 255.255.255.0

frame-relay map ip 192.168.100.1 301 broadcast 

Multicast configuration:

ip pim rp-address 192.168.101.1

ip multicast-routing

interface Serial0/0.301 multipoint

ip pim sparse-mode

ip pim nbma-mode 

Routing Protocol configuration:

router ospf 10

network 192.168.40.0 0.0.0.255 area 300

network 192.168.100.0 0.0.0.255 area 0

interface Serial0/0.301 multipoint

ip ospf network point-to-multipoint

 

2 – ANALYSIS

SpokeB:

SpokeB#sh ip route

Codes: C – connected, S – static, R – RIP, M – mobile, B – BGP

D – EIGRP, EX – EIGRP external, O – OSPF, IA – OSPF inter area

N1 – OSPF NSSA external type 1, N2 – OSPF NSSA external type 2

E1 – OSPF external type 1, E2 – OSPF external type 2

i – IS-IS, su – IS-IS summary, L1 – IS-IS level-1, L2 – IS-IS level-2

ia – IS-IS inter area, * – candidate default, U – per-user static route

o – ODR, P – periodic downloaded static route

 

Gateway of last resort is not set

 

20.0.0.0/32 is subnetted, 1 subnets

O IA 20.20.20.20 [110/129] via 192.168.100.1, 00:01:16, Serial0/0.301

C 192.168.40.0/24 is directly connected, FastEthernet1/0

10.0.0.0/8 is variably subnetted, 3 subnets, 2 masks

O IA 10.0.0.2/32 [110/66] via 192.168.100.1, 00:01:16, Serial0/0.301

O IA 10.10.10.0/24 [110/65] via 192.168.100.1, 00:01:16, Serial0/0.301

O IA 10.0.0.1/32 [110/66] via 192.168.100.1, 00:01:16, Serial0/0.301

192.168.102.0/32 is subnetted, 1 subnets

O 192.168.102.1 [110/65] via 192.168.100.1, 00:01:16, Serial0/0.301

192.168.100.0/24 is variably subnetted, 3 subnets, 2 masks

C 192.168.100.0/24 is directly connected, Serial0/0.301

O 192.168.100.1/32 [110/64] via 192.168.100.1, 00:01:16, Serial0/0.301

O 192.168.100.2/32 [110/128] via 192.168.100.1, 00:01:16, Serial0/0.301

192.168.101.0/32 is subnetted, 1 subnets

O 192.168.101.1 [110/65] via 192.168.100.1, 00:01:17, Serial0/0.301

SpokeB#

Connectivity is successful and SpokeB has received the correct routing information about the network.

SpokeB#sh ip pim rp

Group: 239.255.1.1, RP: 192.168.101.1, uptime 00:17:31, expires never

Group: 224.0.1.40, RP: 192.168.101.1, uptime 00:17:31, expires never

SpokeB# 

Because we use PIM-SM with a static RP, the group-to-RP mapping never expires.
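Note in passing that a dynamically learned RP (Auto-RP or BSR) would normally take precedence over the static one; if the static entry must always win, the “override” keyword can be appended. A hedged sketch, not needed in this lab:

!! Static RP that takes precedence over any dynamically learned mapping
ip pim rp-address 192.168.101.1 override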

SpokeB#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:46:09/stopped, RP 192.168.101.1, flags: SJC

Incoming interface: Serial0/0.301, RPF nbr 192.168.100.1

Outgoing interface list:

FastEthernet1/0, Forward/Sparse, 00:46:09/00:02:38

 

(10.10.10.1, 239.255.1.1), 00:22:22/00:02:50, flags: JT

Incoming interface: Serial0/0.301, RPF nbr 192.168.100.1

Outgoing interface list:

FastEthernet1/0, Forward/Sparse, 00:22:22/00:02:38

 

(*, 224.0.1.40), 00:18:10/00:02:24, RP 192.168.101.1, flags: SJCL

Incoming interface: Serial0/0.301, RPF nbr 192.168.100.1

Outgoing interface list:

Serial0/0.301, 192.168.100.3, Forward/Sparse, 00:14:36/00:00:21

 

SpokeB# 

 

SpokeB#mtrace 10.10.10.1 192.168.40.104 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 192.168.40.104 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 192.168.40.104

-1 192.168.40.1 PIM [10.10.10.0/24]

-2 192.168.100.1 PIM Reached RP/Core [10.10.10.0/24]

SpokeB# 

According to the two previous outputs, SpokeB (the PIM DR) has joined the shared tree (RPT) rooted at the RP/Core (the HUB), and the traffic arriving on the FR interface across this shared tree is forwarded out the LAN interface Fa1/0.

Because this is a static RP configuration, only one service group, 224.0.1.40 (which Cisco PIM routers join by default), shows up, and a shared tree rooted at the RP is built for it; the Auto-RP announce group 224.0.1.39 is absent.

HUB:

HUB#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:48:20/00:03:22, RP 192.168.101.1, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0


Outgoing interface list:


Serial0/0, 192.168.100.3, Forward/Sparse, 00:48:20/00:03:22


Serial0/0, 192.168.100.2, Forward/Sparse, 00:48:20/00:03:09

 

(10.10.10.1, 239.255.1.1), 00:25:33/00:03:26, flags: T

Incoming interface: FastEthernet1/0, RPF nbr 0.0.0.0


Outgoing interface list:


Serial0/0, 192.168.100.2, Forward/Sparse, 00:25:33/00:03:08


Serial0/0, 192.168.100.3, Forward/Sparse, 00:25:33/00:03:22

 

(10.10.10.3, 239.255.1.1), 00:00:22/00:02:37, flags: T

Incoming interface: FastEthernet1/0, RPF nbr 0.0.0.0


Outgoing interface list:


Serial0/0, 192.168.100.2, Forward/Sparse, 00:00:22/00:03:08


Serial0/0, 192.168.100.3, Forward/Sparse, 00:00:22/00:03:22

 

(192.168.100.1, 239.255.1.1), 00:00:23/00:02:36, flags:

Incoming interface: Serial0/0, RPF nbr 0.0.0.0


Outgoing interface list:


Serial0/0, 192.168.100.2, Forward/Sparse, 00:00:23/00:03:07


Serial0/0, 192.168.100.3, Forward/Sparse, 00:00:23/00:03:20

 

(*, 224.0.1.40), 00:32:02/00:03:09, RP 192.168.101.1, flags: SJCL

Incoming interface: Null, RPF nbr 0.0.0.0


Outgoing interface list:


Serial0/0, 192.168.100.2, Forward/Sparse, 00:32:00/00:03:09


Serial0/0, 192.168.100.3, Forward/Sparse, 00:32:02/00:02:58

 

HUB# 

Note that for every RPT and SPT the HUB considers two “next hops” out of the same Serial0/0 interface; this is the work of PIM NBMA mode, which makes PIM at layer 3 aware of the real layer 2 topology (not a broadcast segment but separate point-to-point PVCs), instead of relying only on pseudo-broadcast, which gives layer 3 the illusion of a multi-access network.

Because the shared tree (*, 239.255.1.1) is rooted locally (the HUB is the RP), the incoming interface is Null.

A source tree (10.10.10.1, 239.255.1.1) is also built between the source and the RP.

SpokeA:

SpokeA#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 01:07:17/stopped, RP 192.168.101.1, flags: SJCLF

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 01:07:17/00:02:39

 

(10.10.10.1, 239.255.1.1), 00:43:08/00:02:59, flags: LJT

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 00:43:08/00:02:39

 

(*, 224.0.1.40), 00:38:39/00:02:42, RP 192.168.101.1, flags: SJCL

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 00:38:39/00:02:42

 

SpokeA# 

 

SpokeA#mtrace 10.10.10.1 20.20.20.20 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 20.20.20.20 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 20.20.20.20

-1 20.20.20.20 PIM [10.10.10.0/24]

-2 192.168.100.1 PIM Reached RP/Core [10.10.10.0/24]

SpokeA# 

SpokeA has received the first packet through the RP, then switched to SPT (10.10.10.1, 239.255.1.1).

 

II – Scenario II

In this scenario SpokeB plays the role of the RP.

1 – CONFIGURATION

Multicast configuration:

The new static RP has to be configured on all routers; 192.168.103.1 is a loopback interface on SpokeB, advertised by the routing protocol and reachable from everywhere (see the sketch after the command below).

SpokeB:

ip pim rp-address 192.168.103.1
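The same command is applied on the HUB and on SpokeA, and the loopback carrying 192.168.103.1 is enabled for PIM and advertised in OSPF. A minimal sketch, assuming the address sits on a dedicated loopback whose interface number is hypothetical:

!! On SpokeB (loopback number hypothetical)
interface Loopback1
 ip address 192.168.103.1 255.255.255.255
 ip pim sparse-mode
!! On HUB and SpokeA
ip pim rp-address 192.168.103.1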

 

SpokeB(config)#do mtrace 10.10.10.1 192.168.40.104 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 192.168.40.104 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 192.168.40.104

-1 192.168.40.1 PIM Reached RP/Core [10.10.10.0/24]

-2 192.168.100.1 PIM [10.10.10.0/24]

SpokeB(config)# 

Now the multicast traffic received by the SpokeB client is forwarded through the new RP (SpokeB), which sits on the best path from the source to the receiver.

SpokeA:

SpokeA#mtrace 10.10.10.1 20.20.20.20 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 20.20.20.20 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 20.20.20.20

-1 20.20.20.20 PIM [10.10.10.0/24]

-2 192.168.100.1 PIM [10.10.10.0/24]

-3 10.10.10.1

SpokeA# 

 

SpokeA#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 01:24:27/stopped, RP 192.168.103.1, flags: SJCLF

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Loopback0, Forward/Sparse, 01:24:27/00:02:29

 

(10.10.10.1, 239.255.1.1), 01:00:18/00:02:59, flags: LJT

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 01:00:18/00:02:29

 

(*, 224.0.1.40), 00:06:18/00:02:37, RP 192.168.103.1, flags: SJCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial0/0.201, 192.168.100.1, Forward/Sparse, 00:06:02/00:00:37

Loopback0, Forward/Sparse, 00:06:18/00:02:37

 

SpokeA# 

SpokeA has again switched from the RPT to the SPT (10.10.10.1, 239.255.1.1); the (*, G) entry is flagged SJCLF (Sparse mode, Join SPT, Connected, Local, Register flag).

Now let’s see what happens if we disable the SPT switchover with the following command on SpokeA:

ip pim spt-threshold infinity
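Instead of “infinity”, the threshold can also be a bandwidth value in kbps above which the last-hop router switches to the SPT; a hedged sketch with a hypothetical value:

!! Join the SPT only for groups whose traffic exceeds 64 kbps (value hypothetical)
ip pim spt-threshold 64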

 

SpokeA(config)#do sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 01:44:47/00:02:59, RP 192.168.103.1, flags: SCLF

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 01:44:47/00:02:17

 

(*, 224.0.1.40), 00:26:39/00:02:11, RP 192.168.103.1, flags: SCL

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 00:26:39/00:02:11

 

SpokeA(config)# 

No more SPT! Only the shared tree (*, 239.255.1.1), rooted at the RP (SpokeB), remains.
