IPv6 multicast over IPv6 IPSec VTI


Since IPv4 IPSec doesn’t support multicast, we need to use GRE (unicast) to encapsulate the multicast traffic and then encrypt it. The consequence is more complexity and an additional level of encapsulation, hence lower performance.

One of the advantages of IPv6 is its support of IPSec authentication and encryption (AH, ESP) directly in the extension headers, which allows it to natively protect IPv6 multicast.

In this lab we will use IPv6 IPSec site-to-site protection over a VTI to natively support IPv6 multicast.

The configuration involves three topics: IPv6 routing, IPv6 IPSec and IPv6 multicast. Each process is built on top of the previous one, so before touching IPSec, make sure you have local connectivity on each segment of the network and complete reachability through IPv6 routing.

Next, you can move to IPv6 IPSec and adapt the routing configuration accordingly (through the VTI).

IPv6 multicast relies on a solid foundation of unicast reachability, so once routes are exchanged between the two sides through the secure connection you can start configuring IPv6 multicast (BSR, RP, client and server simulation).
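As a quick sanity check between stages, the following IOS commands verify each layer before moving to the next (a minimal sketch; the addresses are taken from this lab’s topology):

!! Stage 1: IPv6 unicast routing
R1#show ipv6 route ospf
R1#ping 2001:DB8::2

!! Stage 2: IPv6 IPSec (once the VTI is up)
R1#show crypto isakmp sa
R1#show crypto ipsec sa
R1#ping 2001:DB8:12::2

!! Stage 3: IPv6 multicast
R1#show ipv6 pim neighbor
R1#show ipv6 mroute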

Picture1: Lab topology


Lab outline

  • Routing
    • OSPFv3
    • EIGRP for IPv6
  • IPv6 IPSec
    • Using IPv6 IPSec VTI
    • Using OSPFv3 IPSec security feature
  • IPv6 Multicast
    • IPv6 PIM BSR
  • Offline lab
  • Troubleshooting cases
  • Performance testing

Routing

Note:
IPv6 routing relies on link-local addresses, so for troubleshooting purposes the link-local IPs are configured to mirror their respective global addresses, making them easily recognisable. This will be of tremendous help during troubleshooting; otherwise you will find yourself trying to decode the matrix : )
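For example, the link-local address can be set to mirror the global address like this (a sketch using this lab’s addressing on R1):

interface FastEthernet0/0
 ipv6 address 2001:DB8::1/64
 ipv6 address FE80::DB8:1 link-local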

OSPFv3

OSPFv3 needs an interface configured with an IPv4 address from which to derive its router ID, or the router ID must be set manually.

OSPFv3 offloads security to native IPv6 IPSec, so you can secure OSPFv3 communications selectively, on a per-interface or per-area basis.
Table1: OSPFv3 configuration

IPv6 routing processes need IPv4-format router IDs:
R2:
ipv6 router ospf 12
 router-id 2.2.2.2
R1:
ipv6 router ospf 12
 router-id 1.1.1.1

Announce the respective LAN interfaces:
R2:
interface FastEthernet0/1
 ipv6 ospf 12 area 22
R1:
interface FastEthernet0/1
 ipv6 ospf 12 area 11

Disable routing on the physical back-to-back connection to avoid RPF failure:
R2:
interface FastEthernet0/0
 ipv6 ospf 12 …
R1:
interface FastEthernet0/0
 ipv6 ospf 12 …

The IPv6 gateways exchange routes through the encrypted VTI:
R2:
interface Tunnel12
 ipv6 ospf network point-to-point
 ipv6 ospf 12 area 0
R1:
interface Tunnel12
 ipv6 ospf network point-to-point
 ipv6 ospf 12 area 0

Set the OSPF network type on loopback interfaces if you want to advertise masks other than /128:
R2:
interface Loopback0
 ipv6 ospf network point-to-point
 ipv6 ospf 12 area 0
R1:
interface Loopback0
 ipv6 ospf network point-to-point
 ipv6 ospf 12 area 0
Table2: EIGRP for IPv6 configuration

IPv6 routing processes need IPv4-format router IDs:
R2:
ipv6 router eigrp 12
 eigrp router-id 2.2.2.2
R1:
ipv6 router eigrp 12
 eigrp router-id 1.1.1.1

Announce the respective LAN interfaces:
R2:
interface FastEthernet0/1
 ipv6 eigrp 12
R1:
interface FastEthernet0/1
 ipv6 eigrp 12

Disable routing on the physical back-to-back connection to avoid RPF failure:
R2:
interface FastEthernet0/0
 ipv6 eigrp 12
R1:
interface FastEthernet0/0
 ipv6 eigrp 12

The IPv6 gateways exchange routes through the encrypted VTI:
R2:
interface Tunnel12
 ipv6 eigrp 12
R1:
interface Tunnel12
 ipv6 eigrp 12

Enable the EIGRP process:
R2:
ipv6 router eigrp 12
 no shutdown
R1:
ipv6 router eigrp 12
 no shutdown

In case you want to configure EIGRP for IPv6, note that:

– the process must be enabled with “no shutdown” inside EIGRP configuration mode;

– similarly to OSPFv3, an interface configured with an IPv4 address is needed for the router ID.
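Once the process is enabled, a quick way to confirm that it is running and exchanging routes (a short sketch; AS number 12 matches this lab):

R1#show ipv6 eigrp neighbors
R1#show ipv6 route eigrp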

IPv6 IPSec

  • Using IPv6 IPSec VTI
Table3: IPSec configuration

Set the ISAKMP authentication pre-shared key (keyring form):
R1:
crypto keyring keyring1
 pre-shared-key address ipv6 2001:DB8::2/128 key cisco
R2:
crypto keyring keyring1
 pre-shared-key address ipv6 2001:DB8::1/128 key cisco

…or its single-command equivalent:
R1:
crypto isakmp key cisco address ipv6 2001:DB8::2/128
R2:
crypto isakmp key cisco address ipv6 2001:DB8::1/128

ISAKMP policy:
R1:
crypto isakmp policy 10
 encr 3des
 hash md5
 authentication pre-share
 lifetime 3600
R2:
crypto isakmp policy 10
 encr 3des
 hash md5
 authentication pre-share
 lifetime 3600

Transform set: symmetric encryption and signed hash algorithms:
R1:
crypto ipsec transform-set 3des ah-sha-hmac esp-3des
R2:
crypto ipsec transform-set 3des ah-sha-hmac esp-3des

IPSec profile referencing the transform set:
R1:
crypto ipsec profile profile0
 set transform-set 3des
R2:
crypto ipsec profile profile0
 set transform-set 3des

Set the tunnel mode and bind the IPSec profile:
R1:
interface Tunnel12
 ipv6 address FE80::DB8:12:1 link-local
 ipv6 address 2001:DB8:12::1/64
 tunnel source FastEthernet0/0
 tunnel destination 2001:DB8::2
 tunnel mode ipsec ipv6
 tunnel protection ipsec profile profile0
R2:
interface Tunnel12
 ipv6 address FE80::DB8:12:2 link-local
 ipv6 address 2001:DB8:12::2/64
 tunnel source FastEthernet0/0
 tunnel destination 2001:DB8::1
 tunnel mode ipsec ipv6
 tunnel protection ipsec profile profile0

Make sure not to advertise the routes through the physical interface, to avoid RPF failures (when the source of the multicast traffic is reached through a different interface than the one given by the RIB):
R1:
interface FastEthernet0/0
 ipv6 address FE80::DB8:1 link-local
 ipv6 address 2001:DB8::1/64
 ipv6 enable
R2:
interface FastEthernet0/0
 ipv6 address FE80::DB8:2 link-local
 ipv6 address 2001:DB8::2/64
 ipv6 enable

Here is a capture of the (secured) traffic between the R1 and R2 gateways:

Picture2: Wireshark IPv6 IPSec traffic capture


What could go wrong?

– The encryption algorithms don’t match.

– The shared key doesn’t match.

– Wrong ISAKMP peers.

– An ACL in the path between the two gateways blocking the gateway IPs, ISAKMP (UDP port 500) or ESP (IP protocol 50).

– The IPSec profile is not assigned to the tunnel interface (tunnel protection ipsec profile <…>).

– The IPSec encryption and/or signed hash algorithms don’t match.
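To pin down which of these is the culprit, the standard IOS checks below help (a minimal sketch; enable the debugs with care outside a lab):

R1#show crypto isakmp sa
R1#show crypto ipsec sa
R1#debug crypto isakmp
R1#debug crypto ipsec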

  • Using OSPFv3 IPSec security feature

You can still use IPv6 IPSec to encrypt and authenticate only the OSPFv3 traffic, on a per-interface basis.

OSPFv3 will use the IPv6-enabled IP Security (IPsec) secure socket API.

R1

interface FastEthernet0/0
ipv6 ospf 12 area 0
ipv6 ospf encryption ipsec spi 256 esp 3des 123456789A123456789A123456789A123456789A12345678 md5 123456789A123456789A123456789A12

R2

interface FastEthernet0/0
ipv6 ospf 12 area 0
ipv6 ospf encryption ipsec spi 256 esp 3des 123456789A123456789A123456789A123456789A12345678 md5 123456789A123456789A123456789A12

Picture4: Wireshark traffic capture – OSPFv3 IPSec feature:


Note that only the OSPFv3 traffic is encrypted.
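To confirm that the secure socket API actually created the security associations for OSPFv3, the usual IPSec command applies, and the interface view should report the configured policy (a sketch; output details vary by IOS release):

R1#show crypto ipsec sa
R1#show ipv6 ospf interface FastEthernet0/0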

IPv6 Multicast

IPv6 PIM BSR

The RP (rendez-vous point) is the point where the multicast servers’ offer meets the members’ demand.

First-hop routers register directly connected multicast sources with the RP, and (S,G) source trees are built between them and the RP.

Candidate RPs announce themselves to the candidate BSRs, and the elected BSR announces this information to all PIM routers.

All PIM routers interested in a particular multicast group learn the candidate-RP addresses from the BSR and build (*,G) shared trees.

Table4: Multicast configuration

Enable multicast routing:
R1:
ipv6 multicast-routing
R2:
ipv6 multicast-routing

R1 announced as the BSR candidate:
R1:
ipv6 pim bsr candidate bsr 2001:DB8:10::1

R2 announced as the RP candidate:
R2:
ipv6 pim bsr candidate rp 2001:DB8:20::2

Route everything through the tunnel interface so that it gets encrypted:
R1:
ipv6 route ::/0 Tunnel12 FE80::DB8:12:2
R2:
ipv6 route ::/0 Tunnel12 FE80::DB8:12:1

For testing purposes, make one router join a multicast group and ping it from a LAN router on the other side, or opt for more fun by running VLC on one host to read a network stream and streaming a video from a host on the other side:
R2:
interface FastEthernet0/1
 ipv6 mld join-group ff0E::5AB

Make sure that:

  • At least one router is manually configured as a candidate RP.
  • At least one router is manually configured as a candidate BSR.
  • While the traffic is being multicast, all PIM routers know about the RP and the BSR.

– The (*,G) shared tree is spread across the PIM routers starting from the last-hop router (connected to the multicast members).

– The (S,G) source tree is established between the first-hop router (connected to the multicast server) and the RP.

– The idea behind IPv6 PIM BSR is the same as in IPv4; here is an animation explaining the process for IPv4.
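To verify the BSR machinery and the resulting multicast state (a sketch; the group ff0E::5AB and the addresses come from this lab’s Table4):

R1#show ipv6 pim bsr election
R1#show ipv6 pim bsr rp-cache
R1#show ipv6 pim group-map
R2#show ipv6 mroute
R1#ping ff0E::5AB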

Let’s check end-to-end multicast streaming:

Before moving on to troubleshooting, here is the offline lab with all the commands:

Troubleshooting

If something doesn’t work and you are stuck, isolate the area of work and inspect each process separately, step by step.

Check each step using “show …” commands, so that at each point you know what you are looking for and can spot what is wrong.

The “sh run” and script-comparison technique is limited by visual perception, which is illusory and far from reliable.

Common routing issues

– Make sure you have successful back-to-back connectivity everywhere.

– With EIGRP for IPv6 make sure the process is enabled.

– If routing neighbors are connected through NBMA network, make sure to enable pseudo broadcasting and manually set neighbor commands.

Common IPSec issues

– ISAKMP phase:

– Wrong peer

– Wrong shared password

– Non-matching ISAKMP policy

– IPSec phase:

– Non-matching IPSec profile

Common PIM issues

– If routing neighbors are connected through an NBMA network, make sure the C-RPs and C-BSRs are located on the main site.

– Issue with the client => no (*,G):

– MLD query issue with the last-hop router.

– The last-hop PIM router cannot build the shared tree.

– Issue with RP registration => no (S,G):

– Multicast server MLD issue with the first-hop router.

– The first-hop router cannot register with the RP.

– C-BSR issue: the candidate BSR doesn’t advertise the RP information to the PIM routers (BSRs collect all candidate RPs and announce them to all PIM routers, which choose the best RP for each group).

– C-RP issue: the candidate RPs don’t announce themselves to the C-BSRs (RPs announce to C-BSRs which multicast groups they are responsible for).

– RPF failure: the interface used to reach the multicast source through the RIB is not the interface the multicast traffic is received on (see the check below).
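A quick way to test the reverse path toward a source is the dedicated RPF check command (a sketch; the address is R1’s BSR candidate address from this lab):

R2#show ipv6 rpf 2001:DB8:10::1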

Picture5: RPF Failure


Table5: Troubleshooting cases

Case 1 – ISAKMP policy: encryption algorithm mismatch
Simulated wrong configuration:
crypto isakmp policy 10
 encr aes
Correct configuration:
crypto isakmp policy 10
 encr 3des

Case 2 – ISAKMP policy: hash algorithm mismatch
Simulated wrong configuration:
crypto isakmp policy 10
 hash sha
Correct configuration:
crypto isakmp policy 10
 hash md5

Case 3 – Wrong ISAKMP peer
Simulated wrong configuration:
crypto isakmp key cisco address ipv6 2001:DB8::3/128
Correct configuration:
crypto isakmp key cisco address ipv6 2001:DB8::2/128

Case 4 – Wrong ISAKMP key
Simulated wrong configuration:
crypto isakmp key cisco1 address ipv6 2001:DB8::2/128
Correct configuration:
crypto isakmp key cisco address ipv6 2001:DB8::2/128

Case 5 – Wrong tunnel destination
Simulated wrong configuration:
interface Tunnel12
 tunnel destination 2001:DB8::3
Correct configuration:
interface Tunnel12
 tunnel destination 2001:DB8::2

Case 6 – Wrong tunnel source
Simulated wrong configuration:
interface Tunnel12
 tunnel source FastEthernet0/1
Correct configuration:
interface Tunnel12
 tunnel source FastEthernet0/0

For more details about each case, refer to the offline lab below; you will find extensive coverage of all the important commands, along with debugs for each case:

Performance testing

Three cases are tested: multicast traffic between R1 and R2 is routed through:

– Physical interfaces (serial connection): MTU=1500 bytes

– IPv6 GRE: MTU=1456 bytes

– IPv6 IPSec VTI: MTU=1391 bytes

The following tests were performed using iperf in a GNS3 lab environment, so the results should be taken as relative rather than absolute.
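For reference, here is a sketch of how such a test might be run with iperf 2 between two hosts (the group ff0E::5AB is the one joined in this lab; exact flags vary between iperf versions):

# receiver: UDP server bound to the IPv6 multicast group (-V selects IPv6)
iperf -V -s -u -B ff0e::5ab -i 1

# sender: 5 Mbit/s UDP stream to the group, multicast hop limit 32
iperf -V -c ff0e::5ab -u -b 5M -T 32 -t 30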

Picture6: Iperf testing


References

http://www.faqs.org/rfcs/rfc6226.html

http://tools.ietf.org/html/rfc5059

http://www.cisco.com/en/US/docs/ios/ipv6/configuration/guide/ip6-multicast.html#wp1055997

https://supportforums.cisco.com/docs/DOC-27971

http://www.cisco.com/en/US/docs/ios/ipv6/configuration/guide/ip6-tunnel_external_docbase_0900e4b1805a3c71_4container_external_docbase_0900e4b181b83f78.html

http://www.cisco.com/web/learning/le21/le39/docs/TDW_112_Prezo.pdf

http://networklessons.com/multicast/ipv6-pim-mld-example/

http://www.gogo6.com/profiles/blogs/ietf-discusses-deprecating-ipv6-fragments

http://tools.ietf.org/html/draft-taylor-v6ops-fragdrop-01

https://datatracker.ietf.org/doc/draft-bonica-6man-frag-deprecate

http://blog.initialdraft.com/archives/1648/

IPv6 Embedded RP


This tutorial covers the IPv6 embedded-RP method: how it works and how it can optimise the deployment of IPv6 multicast.

Multicast mtrace command


This short post illustrates how the Cisco IOS “mtrace” command works.

Let’s consider the following topology:

Multicast topology with successful RPF check.

The multicast source is at 10.0.0.1, and a multicast member at 20.0.0.1 is requesting multicast content from it.

10.0.0.1 and 20.0.0.1 can reach each other ONLY through R5-R2-R4, which is also enabled for PIM.

According to http://www.cisco.com/en/US/docs/ios/12_3/ipmulti/command/reference/ip3_m1g.html#wp1068645:

Usage Guidelines

The trace request generated by the mtrace command is multicast to the multicast group to find the last hop router to the specified destination. The trace then follows the multicast path from destination to source by passing the mtrace request packet via unicast to each hop. Responses are unicast to the querying router by the first hop router to the source. This command allows you to isolate multicast routing failures.


Let’s execute the mtrace command from R1:

mtrace 10.0.0.1 20.0.0.1 239.1.1.1

This traces the multicast path from 10.0.0.1 (the multicast source) to 20.0.0.1 (the multicast member), then queries back toward the source starting from the last-hop router (directly connected to the member).

R1#mtrace 10.0.0.1 20.0.0.1 239.1.1.1
Type escape sequence to abort.
Mtrace from 10.0.0.1 to 20.0.0.1 via group 239.1.1.1
From source (?) to destination (?)
Querying full reverse path…
0  20.0.0.1  ————> where mtrace started tracing back toward the multicast traffic source 10.0.0.1

-1  192.168.24.5 PIM  [10.0.0.0/24]  ————> 1st hop toward the multicast traffic source 10.0.0.1

-2  192.168.24.6 PIM  [10.0.0.0/24]  ————> 2nd hop toward the multicast traffic source 10.0.0.1

-3  192.168.52.5 PIM  [10.0.0.0/24]  ————> 3rd hop toward the multicast traffic source 10.0.0.1

-4  192.168.15.6 PIM  [10.0.0.0/24]  ————> last hop to which the multicast traffic source is connected

-5  10.0.0.1 ————> multicast traffic source itself

R1#

R4->R2->R5->R1 is the unicast traffic path.

R1->R5->R2->R4 is the multicast traffic path.

This is an indication that the RPF check succeeded!

We can also use ping <mgroup>, which generates multicast traffic toward the multicast members; the replies confirm that the members are receiving the multicast traffic.

R1#ping 239.1.1.1 source 10.0.0.1 repeat 9999999
Type escape sequence to abort.
Sending 9999999, 100-byte ICMP Echos to 239.1.1.1, timeout is 2 seconds:
Packet sent with a source address of 10.0.0.1
Reply to request 0 from 192.168.24.5, 88 ms   ——-> the last-hop multicast router confirms reception

Reply to request 0 from 192.168.24.5, 92 ms   ——-> the last-hop multicast router confirms reception

Reply to request 1 from 192.168.24.5, 108 ms  ——-> the last-hop multicast router confirms reception

Reply to request 1 from 192.168.24.5, 108 ms  ——-> the last-hop multicast router confirms reception

Here is the result of mtrace in both cases.

Here are two ad-hoc explanatory videos illustrating the cases of a successful and a failed RPF check:

Case1 : unicast path = multicast path (successful RPF)

R1#mtrace 10.0.0.1 20.0.0.1 239.1.1.1
Type escape sequence to abort.
Mtrace from 10.0.0.1 to 20.0.0.1 via group 239.1.1.1
From source (?) to destination (?)
Querying full reverse path…

0  20.0.0.1

-1  192.168.24.5 PIM  [10.0.0.0/24]

-2  192.168.24.6 PIM  [10.0.0.0/24]

-3  192.168.52.5 PIM  [10.0.0.0/24]

-4  192.168.15.6 PIM  [10.0.0.0/24]

-5  10.0.0.1

R1#

Case2 : unicast path ≠ multicast path (RPF failure)

R1#mtrace 10.0.0.1 20.0.0.1 239.1.1.1
Type escape sequence to abort.
Mtrace from 10.0.0.1 to 20.0.0.1 via group 239.1.1.1
From source (?) to destination (?)

Querying full reverse path…

0  20.0.0.1

-1  192.168.24.5 None No route

R1#

The resulting trace reports all RPF successes along the path back toward the multicast source and stops where an RPF failure occurs.

MSDP and Inter-domain multicast


So far we have seen that PIM (Protocol Independent Multicast) perfectly satisfies the need for multicast forwarding inside a single domain or autonomous system. This is not the case for multicast applications intended to provide services beyond the boundary of a single autonomous system; here comes the role of protocols such as MSDP (Multicast Source Discovery Protocol).

With PIM-SM, a typical multicast framework inside a single domain is composed of one or more rendez-vous points, multicast sources and multicast receivers. Now imagine a company specialised in video-on-demand content with receivers across the Internet, running a typical multicast framework inside each AS to be as close as possible to the receivers, and suppose that the multicast sources in one AS are no longer available. We know that the RP is responsible for registering sources and linking them to receivers, so what if an RP in one AS could communicate with the RP in another AS and register with its sources?

Well, that’s exactly what MSDP is intended for: making multicast sources available to receivers in other ASs by communicating this information between RPs in different autonomous systems.

In this lab, two simplified autonomous systems are considered, AS27011 and AS27022, each with an RP and a multicast source serving the same group (Figure1).

Figure1: Topology

The scenario is as follows:

  • First, R22, the multicast source, sends traffic to the multicast group 224.1.1.1, received by R2, with RP2 as the rendez-vous point inside AS27022.
  • Second, R22 stops sending the multicast traffic, and R1 in AS27011 starts sending to the same multicast group 224.1.1.1.

To successfully deploy MSDP (or any other technology or protocol), it is crucial to split the work into several steps and make sure each step works perfectly; this dramatically reduces the time that would otherwise be spent troubleshooting issues accumulated across layers.

  • Basic connectivity & reachability
  • Routing protocol: BGP configuration
  • multicast configuration
  • MSDP configuration
  1. Basic connectivity & reachability

Let’s start by configuring the IGP (EIGRP) to ensure connectivity between the devices:

R1:

router eigrp 10
network 192.168.111.0

no auto-summary

RP1:

router eigrp 10

passive-interface Ethernet0/1

network 1.1.1.3 0.0.0.0

network 172.16.0.1 0.0.0.0

network 192.168.111.0

network 192.168.221.0

no auto-summary

RP2:

router eigrp 10

passive-interface Ethernet0/1

network 1.1.1.2 0.0.0.0

network 172.16.0.2 0.0.0.0

network 192.168.221.0

network 192.168.222.0

network 192.168.223.0

no auto-summary

IGP routing information should not leak between the ASs, hence the need to set the inter-AS interfaces as passive, so that only the networks carried by BGP are reachable between AS27022 and AS27011.

R22:

router eigrp 10
network 22.0.0.0

network 192.168.223.0

no auto-summary

R2:

router eigrp 10
network 192.168.222.0

no auto-summary

  2. Routing protocol: BGP configuration

Do not forget to advertise multicast end-point subnets through BGP (10.10.10.0/24, 22.0.0.0/24 and 192.168.40.0/24) so they can be reachable between the two autonomous systems.

R1:

router bgp 27011
no synchronization

bgp log-neighbor-changes

network 10.10.10.0 mask 255.255.255.0

neighbor 192.168.111.11 remote-as 27011

no auto-summary

RP1:

router bgp 27011
no synchronization

bgp log-neighbor-changes

neighbor 1.1.1.2 remote-as 27022

neighbor 1.1.1.2 ebgp-multihop 2

neighbor 1.1.1.2 update-source Loopback0

neighbor 192.168.111.1 remote-as 27011

no auto-summary

ip route 1.1.1.2 255.255.255.255 192.168.221.22

RP2:

router bgp 27022
no synchronization

bgp log-neighbor-changes

neighbor 1.1.1.3 remote-as 27011

neighbor 1.1.1.3 ebgp-multihop 2

neighbor 1.1.1.3 update-source Loopback0

neighbor 192.168.222.2 remote-as 27022

neighbor 192.168.222.2 route-reflector-client

neighbor 192.168.223.33 remote-as 27022

neighbor 192.168.223.33 route-reflector-client

no auto-summary

ip route 1.1.1.3 255.255.255.255 192.168.221.11

eBGP is configured between loopback interfaces to match the MSDP peer relationship; therefore, static routes are added on both sides to reach those loopback interfaces.
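Before bringing up BGP and MSDP, it is worth confirming that the loopbacks can reach each other through these static routes (a quick sketch):

RP1#ping 1.1.1.2 source Loopback0
RP2#ping 1.1.1.3 source Loopback0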

R22:

router bgp 27022
no synchronization

network 22.0.0.0 mask 255.255.255.0

neighbor 192.168.223.22 remote-as 27022

no auto-summary

R2:

router bgp 27022
no synchronization

network 192.168.40.0

neighbor 192.168.222.22 remote-as 27022

no auto-summary

There are three ways to configure iBGP in AS27022:

– enable BGP only on the border router RP2 and redistribute the needed subnets into BGP (not straightforward);

– configure full-mesh iBGP (not consistent with the physical topology, which is linear);

– configure RP2 as a route reflector.

Let’s retain the last option, the most suitable in the current situation. Whenever this option is possible, you had better consider it first, since it allows more flexibility for the future growth of your network; otherwise, when things become more complicated, you will have to reconfigure BGP from scratch to use a route reflector.

Monitoring:

R2:

R2#sh ip bgp
BGP table version is 10, local router ID is 1.1.1.4

Status codes: s suppressed, d damped, h history, * valid, > best, i – internal,

r RIB-failure, S Stale

Origin codes: i – IGP, e – EGP, ? – incomplete

Network Next Hop Metric LocPrf Weight Path

*>i10.10.10.0/24 192.168.221.11 0 100 0 27011 i

*>i22.0.0.0/24 192.168.223.33 0 100 0 i

*> 192.168.40.0 0.0.0.0 0 32768 i

R2#

R22:

R22(config-router)#do sh ip bgp
BGP table version is 8, local router ID is 22.0.0.1

Status codes: s suppressed, d damped, h history, * valid, > best, i – internal,

r RIB-failure, S Stale

Origin codes: i – IGP, e – EGP, ? – incomplete

Network Next Hop Metric LocPrf Weight Path

*>i10.10.10.0/24 192.168.221.11 0 100 0 27011 i

*> 22.0.0.0/24 0.0.0.0 0 32768 i

*>i192.168.40.0 192.168.222.2 0 100 0 i

R22(config-router)#

R1:

R1#sh ip bgp
BGP table version is 10, local router ID is 192.168.111.1

Status codes: s suppressed, d damped, h history, * valid, > best, i – internal,

r RIB-failure, S Stale

Origin codes: i – IGP, e – EGP, ? – incomplete

Network Next Hop Metric LocPrf Weight Path

*> 10.10.10.0/24 0.0.0.0 0 32768 i

*>i22.0.0.0/24 192.168.221.22 0 100 0 27022 i

*>i192.168.40.0 192.168.221.22 0 100 0 27022 i

R1#

  3. Multicast configuration

R1:

ip multicast-routing
interface Ethernet0/0

ip pim sparse-dense-mode

interface Serial1/0

ip pim sparse-dense-mode

RP1:

ip multicast-routing
interface Loopback1

ip pim sparse-dense-mode

interface Ethernet0/1

ip pim sparse-dense-mode

ip pim send-rp-announce Loopback1 scope 64

ip pim send-rp-discovery scope 64

RP1 router is configured as the RP and mapping agent for the AS27011

RP2:

interface Loopback1
ip pim sparse-dense-mode

interface Ethernet0/1

ip pim sparse-dense-mode

interface Ethernet0/2

ip pim sparse-dense-mode

ip pim send-rp-announce Loopback1 scope 64

ip pim send-rp-discovery scope 64

RP2 router is configured as the RP and mapping agent for the AS27022

R2:

interface Ethernet0/0
ip pim sparse-dense-mode


ip igmp join-group 224.1.1.1

interface Serial1/0

ip pim sparse-dense-mode

Auto-RP is chosen to advertise the RP information through all interfaces where PIM sparse-dense mode is enabled… including the link between the autonomous systems. This means that the group-to-RP mapping information is advertised to the other AS’s PIM routers, which can lead to confusion: a PIM router in one AS receives information from its local RP claiming responsibility for a number of groups, as well as announcements from the external RP:

R2#
*Mar 1 02:58:01.315: Auto-RP(0): Received RP-discovery, from 192.168.221.11, RP_cnt 1, ht 181

*Mar 1 02:58:01.319: Auto-RP(0): Update (224.0.0.0/4, RP:172.16.0.1), PIMv2 v1

R2#

RP2(config-if)#

*Mar 1 02:44:05.615: %PIM-6-INVALID_RP_JOIN: Received (*, 224.1.1.1) Join from 0.0.0.0 for invalid RP 172.16.0.1

RP2(config-if)#

This is not the kind of cooperation intended by MSDP: MSDP allows the RP in one AS to contact multicast sources in another AS, while each RP remains responsible for multicast forwarding inside its own AS.

The solution is to block service groups 224.0.1.39 and 224.0.1.40 between the two ASs using multicast boundary filtering:

RP1:

access-list 10 deny 224.0.1.39
access-list 10 deny 224.0.1.40
access-list 10 permit any

interface Ethernet0/1
 ip multicast boundary 10

RP2:

access-list 10 deny 224.0.1.39
access-list 10 deny 224.0.1.40
access-list 10 permit any

interface Ethernet0/1
 ip multicast boundary 10

Multicast monitoring inside AS:

RP2(config)#
*Mar 1 03:26:17.731: Auto-RP(0): Build RP-Announce for 172.16.0.2, PIMv2/v1, ttl 64, ht 181

*Mar 1 03:26:17.735: Auto-RP(0): Build announce entry for (224.0.0.0/4)

*Mar 1 03:26:17.739: Auto-RP(0): Send RP-Announce packet on Ethernet0/2

*Mar 1 03:26:17.743: Auto-RP(0): Send RP-Announce packet on Serial1/0

*Mar 1 03:26:17.747: Auto-RP: Send RP-Announce packet on Loopback1

*Mar 1 03:26:17.747: Auto-RP(0): Received RP-announce, from 172.16.0.2, RP_cnt 1, ht 181

*Mar 1 03:26:17.751: Auto-RP(0): Added with (224.0.0.0/4, RP:172.16.0.2), PIMv2 v1

*Mar 1 03:26:17.783: Auto-RP(0): Build RP-Discovery packet

RP2(config)#

RP2(config)#

*Mar 1 03:26:17.783: Auto-RP: Build mapping (224.0.0.0/4, RP:172.16.0.2), PIMv2 v1,

*Mar 1 03:26:17.791: Auto-RP(0): Send RP-discovery packet on Ethernet0/2 (1 RP entries)

*Mar 1 03:26:17.799: Auto-RP(0): Send RP-discovery packet on Serial1/0 (1 RP entries)

RP2(config)#

RP2, the RP for AS27022, is properly announcing itself to the mapping agent (the same router), which in turn properly announces the RP to the local PIM routers R22 and R2:

R22:
R22#

*Mar 1 03:26:24.339: Auto-RP(0): Received RP-discovery, from 192.168.223.22, RP_cnt 1, ht 181

*Mar 1 03:26:24.347: Auto-RP(0): Added with (224.0.0.0/4, RP:172.16.0.2), PIMv2 v1

R22#

R22#sh ip pim rp mapp

PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4


RP 172.16.0.2 (?), v2v1

Info source: 192.168.223.22 (?), elected via Auto-RP

Uptime: 00:04:16, expires: 00:02:41

R22#

R2:
R2#

*Mar 1 03:27:14.175: Auto-RP(0): Received RP-discovery, from 192.168.222.22, RP_cnt 1, ht 181

*Mar 1 03:27:14.179: Auto-RP(0): Update (224.0.0.0/4, RP:172.16.0.2), PIMv2 v1

R2#

R2#sh ip pim rp mapp

PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4


RP 172.16.0.2 (?), v2v1

Info source: 192.168.222.22 (?), elected via Auto-RP

Uptime: 00:09:22, expires: 00:02:37

R2#

  4. MSDP configuration

RP1:

ip msdp peer 1.1.1.2 connect-source Loopback0

RP2:

ip msdp peer 1.1.1.3 connect-source Loopback0

The interface Loopback0 doesn’t need PIM to be enabled on it.

The MSDP peer address has to match the eBGP peer address.

RP1:

RP1#sh ip msdp summ
MSDP Peer Status Summary
Peer Address  AS     State  Uptime/Downtime  Reset Count  SA Count  Peer Name
1.1.1.2       27022  Up     01:14:58         0            0         ?
RP1#

RP1(config)#do sh ip bgp summ

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd

1.1.1.2 4 27022 79 79 4 0 0 01:15:46 2

192.168.111.1 4 27011 80 80 4 0 0 01:16:03 1

RP1#

RP2:

RP2#sh ip msdp sum
MSDP Peer Status Summary
Peer Address  AS     State  Uptime/Downtime  Reset Count  SA Count  Peer Name
1.1.1.3       27011  Up     01:15:48         0            2         ?
RP2#

RP2(config)#

RP2(config)#do sh ip bgp sum

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd

1.1.1.3 4 27011 80 80 4 0 0 01:16:33 1

192.168.222.2 4 27022 81 82 4 0 0 01:16:43 1

192.168.223.33 4 27022 81 82 4 0 0 01:16:44 1

RP2#

R22#ping
Protocol [ip]:

Target IP address: 224.1.1.1

Repeat count [1]: 1000000

Datagram size [100]:

Timeout in seconds [2]:

Extended commands [n]: y

Interface [All]: Ethernet0/0

Time to live [255]:

Source address: Loopback2

Type of service [0]:

Set DF bit in IP header? [no]:

Validate reply data? [no]:

Data pattern [0xABCD]:

Loose, Strict, Record, Timestamp, Verbose[none]:

Sweep range of sizes [n]:

Type escape sequence to abort.

Sending 1000000, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:

Packet sent with a source address of 22.0.0.1

Reply to request 0 from 192.168.222.2, 216 ms

Reply to request 1 from 192.168.222.2, 200 ms

Reply to request 2 from 192.168.222.2, 152 ms

Reply to request 3 from 192.168.222.2, 136 ms

Reply to request 4 from 192.168.222.2, 200 ms

Reply to request 5 from 192.168.222.2, 216 ms

After generating multicast traffic from R22’s Loopback2 interface to the group 224.1.1.1, you can see from the result of the extended ping above that R2 is responding to it.

RP2#sh ip mroute 224.1.1.1
IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 00:17:35/00:02:38, RP 172.16.0.2, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:


Serial1/0, Forward/Sparse-Dense, 00:17:35/00:02:38

(22.0.0.1, 224.1.1.1), 00:05:57/00:03:28, flags: T


Incoming interface: Ethernet0/2, RPF nbr 192.168.223.33

Outgoing interface list:


Serial1/0, Forward/Sparse-Dense, 00:05:57/00:03:28

RP2#

The RP of AS27022 has already built the shared tree with the receiver (*, 224.1.1.1) and, through the register from the sending router’s PIM DR, the source tree (22.0.0.1, 224.1.1.1).

Note that the RPF interface used to reach the source 22.0.0.1 is Ethernet0/2.
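This can be confirmed directly with the RPF check command (a quick sketch):

RP2#show ip rpf 22.0.0.1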

Now let’s suppose that, for some reason, the source R22 stops sending to the multicast group 224.1.1.1, and that in the neighbouring AS27011 a source begins sending multicast traffic to the same group.

RP2#
*Mar 1 01:32:42.051: MSDP(0): Received 32-byte TCP segment from 1.1.1.3
*Mar 1 01:32:42.055: MSDP(0): Append 32 bytes to 0-byte msg 102 from 1.1.1.3, qs 1

RP2#

RP2’s MSDP process has received an SA message from its MSDP peer RP1, informing it about RP1’s local sources, the group, and the originating RP, as shown in the following output:

RP2#sh ip msdp sa
MSDP Source-Active Cache – 2 entries

(10.10.10.3, 224.1.1.1), RP 172.16.0.1, BGP/AS 0, 00:26:31/00:05:46, Peer 1.1.1.3

(192.168.111.1, 224.1.1.1), RP 172.16.0.1, BGP/AS 0, 00:26:31/00:05:46, Peer 1.1.1.3

RP2#

RP2#sh ip mroute 224.1.1.1
IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 01:34:23/00:03:25, RP 172.16.0.2, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial1/0, Forward/Sparse-Dense, 01:34:23/00:03:25

(10.10.10.3, 224.1.1.1), 00:30:13/00:03:21, flags: MT


Incoming interface: Ethernet0/1, RPF nbr 192.168.221.11

Outgoing interface list:


Serial1/0, Forward/Sparse-Dense, 00:30:13/00:03:25

(192.168.111.1, 224.1.1.1), 00:30:13/00:03:21, flags:

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial1/0, Forward/Sparse-Dense, 00:30:13/00:03:24

RP2#

Note the new entry for the group 224.1.1.1, (10.10.10.3, 224.1.1.1), flagged “M” for an MSDP-created entry and “T”, indicating that packets have been received on this SPT.

The incoming interface connects RP2 to its RPF neighbour (RP1) toward the source 10.10.10.3, and the outgoing interface sends the traffic to R2.

RP1#sh ip mroute 224.1.1.1
IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 00:34:29/stopped, RP 172.16.0.1, flags: SP

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list: Null

(10.10.10.3, 224.1.1.1), 00:31:27/00:02:25, flags: TA


Incoming interface: Serial1/0, RPF nbr 192.168.111.1

Outgoing interface list:


Ethernet0/1, Forward/Sparse-Dense, 00:30:31/00:02:50

(192.168.111.1, 224.1.1.1), 00:34:29/00:02:55, flags: PTA

Incoming interface: Serial1/0, RPF nbr 0.0.0.0

Outgoing interface list: Null

RP1#

RP1#

On RP1, the entry (10.10.10.3, 224.1.1.1) has Serial1/0 as its incoming interface, connected to the RPF neighbour toward the source R1, and forwards the traffic to RP2 in AS27022.

The “T” flag means packets have been received on this entry, and the “A” flag means the RP considers this SPT a candidate for MSDP advertisement to the other AS.

Multicast over FR NBMA part4 – (multipoint GRE and DMVPN)


This is the fourth part of the document “Multicast over FR NBMA”; this lab focuses on deploying multicast over multipoint GRE and DMVPN.

The main advantage of GRE tunneling is its transport capability: non-IP, broadcast and multicast traffic can be encapsulated inside unicast GRE, which is easily transmitted over Layer 2 technologies such as Frame Relay and ATM.

Because the HUB, SpokeA and SpokeB FR interfaces are multipoint, we will use multipoint GRE (mGRE).

Figure1 : lab topology


CONFIGURATION

mGRE configuration:

HUB:

interface Tunnel0
ip address 172.16.0.1 255.255.0.0
no ip redirects

!!PIM sparse-dense mode is enabled on the tunnel not on the physical interface


ip pim sparse-dense-mode

!! a shared key is used for tunnel authentication


ip nhrp authentication cisco

!! The HUB must send all multicast traffic to all spokes that have registered with it


ip nhrp map multicast dynamic

!! Enable NHRP on the interface; the network-id must be the same for all participants


ip nhrp network-id 1

!! Because the OSPF network type is broadcast, a DR will be elected, so the HUB is assigned the highest priority to make sure it becomes the DR


ip ospf network broadcast


ip ospf priority 10

!! With small hub-and-spoke networks it is possible to configure a static GRE tunnel by pre-configuring the tunnel destination, but then you cannot set the tunnel mode to multipoint


tunnel source Serial0/0


tunnel mode gre multipoint

!! Set the tunnel identification key; it must be identical on all tunnel endpoints (here it matches the network-id configured previously)


tunnel key 1

FR configuration:

interface Serial0/0
ip address 192.168.100.1 255.255.255.0
encapsulation frame-relay

serial restart-delay 0

frame-relay map ip 192.168.100.2 101 broadcast

frame-relay map ip 192.168.100.3 103 broadcast

no frame-relay inverse-arp

Routing configuration:

router ospf 10
router-id 1.1.1.1
network 10.10.20.0 0.0.0.255 area 100


network 172.16.0.0 0.0.255.255 area 0

SpokeA:

mGRE configuration:

interface Tunnel0
ip address 172.16.0.2 255.255.0.0
ip nhrp authentication cisco

!!All multicast traffic will be forwarded to the NBMA next hop IP (HUB).

ip nhrp map multicast 192.168.100.1

!!All spokes know in advance the HUB NBMA and tunnel IP addresses which are static.

ip nhrp map 172.16.0.1 192.168.100.1

ip nhrp network-id 1

ip nhrp nhs 172.16.0.1

ip ospf network point-to-multipoint

tunnel source Serial0/0.201

tunnel destination 192.168.100.1

tunnel key 1

FR configuration:

interface Serial0/0
no ip address
encapsulation frame-relay

serial restart-delay 0

no frame-relay inverse-arp

 

interface Serial0/0.201 multipoint

ip address 192.168.100.2 255.255.255.0

frame-relay map ip 192.168.100.1 201 broadcast

Routing configuration:

router ospf 10
router-id 200.200.200.200
network 20.20.20.0 0.0.0.255 area 200

network 172.16.0.0 0.0.255.255 area 0

SpokeB:

mGRE configuration:

interface Tunnel0
ip address 172.16.0.3 255.255.0.0
no ip redirects

ip pim sparse-dense-mode

ip nhrp authentication cisco

ip nhrp map multicast 192.168.100.1

ip nhrp map 172.16.0.1 192.168.100.1

ip nhrp network-id 1

ip nhrp nhs 172.16.0.1

ip ospf network broadcast

ip ospf priority 0

tunnel source Serial0/0.301

tunnel mode gre multipoint

tunnel key 1

FR configuration:

interface Serial0/0
no ip address
encapsulation frame-relay

serial restart-delay 0

no frame-relay inverse-arp

 

interface Serial0/0.301 multipoint

ip address 192.168.100.3 255.255.255.0

frame-relay map ip 192.168.100.1 301 broadcast

Routing configuration:

router ospf 10
router-id 3.3.3.3

network 172.16.0.0 0.0.255.255 area 0

network 192.168.39.0 0.0.0.255 area 300

RP (SpokeBnet):

interface Loopback0
ip address 192.168.38.1 255.255.255.255

ip pim sparse-dense-mode

router ospf 10

network 192.168.38.1 0.0.0.0 area 300

ip pim send-rp-announce Loopback0 scope 32

Mapping Agent (HUBnet):

interface Loopback0
ip address 10.0.0.1 255.255.255.255

ip pim sparse-dense-mode

router ospf 10

network 10.0.0.1 0.0.0.0 area 100

ip pim send-rp-discovery Loopback0 scope 32

Here is the result:

HUB:

HUB# sh ip nhrp
172.16.0.2/32 via 172.16.0.2, Tunnel0 created 01:06:52, expire 01:34:23
Type: dynamic, Flags: authoritative unique registered

NBMA address: 192.168.100.2

172.16.0.3/32 via 172.16.0.3, Tunnel0 created 01:06:35, expire 01:34:10

Type: dynamic, Flags: authoritative unique registered

NBMA address: 192.168.100.3

HUB#

The HUB has dynamically learnt the spokes’ NBMA addresses and the corresponding tunnel IP addresses.

HUB#sh ip route
Codes: C – connected, S – static, R – RIP, M – mobile, B – BGP
D – EIGRP, EX – EIGRP external, O – OSPF, IA – OSPF inter area

N1 – OSPF NSSA external type 1, N2 – OSPF NSSA external type 2

E1 – OSPF external type 1, E2 – OSPF external type 2

i – IS-IS, su – IS-IS summary, L1 – IS-IS level-1, L2 – IS-IS level-2

ia – IS-IS inter area, * – candidate default, U – per-user static route

o – ODR, P – periodic downloaded static route

 

Gateway of last resort is not set

 

1.0.0.0/32 is subnetted, 1 subnets

C 1.1.1.1 is directly connected, Loopback0

20.0.0.0/32 is subnetted, 1 subnets

O IA 20.20.20.20 [110/11112] via 172.16.0.2, 01:08:26, Tunnel0

O IA 192.168.40.0/24 [110/11113] via 172.16.0.3, 01:08:26, Tunnel0

C 172.16.0.0/16 is directly connected, Tunnel0

192.168.38.0/32 is subnetted, 1 subnets

O IA 192.168.38.1 [110/11113] via 172.16.0.3, 01:08:26, Tunnel0

O IA 192.168.39.0/24 [110/11112] via 172.16.0.3, 01:08:26, Tunnel0

10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks

O 10.0.0.2/32 [110/2] via 10.10.20.3, 01:09:06, FastEthernet1/0

O 10.10.10.0/24 [110/2] via 10.10.20.3, 01:09:06, FastEthernet1/0

O 10.0.0.1/32 [110/2] via 10.10.20.3, 01:09:06, FastEthernet1/0

C 10.10.20.0/24 is directly connected, FastEthernet1/0

C 192.168.100.0/24 is directly connected, Serial0/0

HUB#

The HUB has learnt all the spokes’ local network prefixes; note that all the learnt routes point to tunnel IP addresses, because the routing protocol runs on top of the logical topology, not the physical one (Figure2).

Figure2 : Logical topology

HUB#sh ip pim neighbors
PIM Neighbor Table
Neighbor Address  Interface        Uptime/Expires     Ver  DR Prio/Mode
172.16.0.3        Tunnel0          01:06:17/00:01:21  v2   1 / DR S
172.16.0.2        Tunnel0          01:06:03/00:01:40  v2   1 / S
10.10.20.3        FastEthernet1/0  01:07:24/00:01:15  v2   1 / DR S
HUB#

PIM neighbor relationships are established after enabling PIM-Sparse-dense mode on tunnel interfaces.

SpokeBnet#
*Mar 1 01:16:22.055: Auto-RP(0): Build RP-Announce for 192.168.38.1, PIMv2/v1, ttl 32, ht 181
*Mar 1 01:16:22.059: Auto-RP(0): Build announce entry for (224.0.0.0/4)

*Mar 1 01:16:22.063: Auto-RP(0): Send RP-Announce packet on FastEthernet0/0

*Mar 1 01:16:22.063: Auto-RP(0): Send RP-Announce packet on FastEthernet1/0

*Mar 1 01:16:22.067: Auto-RP: Send RP-Announce packet on Loopback0

SpokeBnet#

The RP (SpokeBnet) sends RP-Announce messages to everyone listening on 224.0.1.39 (the mapping agents).

Hubnet#
*Mar 1 01:16:17.039: Auto-RP(0): Received RP-announce, from 192.168.38.1, RP_cnt 1, ht 181
*Mar 1 01:16:17.043: Auto-RP(0): Update (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

Hubnet#

*Mar 1 01:16:49.267: Auto-RP(0): Build RP-Discovery packet

*Mar 1 01:16:49.271: Auto-RP: Build mapping (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1,

*Mar 1 01:16:49.275: Auto-RP(0): Send RP-discovery packet on FastEthernet0/0 (1 RP entries)

*Mar 1 01:16:49.275: Auto-RP(0): Send RP-discovery packet on FastEthernet1/0 (1 RP entries)

Hubnet#

HUBnet, the mapping agent (MA), listening on 224.0.1.39, has received the RP-Announce from the RP (SpokeBnet), updated its records, and sent RP-Discovery messages to all PIM-SM routers at 224.0.1.40.

HUB#
*Mar 1 01:16:47.059: Auto-RP(0): Received RP-discovery, from 10.0.0.1, RP_cnt 1, ht 181
*Mar 1 01:16:47.063: Auto-RP(0): Update (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

HUB#

 

HUB#sh ip pim rp
Group: 239.255.1.1, RP: 192.168.38.1, v2, v1, uptime 01:11:49, expires 00:02:44
HUB#

The HUB, as an example, has received the group-to-RP mapping information from the mapping agent and now knows the RP’s IP address.

Now let’s take a look at the multicast routing table of the RP:

SpokeBnet#sh ip mroute
IP Multicast Routing Table
Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:39:00/stopped, RP 192.168.38.1, flags: SJC

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:38:22/00:02:25

 

(10.10.10.1, 239.255.1.1), 00:39:00/00:02:58, flags: T


Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1


Outgoing interface list:


FastEthernet1/0, Forward/Sparse-Dense, 00:38:22/00:02:25

 

(*, 224.0.1.39), 01:24:31/stopped, RP 0.0.0.0, flags: D

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:24:31/00:00:00

 

(192.168.38.1, 224.0.1.39), 01:24:31/00:02:28, flags: T

Incoming interface: Loopback0, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:24:31/00:00:00

 

(*, 224.0.1.40), 01:25:42/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:25:40/00:00:00

Loopback0, Forward/Sparse-Dense, 01:25:42/00:00:00

 

(10.0.0.1, 224.0.1.40), 01:23:39/00:02:51, flags: LT

Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1

Outgoing interface list:

Loopback0, Forward/Sparse-Dense, 01:23:39/00:00:00

 

SpokeBnet#

(*, 239.255.1.1) – The shared tree, rooted at the RP, used to push multicast traffic to the receivers; the “J” flag indicates that traffic has switched from the RPT to the SPT.

(10.10.10.1, 239.255.1.1) – The SPT used to forward traffic from the source to the receiver; it receives traffic on Fa0/0 and forwards it out of Fa1/0.

(*, 224.0.1.39) and (*, 224.0.1.40) – The Auto-RP service groups; because PIM sparse-dense mode is used, traffic for these groups is forwarded in dense mode, hence the “D” flag.

This way we have configured multicast over NBMA using mGRE: no Layer 2 restrictions.

By the way, we are just one step away from DMVPN 🙂 all we have to do is configure the IPSec VPN that will protect our mGRE tunnel, so let’s do it!

!! IKE phase I parameters
crypto isakmp policy 1
!! 3des as the encryption algorithm

encryption 3des

!! authentication type: simple preshared keys

authentication pre-share

!! Diffie Helman group2 for the exchange of the secret key

group 2

!! ISAKMP peers are not set because the HUB doesn’t know them yet; they are learned dynamically by NHRP within mGRE

crypto isakmp key cisco address 0.0.0.0 0.0.0.0

crypto ipsec transform-set MyESP-3DES-SHA esp-3des esp-sha-hmac

mode transport

crypto ipsec profile My_profile

set transform-set MyESP-3DES-SHA

 

int tunnel 0

tunnel protection ipsec profile My_profile

 

HUB#sh crypto isakmp sa
dst src state conn-id slot status
192.168.100.1 192.168.100.2 QM_IDLE 2 0 ACTIVE

192.168.100.1 192.168.100.3 QM_IDLE 1 0 ACTIVE

 

HUB#

 

HUB#sh crypto ipsec sa
 interface: Tunnel0

Crypto map tag: Tunnel0-head-0, local addr 192.168.100.1

 

protected vrf: (none)

local ident (addr/mask/prot/port): (192.168.100.1/255.255.255.255/47/0)

remote ident (addr/mask/prot/port): (192.168.100.2/255.255.255.255/47/0)


current_peer 192.168.100.2 port 500

PERMIT, flags={origin_is_acl,}

#pkts encaps: 1248, #pkts encrypt: 1248, #pkts digest: 1248

#pkts decaps: 129, #pkts decrypt: 129, #pkts verify: 129

#pkts compressed: 0, #pkts decompressed: 0

#pkts not compressed: 0, #pkts compr. failed: 0

#pkts not decompressed: 0, #pkts decompress failed: 0

#send errors 52, #recv errors 0

 


local crypto endpt.: 192.168.100.1, remote crypto endpt.: 192.168.100.2

path mtu 1500, ip mtu 1500

current outbound spi: 0xCEFE3AC2(3472767682)

 

inbound esp sas:


spi: 0x852C8AE0(2234288864)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2003, flow_id: SW:3, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4448676/3482)

IV size: 8 bytes

replay detection support: Y


Status: ACTIVE

 

inbound ah sas:

 

inbound pcp sas:

 

outbound esp sas:


spi: 0xCEFE3AC2(3472767682)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2004, flow_id: SW:4, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4447841/3479)

IV size: 8 bytes

replay detection support: Y

Status: ACTIVE

 

outbound ah sas:

 

outbound pcp sas:

 

protected vrf: (none)

local ident (addr/mask/prot/port): (192.168.100.1/255.255.255.255/47/0)

remote ident (addr/mask/prot/port): (192.168.100.3/255.255.255.255/47/0)

current_peer 192.168.100.3 port 500

PERMIT, flags={origin_is_acl,}

#pkts encaps: 1309, #pkts encrypt: 1309, #pkts digest: 1309

#pkts decaps: 23, #pkts decrypt: 23, #pkts verify: 23

#pkts compressed: 0, #pkts decompressed: 0

#pkts not compressed: 0, #pkts compr. failed: 0

#pkts not decompressed: 0, #pkts decompress failed: 0

#send errors 26, #recv errors 0

 


local crypto endpt.: 192.168.100.1, remote crypto endpt.: 192.168.100.3

path mtu 1500, ip mtu 1500

current outbound spi: 0xD5D509D2(3587508690)

 

inbound esp sas:


spi: 0x4507681A(1158113306)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2001, flow_id: SW:1, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4588768/3477)

IV size: 8 bytes

replay detection support: Y


Status: ACTIVE

 

inbound ah sas:

 

inbound pcp sas:

 

outbound esp sas:


spi: 0xD5D509D2(3587508690)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2002, flow_id: SW:2, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4587889/3476)

IV size: 8 bytes

replay detection support: Y


Status: ACTIVE

 

outbound ah sas:

 

outbound pcp sas:

HUB#

ISAKMP and IPSec phases are successfully established and security associations are formed.

Multicast over DMVPN works perfectly! That’s it!

Multicast over FR NBMA part3 – (PIM-NBMA mode and Auto-RP)


In this third part of the document “Multicast over FR NBMA”, we will see how, with an RP in one spoke network and the MA in the central site, we can defeat the dense-mode issue of forwarding the service groups 224.0.1.39 and 224.0.1.40 from one spoke to another.

Such placement of the MA in the central site, as a proxy, is ideal to ensure communication between the RP and all the spokes through their separate PVCs.

In this LAB the RP is configured in SpokeBnet (SpokeB site) and the mapping agent in Hubnet (central site).

Figure1: Lab topology


CONFIGURATION

!! All interface PIM mode on all routers are set to “sparse-dense”

ip pim sparse-dense-mode

Configure SpokeBnet to become an RP

SpokeBnet(config)#ip pim send-rp-announce Loopback0 scope 32

Configure Hubnet to become a mapping agent

Hubnet(config)#ip pim send-rp-discovery loo0 scope 32

And the result across FR is as follow:

SpokeBnet:

*Mar 1 01:02:03.083: Auto-RP(0): Build RP-Announce for 192.168.38.1, PIMv2/v1, ttl 32, ht 181

*Mar 1 01:02:03.087: Auto-RP(0): Build announce entry for (224.0.0.0/4)

*Mar 1 01:02:03.091: Auto-RP(0): Send RP-Announce packet on FastEthernet0/0

*Mar 1 01:02:03.091: Auto-RP(0): Send RP-Announce packet on FastEthernet1/0

The RP announces itself to all mapping agents in the network (the multicast group 224.0.1.39).

HUBnet:

Hubnet(config)#

*Mar 1 01:01:01.487: Auto-RP(0): Received RP-announce, from 192.168.38.1, RP_cnt 1, ht 181

*Mar 1 01:01:01.491: Auto-RP(0): Added with (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

*Mar 1 01:01:01.523: Auto-RP(0): Build RP-Discovery packet

*Mar 1 01:01:01.523: Auto-RP: Build mapping (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1,

*Mar 1 01:01:01.527: Auto-RP(0): Send RP-discovery packet on FastEthernet0/0 (1 RP entries)

*Mar 1 01:01:01.535: Auto-RP(0): Send RP-discovery packet on FastEthernet1/0 (1 RP entries)

*Mar 1 01:01:01.539: Auto-RP: Send RP-discovery packet on Loopback0 (1 RP entries)

Hubnet(config)#

The mapping agent has received the RP-Announce from the RP, updated its records, and sent the group-to-RP information (in this case: the RP is responsible for all multicast groups) to the multicast destination 224.0.1.40 (all PIM routers).

SpokeA#

*Mar 1 01:01:53.379: Auto-RP(0): Received RP-discovery, from 10.0.0.1, RP_cnt 1, ht 181

*Mar 1 01:01:53.383: Auto-RP(0): Update (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

SpokeA#

 

SpokeA#sh ip pim rp

Group: 239.255.1.1, RP: 192.168.38.1, v2, v1, uptime 00:18:18, expires 00:02:32

SpokeA#

As an example, SpokeA, across the HUB and the FR cloud, has received the group-to-RP mapping information and updated its records.

SpokeBnet#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:35:35/00:03:25, RP 192.168.38.1, flags: SJC

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:06:59/00:02:39

FastEthernet0/0, Forward/Sparse-Dense, 00:35:35/00:03:25

 

(10.10.10.1, 239.255.1.1), 00:03:43/00:02:47, flags: T

Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:03:43/00:02:39

 

(*, 224.0.1.39), 00:41:36/stopped, RP 0.0.0.0, flags: D

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 00:41:36/00:00:00

 

(192.168.38.1, 224.0.1.39), 00:41:36/00:02:23, flags: T

Incoming interface: Loopback0, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 00:41:37/00:00:00

 

(*, 224.0.1.40), 01:03:55/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:03:50/00:00:00

FastEthernet1/0, Forward/Sparse-Dense, 01:03:55/00:00:00

 

(10.0.0.1, 224.0.1.40), 00:35:36/00:02:07, flags: LT

Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:35:36/00:00:00

 

SpokeBnet#

(*, 239.255.1.1) – This is the shared tree built toward the receiver and rooted at the RP (incoming interface = Null); the “J” flag indicates that the traffic was switched from the RPT to the SPT.

(10.10.10.1, 239.255.1.1) – This is the SPT in question, receiving traffic through Fa0/0 and forwarding it to Fa1/0.

Both (*, 224.0.1.39) and (*, 224.0.1.40) are flagged “D”, which means they are forwarded using dense mode; this is the preliminary operation of Auto-RP.
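The Auto-RP state can be checked on any PIM router with the following commands (a sketch; “sh ip pim rp” was already used above on the HUB):

SpokeBnet#sh ip pim rp mapping
SpokeBnet#sh ip pim autorp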

SpokeBnet#mtrace 10.10.10.1 192.168.40.104 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 192.168.40.104 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 192.168.40.104

-1 192.168.40.1 PIM Reached RP/Core [10.10.10.0/24]

-2 192.168.39.1 PIM [10.10.10.0/24]

-3 192.168.100.1 PIM [10.10.10.0/24]

-4 10.10.20.3 PIM [10.10.10.0/24]

SpokeBnet#

ClientB is receiving the multicast traffic as needed.

The following outputs show that the SpokeA networks also receive the multicast traffic for 239.255.1.1:

SpokeA# mtrace 10.10.10.1 20.20.20.20 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 20.20.20.20 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 20.20.20.20

-1 20.20.20.20 PIM [10.10.10.0/24]

-2 192.168.100.1 PIM [10.10.10.0/24]

-3 10.10.20.3 PIM [10.10.10.0/24]

-4 10.10.10.1

SpokeA#

 

SpokeA#sh ip mroute

IP Multicast Routing Table

Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:59:33/stopped, RP 192.168.38.1, flags: SJCL

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse-Dense, 00:59:33/00:02:01

 

(10.10.10.1, 239.255.1.1), 00:03:23/00:02:59, flags: LJT

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse-Dense, 00:03:23/00:02:01

 

(*, 224.0.1.40), 00:59:33/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial0/0.201, Forward/Sparse-Dense, 00:59:33/00:00:00

 

(10.0.0.1, 224.0.1.40), 00:44:18/00:02:15, flags: PLTX

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list: Null

 

SpokeA#

One more time, Auto-RP in action: PIM has switched from the RPT (*, 239.255.1.1) – flag “J” – to the SPT (10.10.10.1, 239.255.1.1) – flags “JT”.

Multicast over FR NBMA part2 – (PIM NBMA mode and static RP)


This is the second part of the document “Multicast over FR NBMA”; this lab focuses on the deployment of PIM NBMA mode in a hub-and-spoke FR NBMA network with a static rendez-vous point configuration.

Figure1 illustrates the lab topology used: the HUB is connected to the FR NBMA through its main physical interface, SpokeA and SpokeB are connected through multipoint sub-interfaces, and the FR NBMA is a partial-mesh topology with two PVCs, one from each spoke to the HUB’s main physical interface.

Figure1 : Lab topology

I – SCENARIO I

PIM sparse mode is used, with the static RP role played by the HUB.

1 – CONFIGURATION

HUB:

Frame Relay configuration:

interface Serial0/0

ip address 192.168.100.1 255.255.255.0

encapsulation frame-relay

!! Inverse ARP is disabled

no frame-relay inverse-arp

!! The OSPF network type has to be consistent with the Frame Relay
!! topology; in this hub-and-spoke design, traffic between spokes is
!! forwarded to the HUB and from there to the destination spoke.


ip ospf network point-to-multipoint

!! With inverse ARP disabled you have to configure static FR mappings;
!! do not forget the "broadcast" keyword at the end of each map to
!! enable pseudo-broadcast replication

frame-relay map ip 192.168.100.2 101 broadcast

frame-relay map ip 192.168.100.3 103 broadcast
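Before layering PIM on top of this, it is worth confirming the static mappings and the PVC state. The usual checks (commands only; outputs omitted):

!! Verify the address-to-DLCI mappings and the "broadcast" keyword
show frame-relay map
!! Verify that DLCIs 101 and 103 are ACTIVE
show frame-relay pvc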

Multicast configuration:

interface Loopback0

ip address 192.168.101.1 255.255.255.255

!! Enable Sparse mode on all interfaces

ip pim sparse-mode

interface Serial0/0

!! Enable PIM NBMA mode


ip pim nbma-mode

ip pim sparse-mode

!! Enable multicast routing globally; IOS will warn you if you try
!! to use multicast commands while it is disabled

ip multicast-routing

ip pim rp-address 192.168.101.1
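With the multicast configuration in place, a quick sanity check on the HUB might look like this (a suggested set of verification commands, not output from the lab):

!! Both spokes should appear as PIM neighbors over Serial0/0
show ip pim neighbor
!! Serial0/0 should show sparse mode enabled
show ip pim interface
!! The static group-to-RP mapping should point to 192.168.101.1
show ip pim rp mapping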

Routing configuration:

router ospf 10

network 10.10.10.0 0.0.0.255 area 100


network 192.168.100.0 0.0.0.255 area 0

SpokeA:

Frame Relay configuration:

interface Serial0/0

no ip address

encapsulation frame-relay

no frame-relay inverse-arp

interface Serial0/0.201 multipoint

ip address 192.168.100.2 255.255.255.0

frame-relay map ip 192.168.100.1 201 broadcast 

Multicast configuration:

interface Serial0/0.201 multipoint

ip pim nbma-mode

ip pim sparse-mode
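!! Note: the next two commands are global configuration, not sub-interface commands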

ip pim rp-address 192.168.101.1

ip multicast-routing 

Routing protocol configuration:

router ospf 10

network 20.20.20.0 0.0.0.255 area 200

network 192.168.100.0 0.0.0.255 area 0

interface Serial0/0.201 multipoint

ip ospf network point-to-multipoint 

SpokeB:

Frame Relay configuration:

interface Serial0/0

no ip address

encapsulation frame-relay

serial restart-delay 0

no frame-relay inverse-arp

interface Serial0/0.301 multipoint

ip address 192.168.100.3 255.255.255.0

frame-relay map ip 192.168.100.1 301 broadcast 

Multicast configuration:

ip pim rp-address 192.168.101.1

ip multicast-routing

interface Serial0/0.301 multipoint

ip pim sparse-mode

ip pim nbma-mode 

Routing Protocol configuration:

router ospf 10

network 192.168.40.0 0.0.0.255 area 300

network 192.168.100.0 0.0.0.255 area 0

interface Serial0/0.301 multipoint

ip ospf network point-to-multipoint
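With all three routers configured, unicast routing should converge before any multicast testing. Typical checks from the HUB (commands only):

!! Two FULL OSPF neighbors (SpokeA and SpokeB) are expected on Serial0/0
show ip ospf neighbor
!! The network type must show POINT_TO_MULTIPOINT
show ip ospf interface Serial0/0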

 

2 – ANALYSIS

SpokeB:

SpokeB#sh ip route

Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP

D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area

N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2

E1 - OSPF external type 1, E2 - OSPF external type 2

i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2

ia - IS-IS inter area, * - candidate default, U - per-user static route

o - ODR, P - periodic downloaded static route

 

Gateway of last resort is not set

 

20.0.0.0/32 is subnetted, 1 subnets

O IA 20.20.20.20 [110/129] via 192.168.100.1, 00:01:16, Serial0/0.301

C 192.168.40.0/24 is directly connected, FastEthernet1/0

10.0.0.0/8 is variably subnetted, 3 subnets, 2 masks

O IA 10.0.0.2/32 [110/66] via 192.168.100.1, 00:01:16, Serial0/0.301

O IA 10.10.10.0/24 [110/65] via 192.168.100.1, 00:01:16, Serial0/0.301

O IA 10.0.0.1/32 [110/66] via 192.168.100.1, 00:01:16, Serial0/0.301

192.168.102.0/32 is subnetted, 1 subnets

O 192.168.102.1 [110/65] via 192.168.100.1, 00:01:16, Serial0/0.301

192.168.100.0/24 is variably subnetted, 3 subnets, 2 masks

C 192.168.100.0/24 is directly connected, Serial0/0.301

O 192.168.100.1/32 [110/64] via 192.168.100.1, 00:01:16, Serial0/0.301

O 192.168.100.2/32 [110/128] via 192.168.100.1, 00:01:16, Serial0/0.301

192.168.101.0/32 is subnetted, 1 subnets

O 192.168.101.1 [110/65] via 192.168.100.1, 00:01:17, Serial0/0.301

SpokeB#

Connectivity is successful and SpokeB has received the correct routing information about the network.

SpokeB#sh ip pim rp

Group: 239.255.1.1, RP: 192.168.101.1, uptime 00:17:31, expires never

Group: 224.0.1.40, RP: 192.168.101.1, uptime 00:17:31, expires never

SpokeB# 

Because we use PIM-SM with a static RP, the group-to-RP mapping never expires.
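The same information, including the static origin of the mapping, can be read from the group-to-RP mapping cache:

!! The RP 192.168.101.1 should be listed as statically configured,
!! with no expiry timer
show ip pim rp mapping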

SpokeB#sh ip mroute

IP Multicast Routing Table

Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,

L - Local, P - Pruned, R - RP-bit set, F - Register flag,

T - SPT-bit set, J - Join SPT, M - MSDP created entry,

X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,

U - URD, I - Received Source Specific Host Report,

Z - Multicast Tunnel, z - MDT-data group sender,

Y - Joined MDT-data group, y - Sending to MDT-data group

Outgoing interface flags: H - Hardware switched, A - Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:46:09/stopped, RP 192.168.101.1, flags: SJC

Incoming interface: Serial0/0.301, RPF nbr 192.168.100.1

Outgoing interface list:

FastEthernet1/0, Forward/Sparse, 00:46:09/00:02:38

 

(10.10.10.1, 239.255.1.1), 00:22:22/00:02:50, flags: JT

Incoming interface: Serial0/0.301, RPF nbr 192.168.100.1

Outgoing interface list:

FastEthernet1/0, Forward/Sparse, 00:22:22/00:02:38

 

(*, 224.0.1.40), 00:18:10/00:02:24, RP 192.168.101.1, flags: SJCL

Incoming interface: Serial0/0.301, RPF nbr 192.168.100.1

Outgoing interface list:

Serial0/0.301, 192.168.100.3, Forward/Sparse, 00:14:36/00:00:21

 

SpokeB# 

 

SpokeB#mtrace 10.10.10.1 192.168.40.104 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 192.168.40.104 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 192.168.40.104

-1 192.168.40.1 PIM [10.10.10.0/24]

-2 192.168.100.1 PIM Reached RP/Core [10.10.10.0/24]

SpokeB# 

According to the two previous outputs, SpokeB (the PIM DR) has joined the shared tree (RPT) rooted at the RP/Core (the HUB); traffic arriving on this shared tree enters the FR interface and is forwarded out the LAN interface Fa1/0.

Because this is a static RP configuration, only one service group appears, 224.0.1.40 (the Auto-RP discovery group, which IOS routers join by default), for which a shared tree is built toward the RP.

HUB:

HUB#sh ip mroute

IP Multicast Routing Table

Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,

L - Local, P - Pruned, R - RP-bit set, F - Register flag,

T - SPT-bit set, J - Join SPT, M - MSDP created entry,

X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,

U - URD, I - Received Source Specific Host Report,

Z - Multicast Tunnel, z - MDT-data group sender,

Y - Joined MDT-data group, y - Sending to MDT-data group

Outgoing interface flags: H - Hardware switched, A - Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:48:20/00:03:22, RP 192.168.101.1, flags: S

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial0/0, 192.168.100.3, Forward/Sparse, 00:48:20/00:03:22

Serial0/0, 192.168.100.2, Forward/Sparse, 00:48:20/00:03:09

 

(10.10.10.1, 239.255.1.1), 00:25:33/00:03:26, flags: T

Incoming interface: FastEthernet1/0, RPF nbr 0.0.0.0

Outgoing interface list:

Serial0/0, 192.168.100.2, Forward/Sparse, 00:25:33/00:03:08

Serial0/0, 192.168.100.3, Forward/Sparse, 00:25:33/00:03:22

 

(10.10.10.3, 239.255.1.1), 00:00:22/00:02:37, flags: T

Incoming interface: FastEthernet1/0, RPF nbr 0.0.0.0

Outgoing interface list:

Serial0/0, 192.168.100.2, Forward/Sparse, 00:00:22/00:03:08

Serial0/0, 192.168.100.3, Forward/Sparse, 00:00:22/00:03:22

 

(192.168.100.1, 239.255.1.1), 00:00:23/00:02:36, flags:

Incoming interface: Serial0/0, RPF nbr 0.0.0.0

Outgoing interface list:

Serial0/0, 192.168.100.2, Forward/Sparse, 00:00:23/00:03:07

Serial0/0, 192.168.100.3, Forward/Sparse, 00:00:23/00:03:20

 

(*, 224.0.1.40), 00:32:02/00:03:09, RP 192.168.101.1, flags: SJCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial0/0, 192.168.100.2, Forward/Sparse, 00:32:00/00:03:09

Serial0/0, 192.168.100.3, Forward/Sparse, 00:32:02/00:02:58

 

HUB#

Note that for each RPT and SPT entry the HUB considers two "next hops" out of the same Serial0/0 interface. This is PIM NBMA mode at work: it makes PIM at layer 3 aware of the real layer 2 topology, which is not broadcast but separate point-to-point PVCs, instead of relying only on pseudo-broadcast, which gives layer 3 the illusion of a multi-access network.
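To make the behavior explicit, here is the relevant knob from the configuration above, annotated (the per-neighbor prune behavior is the key property; note that NBMA mode only takes effect for sparse-mode groups):

interface Serial0/0
!! Track joins and prunes per neighbor (per PVC) instead of per
!! interface, so a prune from one spoke does not cut off the other
ip pim nbma-mode
!! NBMA mode only applies to sparse-mode groups
ip pim sparse-mode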

Because the shared tree (*, 239.255.1.1) is rooted locally (the HUB is the RP), the incoming interface is Null.

An SPT (10.10.10.1, 239.255.1.1) is also built between the source and the RP.

SpokeA:

SpokeA#sh ip mroute

IP Multicast Routing Table

Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,

L - Local, P - Pruned, R - RP-bit set, F - Register flag,

T - SPT-bit set, J - Join SPT, M - MSDP created entry,

X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,

U - URD, I - Received Source Specific Host Report,

Z - Multicast Tunnel, z - MDT-data group sender,

Y - Joined MDT-data group, y - Sending to MDT-data group

Outgoing interface flags: H - Hardware switched, A - Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 01:07:17/stopped, RP 192.168.101.1, flags: SJCLF

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 01:07:17/00:02:39

 

(10.10.10.1, 239.255.1.1), 00:43:08/00:02:59, flags: LJT

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 00:43:08/00:02:39

 

(*, 224.0.1.40), 00:38:39/00:02:42, RP 192.168.101.1, flags: SJCL

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 00:38:39/00:02:42

 

SpokeA# 

 

SpokeA#mtrace 10.10.10.1 20.20.20.20 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 20.20.20.20 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 20.20.20.20

-1 20.20.20.20 PIM [10.10.10.0/24]

-2 192.168.100.1 PIM Reached RP/Core [10.10.10.0/24]

SpokeA# 

SpokeA received the first packets through the RP, then switched to the SPT (10.10.10.1, 239.255.1.1).

 

II – SCENARIO II

In this scenario SpokeB plays the role of the RP.

1 – CONFIGURATION

Multicast configuration:

The new static RP must be configured on all routers; 192.168.103.1 is a loopback interface on SpokeB and it is advertised and reachable from everywhere.

SpokeB:

ip pim rp-address 192.168.103.1
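The old mapping also has to be removed, and the same static RP statement applied on the HUB and SpokeA. A sketch of the full change, assuming 192.168.103.1 sits on a SpokeB loopback (the interface number Loopback1 is hypothetical) already advertised into OSPF:

!! On SpokeB: the RP address must live on a PIM-enabled interface
interface Loopback1
ip address 192.168.103.1 255.255.255.255
ip pim sparse-mode
!! On all three routers: replace the old static mapping
no ip pim rp-address 192.168.101.1
ip pim rp-address 192.168.103.1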

 

SpokeB(config)#do mtrace 10.10.10.1 192.168.40.104 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 192.168.40.104 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 192.168.40.104

-1 192.168.40.1 PIM Reached RP/Core [10.10.10.0/24]

-2 192.168.100.1 PIM [10.10.10.0/24]

SpokeB(config)# 

Now the multicast traffic received by SpokeB's client is forwarded through the new RP (SpokeB), which lies on the best path from the source to the receiver.

SpokeA:

SpokeA#mtrace 10.10.10.1 20.20.20.20 239.255.1.1

Type escape sequence to abort.

Mtrace from 10.10.10.1 to 20.20.20.20 via group 239.255.1.1

From source (?) to destination (?)

Querying full reverse path…

0 20.20.20.20

-1 20.20.20.20 PIM [10.10.10.0/24]

-2 192.168.100.1 PIM [10.10.10.0/24]

-3 10.10.10.1

SpokeA# 

 

SpokeA#sh ip mroute

IP Multicast Routing Table

Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,

L - Local, P - Pruned, R - RP-bit set, F - Register flag,

T - SPT-bit set, J - Join SPT, M - MSDP created entry,

X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,

U - URD, I - Received Source Specific Host Report,

Z - Multicast Tunnel, z - MDT-data group sender,

Y - Joined MDT-data group, y - Sending to MDT-data group

Outgoing interface flags: H - Hardware switched, A - Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 01:24:27/stopped, RP 192.168.103.1, flags: SJCLF

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Loopback0, Forward/Sparse, 01:24:27/00:02:29

 

(10.10.10.1, 239.255.1.1), 01:00:18/00:02:59, flags: LJT

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 01:00:18/00:02:29

 

(*, 224.0.1.40), 00:06:18/00:02:37, RP 192.168.103.1, flags: SJCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

Serial0/0.201, 192.168.100.1, Forward/Sparse, 00:06:02/00:00:37

Loopback0, Forward/Sparse, 00:06:18/00:02:37

 

SpokeA# 

SpokeA has switched from the RPT to the SPT (10.10.10.1, 239.255.1.1), whose LJT flags read Local, Join SPT, SPT-bit set; the shared-tree entry keeps the flags SJCLF (Sparse, Join SPT, Connected, Local, Register flag).

Now let's see what happens if we disable the SPT switchover with the following command on SpokeA:

ip pim spt-threshold infinity
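As an aside, the threshold can also be a bandwidth value in kbps, and it can be scoped to specific groups with a standard access list. A hedged sketch (the ACL number is arbitrary):

!! Stay on the shared tree forever, but only for groups matched by ACL 10
access-list 10 permit 239.255.1.1
ip pim spt-threshold infinity group-list 10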

 

SpokeA(config)#do sh ip mroute

IP Multicast Routing Table

Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,

L - Local, P - Pruned, R - RP-bit set, F - Register flag,

T - SPT-bit set, J - Join SPT, M - MSDP created entry,

X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,

U - URD, I - Received Source Specific Host Report,

Z - Multicast Tunnel, z - MDT-data group sender,

Y - Joined MDT-data group, y - Sending to MDT-data group

Outgoing interface flags: H - Hardware switched, A - Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 01:44:47/00:02:59, RP 192.168.103.1, flags: SCLF

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 01:44:47/00:02:17

 

(*, 224.0.1.40), 00:26:39/00:02:11, RP 192.168.103.1, flags: SCL

Incoming interface: Serial0/0.201, RPF nbr 192.168.100.1

Outgoing interface list:

Loopback0, Forward/Sparse, 00:26:39/00:02:11

 

SpokeA(config)# 

No more SPT! Only the shared tree (*, 239.255.1.1) rooted at the RP (SpokeB) remains; note that the entries no longer carry the "J" flag.
