DMVPN animation


Here is an interactive animation of DMVPN (Dynamic Multipoint VPN), followed by a detailed offline lab (a snapshot of the topology under test with hopefully all commands needed for analysis and study).

Finally, check your understanding of the fundamental concepts by taking a small quiz.

Studied topology:


Animation

Offline Lab

You might consider the following key points for troubleshooting:

Routing protocols:

To avoid recursive routing and RPF failures, run routing protocols only over the tunnel interfaces, never over the NBMA underlay.

EIGRP

  • Turn off “next-hop-self” so that spokes talk to each other directly. Without it, spoke-to-spoke traffic will always pass through the HUB and NHRP resolution will not occur.
  • Turn off “split-horizon” to allow EIGRP to advertise a route received from one spoke to another spoke through the same interface.
  • Turn off summarization.
  • Pay attention to the bandwidth available for EIGRP communication; the tunnel default is too low, so raise it (e.g. “bandwidth 1000”).

OSPF

  • “ip ospf network point-to-multipoint” allows only Phase 1 (spoke-to-spoke data-plane traffic relayed through the HUB).
  • “ip ospf network broadcast” on all routers allows Phase 2 (direct spoke-to-spoke data-plane communication).
  • Make sure the HUB wins the DR election by giving it a higher OSPF priority than the spokes, and exclude spokes from the election altogether (“ip ospf priority 0” on spokes).
  • Make sure OSPF timers match if spokes and the HUB use different OSPF network types.
  • Because spokes are generally low-end devices, they may not cope with the LSA flooding generated within the OSPF domain. It is therefore recommended to make spoke areas stub (type-5 external LSAs are filtered out) or totally stubby (neither type-5 nor inter-area type-3 LSAs are accepted).

Make sure the MTU values match between tunnel interfaces (“ip mtu 1400” / “ip tcp adjust-mss 1360”).
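As a quick sanity check on those two numbers, here is the arithmetic behind them (a sketch only: the 28-byte figure assumes plain GRE with a tunnel key, and 1400 is simply a conservative round value that leaves headroom for ESP padding and authentication data):

```python
# Why "ip mtu 1400" and "ip tcp adjust-mss 1360" go together.

PHYS_MTU = 1500          # typical physical interface MTU
OUTER_IP = 20            # outer IPv4 header added by the tunnel
GRE_BASE = 4             # GRE header without options
GRE_KEY = 4              # extra bytes when "tunnel key" is configured

# Payload left after GRE encapsulation alone:
gre_payload = PHYS_MTU - OUTER_IP - GRE_BASE - GRE_KEY   # 1472

# IPSec (ESP 3DES/SHA, transport mode) adds variable overhead on top,
# so a conservative round value is configured instead of 1472:
IP_MTU = 1400

def tcp_mss(ip_mtu):
    """MSS = IP MTU minus 20-byte IP header minus 20-byte TCP header."""
    return ip_mtu - 20 - 20

print(gre_payload)       # 1472
print(tcp_mss(IP_MTU))   # 1360
```

The point of adjusting the MSS is that TCP endpoints then never build a segment that would need fragmentation inside the tunnel.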

Consider the OSPF scalability limitation (roughly 50 routers per area); OSPF requires much more tweaking for large-scale deployments.

Layered approach:

DMVPN involves multiple layers of technologies (mGRE, routing, NHRP, IPSec), so troubleshooting an issue can be very tricky.

To avoid cascading errors, test your configuration after each step and move forward only when the current step works. For example, IPSec encryption is not required for DMVPN to function, so make sure your configuration works without it and only then add it (set the IPSec parameters and simply add “tunnel protection ipsec profile” to the tunnel interface).

Quiz


Multicast over FR NBMA part4 – (multipoint GRE and DMVPN)


This is the fourth part of the document “Multicast over FR NBMA”; this lab focuses on deploying multicast over multipoint GRE and DMVPN.

The main advantage of GRE tunneling is its transport capability: non-IP, broadcast and multicast traffic can be encapsulated inside unicast GRE packets, which are easily transmitted over Layer 2 technologies such as Frame Relay and ATM.

Because the HUB, SpokeA and SpokeB FR interfaces are multipoint, we will use multipoint GRE.

Figure1 : lab topology


CONFIGURATION

mGRE configuration:

HUB:

interface Tunnel0
ip address 172.16.0.1 255.255.0.0
no ip redirects

!!PIM sparse-dense mode is enabled on the tunnel, not on the physical interface


ip pim sparse-dense-mode

!! a shared key is used for tunnel authentication


ip nhrp authentication cisco

!!The HUB must send all multicast traffic to all spokes that have registered with it


ip nhrp map multicast dynamic

!! Enable NHRP on the interface; the network-id must be the same for all participants


ip nhrp network-id 1

!!Because the OSPF network type is broadcast, a DR will be elected, so the HUB is assigned the highest priority to make sure it becomes the DR


ip ospf network broadcast


ip ospf priority 10

!! With small hub-and-spoke networks it is possible to configure static GRE by pre-configuring the tunnel destination, but then you cannot set the tunnel mode to multipoint


tunnel source Serial0/0


tunnel mode gre multipoint

!! Set the tunnel identification key; it must be identical on all tunnel endpoints (here it matches the network-id configured previously)


tunnel key 1

FR configuration:

interface Serial0/0
ip address 192.168.100.1 255.255.255.0
encapsulation frame-relay

serial restart-delay 0

frame-relay map ip 192.168.100.2 101
broadcast

frame-relay map ip 192.168.100.3 103
broadcast

no frame-relay inverse-arp

Routing configuration:

router ospf 10
router-id 1.1.1.1
network 10.10.20.0 0.0.0.255 area 100


network 172.16.0.0 0.0.255.255 area 0

SpokeA:

mGRE configuration:

interface Tunnel0
ip address 172.16.0.2 255.255.0.0
ip nhrp authentication cisco

!!All multicast traffic will be forwarded to the NBMA next hop IP (HUB).

ip nhrp map multicast 192.168.100.1

!!All spokes know in advance the HUB NBMA and tunnel IP addresses which are static.

ip nhrp map 172.16.0.1 192.168.100.1

ip nhrp network-id 1

ip nhrp nhs 172.16.0.1

ip ospf network point-to-multipoint

tunnel source Serial0/0.201

tunnel destination 192.168.100.1

tunnel key 1

FR configuration:

interface Serial0/0
no ip address
encapsulation frame-relay

serial restart-delay 0

no frame-relay inverse-arp

 

interface Serial0/0.201 multipoint

ip address 192.168.100.2 255.255.255.0

frame-relay map ip 192.168.100.1 201 broadcast

Routing configuration:

router ospf 10
router-id 200.200.200.200
network 20.20.20.0 0.0.0.255 area 200

network 172.16.0.0 0.0.255.255 area 0

SpokeB:

mGRE configuration:

interface Tunnel0
ip address 172.16.0.3 255.255.0.0
no ip redirects

ip pim sparse-dense-mode

ip nhrp authentication cisco

ip nhrp map multicast 192.168.100.1

ip nhrp map 172.16.0.1 192.168.100.1

ip nhrp network-id 1

ip nhrp nhs 172.16.0.1

ip ospf network broadcast

ip ospf priority 0

tunnel source Serial0/0.301

tunnel mode gre multipoint

tunnel key 1

FR configuration:

interface Serial0/0
no ip address
encapsulation frame-relay

serial restart-delay 0

no frame-relay inverse-arp

 

interface Serial0/0.301 multipoint

ip address 192.168.100.3 255.255.255.0

frame-relay map ip 192.168.100.1 301 broadcast

Routing configuration:

router ospf 10
router-id 3.3.3.3

network 172.16.0.0 0.0.255.255 area 0

network 192.168.39.0 0.0.0.255 area 300

RP (SpokeBnet):

interface Loopback0
ip address 192.168.38.1 255.255.255.255

ip pim sparse-dense-mode

router ospf 10

network 192.168.38.1 0.0.0.0 area 300

ip pim send-rp-announce Loopback0 scope 32

Mapping Agent (HUBnet):

interface Loopback0
ip address 10.0.0.1 255.255.255.255

ip pim sparse-dense-mode

router ospf 10

network 10.0.0.1 0.0.0.0 area 100

ip pim send-rp-discovery Loopback0 scope 32

Here is the result:

HUB:

HUB# sh ip nhrp
172.16.0.2/32 via 172.16.0.2, Tunnel0 created 01:06:52, expire 01:34:23
Type: dynamic, Flags: authoritative unique registered

NBMA address: 192.168.100.2

172.16.0.3/32 via 172.16.0.3, Tunnel0 created 01:06:35, expire 01:34:10

Type: dynamic, Flags: authoritative unique registered

NBMA address: 192.168.100.3

HUB#

The HUB has dynamically learnt the spokes’ NBMA addresses and the corresponding tunnel IP addresses.
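The behavior visible in this output (dynamic entries with an “expire” timer) can be modeled as a small registration cache. This is a toy sketch, not IOS internals: the field names and the hold-time value are assumptions for illustration.

```python
# Toy model of the hub's NHRP cache: each spoke registers its
# (tunnel IP -> NBMA IP) binding, and each entry carries a hold time
# after which it expires unless the spoke re-registers.

DEFAULT_HOLD = 7200        # seconds; illustrative value only

class HubNhrpCache:
    def __init__(self):
        self._entries = {}                 # tunnel IP -> (NBMA IP, expiry)

    def register(self, tunnel_ip, nbma_ip, now, hold=DEFAULT_HOLD):
        # A registration request refreshes the binding and its timer.
        self._entries[tunnel_ip] = (nbma_ip, now + hold)

    def lookup(self, tunnel_ip, now):
        entry = self._entries.get(tunnel_ip)
        if entry is None or now >= entry[1]:
            return None                    # unknown or expired
        return entry[0]

cache = HubNhrpCache()
cache.register("172.16.0.2", "192.168.100.2", now=0)   # SpokeA registers
cache.register("172.16.0.3", "192.168.100.3", now=0)   # SpokeB registers

print(cache.lookup("172.16.0.2", now=100))    # 192.168.100.2
print(cache.lookup("172.16.0.2", now=8000))   # None: entry expired
```

This is why the spokes keep re-registering periodically: a binding that is not refreshed simply ages out of the hub’s table.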

HUB#sh ip route
Codes: C – connected, S – static, R – RIP, M – mobile, B – BGP
D – EIGRP, EX – EIGRP external, O – OSPF, IA – OSPF inter area

N1 – OSPF NSSA external type 1, N2 – OSPF NSSA external type 2

E1 – OSPF external type 1, E2 – OSPF external type 2

i – IS-IS, su – IS-IS summary, L1 – IS-IS level-1, L2 – IS-IS level-2

ia – IS-IS inter area, * – candidate default, U – per-user static route

o – ODR, P – periodic downloaded static route

 

Gateway of last resort is not set

 

1.0.0.0/32 is subnetted, 1 subnets

C 1.1.1.1 is directly connected, Loopback0

20.0.0.0/32 is subnetted, 1 subnets

O IA 20.20.20.20 [110/11112] via 172.16.0.2, 01:08:26, Tunnel0

O IA 192.168.40.0/24 [110/11113] via 172.16.0.3, 01:08:26, Tunnel0

C 172.16.0.0/16 is directly connected, Tunnel0

192.168.38.0/32 is subnetted, 1 subnets

O IA 192.168.38.1 [110/11113] via 172.16.0.3, 01:08:26, Tunnel0

O IA 192.168.39.0/24 [110/11112] via 172.16.0.3, 01:08:26, Tunnel0

10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks

O 10.0.0.2/32 [110/2] via 10.10.20.3, 01:09:06, FastEthernet1/0

O 10.10.10.0/24 [110/2] via 10.10.20.3, 01:09:06, FastEthernet1/0

O 10.0.0.1/32 [110/2] via 10.10.20.3, 01:09:06, FastEthernet1/0

C 10.10.20.0/24 is directly connected, FastEthernet1/0

C 192.168.100.0/24 is directly connected, Serial0/0

HUB#

The HUB has learnt all the spokes’ local networks; note that all learnt routes point to tunnel IP addresses, because the routing protocol runs on top of the logical topology, not the physical one (figure2).

Figure2 : Logical topology

HUB#sh ip pim neighbors
PIM Neighbor Table
Neighbor Interface Uptime/Expires Ver DR

Address Prio/Mode

172.16.0.3
Tunnel0 01:06:17/00:01:21 v2 1 / DR S

172.16.0.2
Tunnel0 01:06:03/00:01:40 v2 1 / S

10.10.20.3 FastEthernet1/0 01:07:24/00:01:15 v2 1 / DR S

HUB#

PIM neighbor relationships are established after enabling PIM sparse-dense mode on the tunnel interfaces.

SpokeBnet#
*Mar 1 01:16:22.055: Auto-RP(0): Build RP-Announce for 192.168.38.1, PIMv2/v1, ttl 32, ht 181
*Mar 1 01:16:22.059: Auto-RP(0): Build announce entry for (224.0.0.0/4)

*Mar 1 01:16:22.063: Auto-RP(0): Send RP-Announce packet on FastEthernet0/0

*Mar 1 01:16:22.063: Auto-RP(0): Send RP-Announce packet on FastEthernet1/0

*Mar 1 01:16:22.067: Auto-RP: Send RP-Announce packet on Loopback0

SpokeBnet#

The RP (SpokeBnet) sends RP-Announce messages to everyone listening to 224.0.1.39.

Hubnet#
*Mar 1 01:16:17.039: Auto-RP(0): Received RP-announce, from 192.168.38.1, RP_cnt 1, ht 181
*Mar 1 01:16:17.043: Auto-RP(0): Update (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

Hubnet#

*Mar 1 01:16:49.267: Auto-RP(0): Build RP-Discovery packet

*Mar 1 01:16:49.271: Auto-RP: Build mapping (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1,

*Mar 1 01:16:49.275: Auto-RP(0): Send RP-discovery packet on FastEthernet0/0 (1 RP entries)

*Mar 1 01:16:49.275: Auto-RP(0): Send RP-discovery packet on FastEthernet1/0 (1 RP entries)

Hubnet#

HUBnet, the mapping agent (MA), listening to 224.0.1.39, has received the RP-Announce from the RP (SpokeBnet), updated its records and sent an RP-Discovery to all PIM-SM routers at 224.0.1.40.

HUB#
*Mar 1 01:16:47.059: Auto-RP(0): Received RP-discovery, from 10.0.0.1, RP_cnt 1, ht 181
*Mar 1 01:16:47.063: Auto-RP(0): Update (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

HUB#

 

HUB#sh ip pim rp
Group: 239.255.1.1, RP: 192.168.38.1, v2, v1, uptime 01:11:49, expires 00:02:44
HUB#

The HUB, as an example, has received the RP-to-group mapping information from the mapping agent and now knows the RP IP address.

Now let’s take a look at the multicast routing table of the RP:

SpokeBnet#sh ip mroute
IP Multicast Routing Table
Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:39:00/stopped, RP 192.168.38.1, flags: SJC

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:38:22/00:02:25

 

(10.10.10.1, 239.255.1.1), 00:39:00/00:02:58, flags: T


Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1


Outgoing interface list:


FastEthernet1/0, Forward/Sparse-Dense, 00:38:22/00:02:25

 

(*, 224.0.1.39), 01:24:31/stopped, RP 0.0.0.0, flags: D

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:24:31/00:00:00

 

(192.168.38.1, 224.0.1.39), 01:24:31/00:02:28, flags: T

Incoming interface: Loopback0, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:24:31/00:00:00

 

(*, 224.0.1.40), 01:25:42/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:25:40/00:00:00

Loopback0, Forward/Sparse-Dense, 01:25:42/00:00:00

 

(10.0.0.1, 224.0.1.40), 01:23:39/00:02:51, flags: LT

Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1

Outgoing interface list:

Loopback0, Forward/Sparse-Dense, 01:23:39/00:00:00

 

SpokeBnet#

(*, 239.255.1.1) – The shared tree, rooted at the RP, used to push multicast traffic to receivers; the “J” flag indicates that traffic has switched from the RPT to the SPT.

(10.10.10.1, 239.255.1.1) – The SPT used to forward traffic from the source to the receiver; traffic is received on Fa0/0 and forwarded out of Fa1/0.

(*, 224.0.1.39) and (*, 224.0.1.40) – The Auto-RP service groups; because PIM sparse-dense mode is used, traffic for these groups was forwarded to all PIM routers in dense mode, hence the “D” flag.

This way we configured multicast over the NBMA using mGRE: no Layer 2 complications, no restrictions.

By the way, we are just one step away from DMVPN 🙂 all we have to do is configure the IPSec VPN that will protect our mGRE tunnel, so let’s do it!

!! IKE phase I parameters
crypto isakmp policy 1
!! 3des as the encryption algorithm

encryption 3des

!! authentication type: simple preshared keys

authentication pre-share

!! Diffie Helman group2 for the exchange of the secret key

group 2

!! isakmp peers are not set because the HUB doesn’t know them yet; they are learned dynamically by NHRP within mGRE

crypto isakmp key cisco address 0.0.0.0 0.0.0.0

crypto ipsec transform-set MyESP-3DES-SHA esp-3des esp-sha-hmac

mode transport

crypto ipsec profile My_profile

set transform-set MyESP-3DES-SHA

 

int tunnel 0

tunnel protection ipsec profile My_profile

 

HUB#sh crypto isakmp sa
dst src state conn-id slot status
192.168.100.1 192.168.100.2 QM_IDLE 2 0 ACTIVE

192.168.100.1 192.168.100.3 QM_IDLE 1 0 ACTIVE

 

HUB#

 

HUB#sh crypto ipsec sa
 interface: Tunnel0

Crypto map tag: Tunnel0-head-0, local addr 192.168.100.1

 

protected vrf: (none)

local ident (addr/mask/prot/port): (192.168.100.1/255.255.255.255/47/0)

remote ident (addr/mask/prot/port): (192.168.100.2/255.255.255.255/47/0)


current_peer 192.168.100.2 port 500

PERMIT, flags={origin_is_acl,}

#pkts encaps: 1248, #pkts encrypt: 1248, #pkts digest: 1248

#pkts decaps: 129, #pkts decrypt: 129, #pkts verify: 129

#pkts compressed: 0, #pkts decompressed: 0

#pkts not compressed: 0, #pkts compr. failed: 0

#pkts not decompressed: 0, #pkts decompress failed: 0

#send errors 52, #recv errors 0

 


local crypto endpt.: 192.168.100.1, remote crypto endpt.: 192.168.100.2

path mtu 1500, ip mtu 1500

current outbound spi: 0xCEFE3AC2(3472767682)

 

inbound esp sas:


spi: 0x852C8AE0(2234288864)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2003, flow_id: SW:3, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4448676/3482)

IV size: 8 bytes

replay detection support: Y


Status: ACTIVE

 

inbound ah sas:

 

inbound pcp sas:

 

outbound esp sas:


spi: 0xCEFE3AC2(3472767682)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2004, flow_id: SW:4, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4447841/3479)

IV size: 8 bytes

replay detection support: Y

Status: ACTIVE

 

outbound ah sas:

 

outbound pcp sas:

 

protected vrf: (none)

local ident (addr/mask/prot/port): (192.168.100.1/255.255.255.255/47/0)

remote ident (addr/mask/prot/port): (192.168.100.3/255.255.255.255/47/0)

current_peer 192.168.100.3 port 500

PERMIT, flags={origin_is_acl,}

#pkts encaps: 1309, #pkts encrypt: 1309, #pkts digest: 1309

#pkts decaps: 23, #pkts decrypt: 23, #pkts verify: 23

#pkts compressed: 0, #pkts decompressed: 0

#pkts not compressed: 0, #pkts compr. failed: 0

#pkts not decompressed: 0, #pkts decompress failed: 0

#send errors 26, #recv errors 0

 


local crypto endpt.: 192.168.100.1, remote crypto endpt.: 192.168.100.3

path mtu 1500, ip mtu 1500

current outbound spi: 0xD5D509D2(3587508690)

 

inbound esp sas:


spi: 0x4507681A(1158113306)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2001, flow_id: SW:1, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4588768/3477)

IV size: 8 bytes

replay detection support: Y


Status: ACTIVE

 

inbound ah sas:

 

inbound pcp sas:

 

outbound esp sas:


spi: 0xD5D509D2(3587508690)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2002, flow_id: SW:2, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4587889/3476)

IV size: 8 bytes

replay detection support: Y


Status: ACTIVE

 

outbound ah sas:

 

outbound pcp sas:

HUB#

ISAKMP and IPSec phases are successfully established and security associations are formed.

Multicast over DMVPN works perfectly! That’s it!

Multicast over FR NBMA part1 – (pseudo broadcast, PIM NBMA mode, mGRE and DMVPN)


This lab focuses on Frame Relay NBMA, Non-Broadcast Multiple Access… the confusion already begins with the title :) shouldn’t a multiple access network support broadcast?

In this first part I will try to describe the problem of forwarding multicast over FR NBMA as well as the different solutions that remedy it; subsequent parts will focus on deploying those solutions.

Well, the issue is a result of the difference between how layer 2 operates and how layer 3 interprets the result of layer 2’s operation.

Let’s begin with layer 2. In the case of Frame Relay it is composed of Permanent Virtual Circuits, each delimited by a physical interface at both ends and a local Data Link Connection Identifier (DLCI) mapped to each end; so at layer 2 there is no multi-access network at all. Only the fact that multiple PVCs end at a single physical interface or sub-interface gives the illusion of a multiple access network (figure1).

Figure1 : layer2 and layer3 views of the network

With such a configuration of multipoint interfaces or sub-interfaces, a single subnet is used at layer 3 for all participants of the NBMA, which makes layer 3 think it is a multiple access network.

Note that we are talking here about a partially meshed NBMA.

All this misunderstanding between layer 2 and layer 3 about the nature of the medium means that broadcast and multicast do not work.

Many solutions are proposed:

1 – “Pseudo broadcast” – an IOS feature that emulates broadcast/multicast delivery.

Figure2 : Interface HW queues

In fact a physical interface has two hardware queues (figure2), one for unicast and one (strict priority) for broadcast traffic. Because layer 3 thinks it is a multi-access network, it sends only one packet over the interface and hopes that all participants on the other side will receive it, which of course will not happen.

Do you remember the word “broadcast” added at the end of a static frame relay mapping?

frame-relay map ip X.X.X.X <DLCI> broadcast

This activates pseudo broadcasting: before sending a broadcast packet, the router makes “n” copies and sends them to the “n” spokes attached to the NBMA, whether they requested the traffic or not. Such a mechanism goes against the concept of multicasting, so pseudo broadcast can lead to serious performance issues in large multicast networks.
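The replication behavior can be sketched in a few lines. This is a toy model only: the map structure and DLCI numbers are illustrative, not an IOS data structure.

```python
# Pseudo broadcast: one broadcast/multicast packet becomes n unicast
# copies, one per frame-relay map carrying the "broadcast" keyword.

def pseudo_broadcast(packet, fr_maps):
    """fr_maps: {next_hop_ip: (dlci, has_broadcast_keyword)}.
    Returns the (dlci, packet) copies the router actually sends."""
    return [(dlci, packet)
            for _ip, (dlci, bcast) in fr_maps.items()
            if bcast]

maps = {
    "192.168.100.2": (101, True),   # frame-relay map ip ... 101 broadcast
    "192.168.100.3": (103, True),   # frame-relay map ip ... 103 broadcast
    "192.168.100.4": (104, False),  # no "broadcast" keyword: no copy sent
}

copies = pseudo_broadcast(b"hello-group", maps)
print(len(copies))   # 2: one copy per broadcast-capable PVC
```

Note how the replication is blind: every broadcast-capable PVC gets a copy regardless of whether a receiver behind it joined the group, which is exactly the inefficiency described above.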

Another issue with pseudo broadcast in an NBMA hub-and-spoke topology arises when one spoke forwards multicast data or control traffic that the other spokes are supposed to receive, particularly PIM prune and prune-override messages.

The concept is based on the fact that if one downstream PIM router sends a prune message to the upstream router, the other downstream routers connected to the same multi-access network will also receive the prune, and those that still want the multicast traffic will send a prune-override (join) message to the upstream router.

As mentioned before, the HUB can emulate multicast by making multiple copies of a packet, but only for traffic forwarded from its internal interfaces to the NBMA interface or generated at the HUB router itself, not for traffic received on the NBMA interface.

Conclusion: do not use PIM-DM or PIM sparse-dense mode (Auto-RP) with pseudo broadcasting; however, you can use PIM-SM with a static RP.


2 – PIM NBMA mode – clarifies the relationship between layer 2 and layer 3. NBMA mode tells “the truth” to layer 3: the medium is in reality not multi-access but a partially meshed NBMA, so PIM can act accordingly.

The router will now track the IP addresses of the neighbors from which it received a join message and build an outgoing list of those neighbors for the RPT (*,G) and SPT (S,G) entries.
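A toy model of that per-neighbor tracking follows (hypothetical names; real PIM state machines are far richer, this only shows why per-neighbor state removes the prune-override problem):

```python
# PIM NBMA mode: instead of one outgoing entry per interface, the
# upstream router keeps, per (S,G) or (*,G) entry, the set of
# downstream neighbor IPs that sent a join, and replicates only
# toward those neighbors.

class NbmaOutgoingList:
    def __init__(self):
        self._joined = {}          # (source, group) -> set of neighbor IPs

    def join(self, sg, neighbor):
        self._joined.setdefault(sg, set()).add(neighbor)

    def prune(self, sg, neighbor):
        # Only the pruning neighbor is removed; the others keep
        # receiving, so no prune-override race over the NBMA occurs.
        self._joined.get(sg, set()).discard(neighbor)

    def receivers(self, sg):
        return sorted(self._joined.get(sg, set()))

oil = NbmaOutgoingList()
sg = ("10.10.10.1", "239.255.1.1")
oil.join(sg, "192.168.100.2")     # SpokeA joins
oil.join(sg, "192.168.100.3")     # SpokeB joins
oil.prune(sg, "192.168.100.2")    # SpokeA prunes
print(oil.receivers(sg))          # ['192.168.100.3']
```

Contrast this with pseudo broadcast, where a prune on the shared interface would have affected every spoke behind it.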

PIM NBMA mode still does not support dense mode; nevertheless it is possible to use Auto-RP with PIM sparse-dense mode ONLY if you make sure the mapping agent is located on the HUB network so it can communicate with all spokes.

3 – Point-to-point sub-interfaces – If you want to avoid all the previously mentioned complications of multipoint interfaces in an NBMA hub-and-spoke, you can deploy point-to-point sub-interfaces: spokes are treated as if they were connected to the HUB through separate physical interfaces with separate IP subnets, and layer 2 is no longer considered multi-access.

4 – GRE tunneling – Another alternative is to keep the NBMA multipoint FR interface, with advantages such as saving IP addresses, and simply make it transparent by deploying multipoint GRE tunneling.

With a multipoint GRE logical topology built over layer 3, multicast is encapsulated inside GRE packets, forwarded as unicast, and decapsulated at layer 3 on the other side of the tunnel, without dealing with layer 2.

GRE (Generic Routing Encapsulation) : Point-to-point & multipoint GRE


– GRE (Generic Routing Encapsulation), IP protocol 47, is a tunneling protocol that can encapsulate any network layer packet.

GRE makes it possible to route IP packets between private IP networks across public networks with globally routed IP addresses. GRE is a unicast protocol, hence its big advantage: broadcast and multicast traffic (multicast streams or routing protocols) and other non-IP protocols can be encapsulated in it and then protected by IPSec, which does not support multicast.
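The encapsulation itself is tiny. Here is a sketch of a minimal GRE header as defined in RFC 2784 (key option per RFC 2890); the payload here is arbitrary bytes standing in for an inner IP packet, and the outer IP header (protocol 47) that the router adds is omitted:

```python
import struct

def gre_encap(payload, proto=0x0800, key=None):
    """Prepend a minimal GRE header. proto is the EtherType of the
    payload: 0x0800 means an IPv4 packet is carried inside."""
    flags = 0x2000 if key is not None else 0x0000   # K bit set when keyed
    header = struct.pack("!HH", flags, proto)       # flags/version + proto
    if key is not None:
        header += struct.pack("!I", key)            # 4-byte tunnel key
    return header + payload

inner = b"\x45" + b"\x00" * 99     # stand-in for a 100-byte inner packet
print(len(gre_encap(inner)))            # 104: 4-byte base GRE header
print(len(gre_encap(inner, key=1)))     # 108: 8 bytes with "tunnel key 1"
```

Because the resulting GRE/IP packet is plain unicast, anything inside it (multicast, routing hellos, non-IP) rides across the NBMA and can be encrypted by IPSec like any other unicast flow.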

Here is the structure of the post:

Point-to-point

  • Tunnel configuration
  • Routing

Multipoint

  • Tunnel configuration
  • Routing
    • OSPF
    • EIGRP
  1. Point-to-point

    Figure1 depicts the physical topology on which point-to-point GRE tunneling is configured.

    Figure1:FR topology with point-to-point sub-interfaces


  2. Tunnel configuration

    Table1: Point-to-point GRE parameters on HUB

    Tunnelling parameters        SpokeA           SpokeB           SpokeC
    Tunnel interface             Tunnel 1         Tunnel 2         Tunnel 3
    Tunnel IP address & mask     192.168.10.1/24  192.168.20.1/24  192.168.30.1/24
    Tunnel source interface      s0/0.101         s0/0.102         s0/0.103
    Tunnel destination address   172.16.0.18      172.16.0.34      172.16.0.50

    Tunnel with SpokeA:

    HUB:

    HUB(config-if)#int tunnel1
    HUB(config-if)#ip address 192.168.10.1 255.255.255.0
    HUB(config-if)#tunnel mode gre ip

    HUB(config-if)#tunnel source s0/0.101

    HUB(config-if)#tunnel destination 172.16.0.18

    HUB(config-if)#

    *Mar  1 01:36:42.875: %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel1, changed state to up

    HUB(config-if)#

    SpokeA:

    Table2:  Point-to-point GRE parameters on SpokeA

    Tunnelling parameters        HUB
    Tunnel interface             Tunnel 1
    Tunnel IP address & mask     192.168.10.2/24
    Tunnel source interface      s0/0
    Tunnel destination address   172.16.0.17
    SpokeA(config)#int tunnel 1
    SpokeA(config-if)#ip address
    SpokeA(config-if)#ip address 192.168.10.2 255.255.255.0

    SpokeA(config-if)#tunnel mode gre ip

    SpokeA(config-if)#tunnel source s0/0

    SpokeA(config-if)#tunnel destination 172.16.0.17

    SpokeA(config-if)#

    *Mar  1 01:38:27.703: %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel1, changed state to up

    SpokeA(config-if)#

    Connectivity check and debugging:

    SpokeA(config-if)#do ping 192.168.10.1
    Type escape sequence to abort.
    Sending 5, 100-byte ICMP Echos to 192.168.10.1, timeout is 2 seconds:

    !!!!!

    Success rate is 100 percent (5/5), round-trip min/avg/max = 76/102/128 ms

    SpokeA(config-if)#

    HUB#debug tunnel

    Tunnel Interface debugging is on

    HUB#

    *Mar  1 01:45:25.311: Tunnel1: GRE/IP to classify 172.16.0.18->172.16.0.17 (len=124 type=0x800 ttl=254 tos=0x0)

    !!GRE packet source and destination

    *Mar  1 01:45:25.315: Tunnel1: GRE/IP to decaps 172.16.0.18->172.16.0.17 (len=124 ttl=254)

    *Mar  1 01:45:25.319: Tunnel1: GRE decapsulated IP 192.168.10.2->192.168.10.1 (len=100, ttl=254)

    !! the encapsulated packet source and destination

    *Mar  1 01:45:25.323: Tunnel1: GRE/IP encapsulated 172.16.0.17->172.16.0.18 (linktype=7, len=124)

    *Mar  1 01:45:25.419: Tunnel1: GRE/IP to classify 172.16.0.18->172.16.0.17 (len=124 type=0x800 ttl=254 tos=0x0)

    *Mar  1 01:45:25.423: Tunnel1: GRE/IP to decaps 172.16.0.18->172.16.0.17 (len=124 ttl=254)

    *Mar  1 01:45:25.427: Tunnel1: GRE decapsulated IP 192.168.10.2->192.168.10.1 (len=100, ttl=254)

    *Mar  1 01:45:25.431: Tunnel1: GRE/IP encapsulated 172.16.0.17->172.16.0.18 (linktype=7, len=124)

    *Mar  1 01:45:25.483: Tunnel1: GRE/IP to classify 172.16.0.18->172.16.0.17 (len=124 type=0x800 ttl=254 tos=0x0)


    HUB#

    SpokeB:

    Table3:  Point-to-point GRE parameters on SpokeB

    Tunnelling parameters        HUB
    Tunnel interface             Tunnel 1
    Tunnel IP address & mask     192.168.20.2/24
    Tunnel source interface      s0/0
    Tunnel destination address   172.16.0.33

    HUB:

    HUB(config)#int tunnel 2
    HUB(config-if)#ip address 192.168.20.1 255.255.255.0
    HUB(config-if)#tunnel mode gre ip

    HUB(config-if)#tunnel source s0/0.102

    HUB(config-if)#tunnel destination 172.16.0.34

    HUB(config-if)#

    *Mar  1 01:56:36.851: %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel2, changed state to up

    HUB(config-if)#

    SpokeB:

    SpokeB(config)#int tunnel 1
    SpokeB(config-if)#ip addres
    SpokeB(config-if)#ip address 192.168.20.2 255.255.255.0

    SpokeB(config-if)#tunnel mode gre ip

    SpokeB(config-if)#tunnel source s0/0

    SpokeB(config-if)#tunnel dest 172.16.0.33

    SpokeB(config-if)#

    *Mar  1 02:03:10.971: %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel1, changed state to up

    SpokeB(config-if)#

    Connectivity Check:

    SpokeB(config-if)#do ping 192.168.20.1

    Type escape sequence to abort.

    Sending 5, 100-byte ICMP Echos to 192.168.20.1, timeout is 2 seconds:

    !!!!!

    Success rate is 100 percent (5/5), round-trip min/avg/max = 60/87/112 ms

    SpokeB(config-if)#

    SpokeC:

    Table4:  Point-to-point GRE parameters on SpokeC

    Tunnelling parameters        HUB
    Tunnel interface             Tunnel 1
    Tunnel IP address & mask     192.168.30.2/24
    Tunnel source interface      s0/0.301
    Tunnel destination address   172.16.0.49

    HUB:

    HUB(config-if)#int tunnel 3
    HUB(config-if)#ip address 192.168.30.1 255.255.255.0
    HUB(config-if)#tunnel mode gre ip

    HUB(config-if)#tunnel source s0/0.103

    HUB(config-if)#tunnel destination 172.16.0.50

    HUB(config-if)#

    *Mar  1 01:59:07.255: %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel3, changed state to up

    SpokeC:

    SpokeC(config)#int tunnel 1
    SpokeC(config-if)#ip address 192.168.30.2 255.255.255.0
    SpokeC(config-if)#tunnel mode gre ip

    SpokeC(config-if)#tunnel source s0/0.301

    SpokeC(config-if)#tunnel des 172.16.0.49

    SpokeC(config-if)#

    *Mar  1 02:08:17.751: %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel1, changed state to up

    SpokeC(config-if)#

    Connectivity check:

    SpokeC(config-if)#do ping 192.168.30.1

    Type escape sequence to abort.

    Sending 5, 100-byte ICMP Echos to 192.168.30.1, timeout is 2 seconds:

    !!!!!

    Success rate is 100 percent (5/5), round-trip min/avg/max = 24/94/144 ms

    SpokeC(config-if)#

  3. Routing

    Routing protocols:

    HUB:

    HUB(config-if)#router ospf 10
    HUB(config-router)#router-id 1.1.1.1
    HUB(config-router)#net 192.168.10.0 0.0.0.255 area 0

    HUB(config-router)#net 192.168.20.0 0.0.0.255 area 0

    HUB(config-router)#net 192.168.30.0 0.0.0.255 area 0

    HUB(config-router)#net 10.0.1.0 0.0.0.255 area 10

    HUB(config-router)#

    Connectivity check:

    HUB#sh ip ospf int s0/0.101

    %OSPF: OSPF not enabled on Serial0/0.101

    Note that OSPF is not enabled on the interface facing the public network with routable IPs (172.16.0.0/16 for the purposes of this lab), but on the tunnel interfaces; and the default OSPF network type on a GRE tunnel is point-to-point, which matches the GRE tunnel mode.

    HUB:

    HUB#sh ip ospf int tunnel 1
    Tunnel1 is up, line protocol is up
    Internet Address 192.168.10.1/24, Area 0

    Process ID 10, Router ID 1.1.1.1, Network Type POINT_TO_POINT, Cost: 11111

    Transmit Delay is 1 sec, State POINT_TO_POINT,

    Timer intervals configured, Hello 10, Dead 40, Wait 40, Retransmit 5

    oob-resync timeout 40

    Hello due in 00:00:03

    Supports Link-local Signaling (LLS)

    Index 1/1, flood queue length 0

    Next 0x0(0)/0x0(0)

    Last flood scan length is 0, maximum is 0

    Last flood scan time is 0 msec, maximum is 0 msec

    Neighbor Count is 0, Adjacent neighbor count is 0

    Suppress hello for 0 neighbor(s)

    HUB#

    HUB(config-router)#do sh ip route


    C    192.168.30.0/24 is directly connected, Tunnel3

    C    192.168.10.0/24 is directly connected, Tunnel1

    172.16.0.0/28 is subnetted, 3 subnets

    C       172.16.0.48 is directly connected, Serial0/0.103

    C       172.16.0.32 is directly connected, Serial0/0.102

    C       172.16.0.16 is directly connected, Serial0/0.101

    C    192.168.20.0/24 is directly connected, Tunnel2

    10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks

    O IA    10.10.0.1/32 [110/11112] via 192.168.10.2, 00:01:17, Tunnel1

    C       10.0.1.0/24 is directly connected, Loopback0

    O IA    10.30.0.1/32 [110/11112] via 192.168.30.2, 00:01:16, Tunnel3

    O IA    10.20.0.1/32 [110/11112] via 192.168.20.2, 00:01:17, Tunnel2

    HUB(config-router)#

    The received inter-area routes point to tunnel interfaces and are reachable through the other side of the GRE tunnels.

  4. Multipoint

    Figure2 depicts the Frame Relay topology on which multipoint GRE tunneling is configured. Multipoint GRE operates at the network layer; nevertheless, the layer 3 topology must be consistent with the layer below, where Frame Relay operates and where PVCs are strictly configured and mapped to NBMA addresses.

    Figure2:FR topology with multipoint sub-interfaces


  5. Tunnel configuration

    HUB:

    Table5: multipoint GRE parameters on HUB

    Tunnelling parameters        Any multipoint tunnel peer
    Tunnel interface             Tunnel 2
    Tunnel IP address & mask     192.168.123.1/24
    Tunnel source interface      s0/0.102
    Tunnel destination address   ??????????????????????

    HUB:

    HUB(config-if)#int tunnel 2
    HUB(config-if)#ip address 192.168.123.1 255.255.255.0
    HUB(config-if)#tunnel mode gre multipoint

    HUB(config-if)#tunnel source s0/0.102

    HUB(config-if)#

    *Mar  1 01:00:07.707: %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel2, changed state to up

    SpokeB:

    Table6: multipoint GRE parameters on SpokeB

    Tunnelling parameters        Any multipoint tunnel peer
    Tunnel interface             Tunnel 1
    Tunnel IP address & mask     192.168.123.2/24
    Tunnel source interface      s0/0
    Tunnel destination address   ??????????????????????

SpokeB:

    SpokeB(config-if)#int tunnel 1
    SpokeB(config-if)#ip address 192.168.123.2 255.255.255.0
    SpokeB(config-if)#tunnel mode gre multipoint

    SpokeB(config-if)#tunnel source s0/0

    SpokeB(config-if)#

    *Mar  1 01:00:27.419: %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel1, changed state to up

    SpokeC:

Table7: multipoint GRE parameters on SpokeC (any multipoint tunnel peer)

Tunnel interface  Tunnel 1
Tunnel ip address & mask  192.168.123.3/24
Tunnel source interface  s0/0.300
Tunnel destination interface  ?????????????????????? (unknown)

SpokeC:

    SpokeC(config-subif)#int tunnel 1
    SpokeC(config-if)#ip address 192.168.123.3 255.255.255.0
    SpokeC(config-if)#tunnel mode gre multipoint

    SpokeC(config-if)#tunnel source s0/0.300

    SpokeC(config-if)#

    *Mar  1 01:02:13.995: %LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel1, changed state to up

    HUB:

    HUB(config-if)#do sh int tunnel 2
    Tunnel2 is up, line protocol is up
    Hardware is Tunnel

    Internet address is 192.168.123.1/24

    MTU 1514 bytes, BW 9 Kbit, DLY 500000 usec,

    reliability 255/255, txload 1/255, rxload 1/255

    Encapsulation TUNNEL, loopback not set

    Keepalive not set

    Tunnel source 172.16.0.33 (Serial0/0.102), destination UNKNOWN

    Tunnel protocol/transport multi-GRE/IP

    Key disabled, sequencing disabled

    Checksumming of packets disabled

    Fast tunneling enabled

    Tunnel transmit bandwidth 8000 (kbps)

    Tunnel receive bandwidth 8000 (kbps)


    HUB(config-if)#

At this point you cannot miss the fact that it is not possible to set the tunnel destination (the GRE destination address) in advance for multipoint GRE; herein resides the concept of mGRE: one interface that terminates multiple tunnels. Hence the need for a mechanism that identifies the remote peer’s tunnel endpoint IP address in advance: NHRP (Next Hop Resolution Protocol).

Each spoke has been configured with a static map of the HUB tunnel IP address (192.168.123.1) to its NBMA address (172.16.0.33):

    ip nhrp map 192.168.123.1 172.16.0.33

So each spoke’s NHRP process starts by registering itself with the HUB, forming a tunnel whose destination address is taken from the previously configured static map.

Finally, the concept is very similar to the mapping in Frame Relay, where a destination IP address (layer 3) is mapped to a DLCI (PVC, layer 2), either statically or dynamically through Inverse ARP.

With mGRE, a tunnel (logical source and destination IPs) is necessarily delimited by NBMA addresses; the destination, being unknown, can be resolved either statically through an NHRP map or dynamically by querying the NHRP server, the HUB (figure3).
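Conceptually, the NHRP cache on the HUB is just a table mapping tunnel (logical) IPs to NBMA (physical) IPs. A minimal Python sketch of registration and resolution, using the addresses from this lab (purely illustrative; the class and method names are invented, this is not the IOS implementation):

```python
# Illustrative model of an NHRP cache: tunnel (logical) IP -> NBMA IP.
# Hypothetical helper names; a sketch, not how IOS actually stores it.

class NhrpCache:
    def __init__(self):
        self._map = {}  # tunnel_ip -> nbma_ip

    def register(self, tunnel_ip, nbma_ip):
        """A spoke registers its tunnel/NBMA pair with the NHS (the HUB)."""
        self._map[tunnel_ip] = nbma_ip

    def resolve(self, tunnel_ip):
        """A spoke asks the NHS which NBMA address matches a tunnel IP."""
        return self._map.get(tunnel_ip)

cache = NhrpCache()
# SpokeB and SpokeC register at startup:
cache.register("192.168.123.2", "172.16.0.34")
cache.register("192.168.123.3", "172.16.0.35")

# A spoke (or the HUB) resolves a tunnel IP to the NBMA address it
# must use as the GRE destination:
print(cache.resolve("192.168.123.2"))  # -> 172.16.0.34
```

An entry that was never registered resolves to nothing, which is exactly the situation a spoke is in before it queries the NHS.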

    Figure3: FR mapping and mGRE mapping


Once resolved, each spoke forms a tunnel with the HUB. Here comes the routing protocol: OSPF is announced on the tunnel interfaces, so adjacencies will be formed with the HUB. It is important to tell the routers that broadcast and multicast traffic must be sent to the HUB, so they can exchange routing information.

    ip nhrp map multicast 172.16.0.33

    The command is very similar to FR:

    frame-relay map ip <nbma-dest-ip-next-hop> <local-dlci> broadcast

Though the routing information between spokes is exchanged through the HUB, the routes populated in the routing tables point to logical addresses (tunnel IPs), and the NBMA addresses are still unknown.

mGRE needs to resolve the NBMA destination address: a spoke asks the HUB (NHS) which NBMA address corresponds to a particular tunnel (logical) IP, the HUB responds from its NHRP records, and the spoke uses the obtained NBMA address as a destination to communicate directly with the destination spoke.

The following command instructs spokes where to forward NHRP requests; if it is not specified, the spoke will take the next hop from the routing table.

    ip nhrp nhs 192.168.123.1

    Figure4 depicts the data flow between spokes and HUB:

    Figure4: Data flow


    TableX: NHRP commands

    ip nhrp network-id <number> Logical NBMA cloud to which the interface belongs.
    ip nhrp interest <ACL> Tells the router when to send NHRP requests.
    ip nhrp used <number> Send NHRP requests when there are certain number of packets for a destination.
    ip nhrp map <x.x.x.x> <data-link> Statically configure a router interface with NHRP data link layer (not needed with PVC)
    ip nhrp map multicast <remote_NBMA_desti_IP> Broadcast/multicast will be sent to a particular end- point of the tunnel.
    ip nhrp nhs <x.x.x.x> network <network> Statically configure NHS, otherwise NHRP will follow routing information that will point to a HUB.
    ip nhrp authentication <string> Configure authentication for NHRP peering
    ip nhrp max-send <packets> every <time- interval> Control the maximum rate of NHRP messages sent, by default 5 packets every 10 seconds.
    ip nhrp responder <inttype X/X> Set the source address of NHRP server replies.
    ip nhrp holdtime <sec> <sec> Lifetime of NHRP information, default 7200sec (2 hours) for negative and positive responses.

Here is how the full configuration looks:

    HUB:

    interface Tunnel2
    ip address 192.168.123.1 255.255.255.0

    ip nhrp authentication cisco

    ip nhrp map multicast dynamic

    ip nhrp network-id 123

    ip ospf network broadcast

    ip ospf priority 10

    tunnel source Serial0/0.102

    tunnel mode gre multipoint

    tunnel key 123  !! ===> If you have set a tunnel key, it should be the same on all

    !! routers that participate to the tunnel

    !

    interface Serial0/0

    no ip address

    encapsulation frame-relay

    no frame-relay inverse-arp

    !

    interface Serial0/0.101 point-to-point

    ip address 172.16.0.17 255.255.255.240

    frame-relay interface-dlci 101

    !

    interface Serial0/0.102 multipoint

    ip address 172.16.0.33 255.255.255.240

    ip ospf network broadcast

    ip ospf priority 10

    frame-relay map ip 172.16.0.34 102 broadcast

    frame-relay map ip 172.16.0.35 103 broadcast

    SpokeB:

    interface Tunnel1
    ip address 192.168.123.2 255.255.255.0

    ip nhrp authentication cisco

    ip nhrp map multicast 172.16.0.33

    ip nhrp map 192.168.123.1 172.16.0.33

    ip nhrp network-id 123

    ip nhrp nhs 192.168.123.1

    ip ospf network broadcast

    ip ospf priority 0

    tunnel source Serial0/0

    tunnel mode gre multipoint

    tunnel key 123

    !

    interface Serial0/0

    ip address 172.16.0.34 255.255.255.240

    encapsulation frame-relay

    frame-relay map ip 172.16.0.33 201 broadcast

    frame-relay map ip 172.16.0.35 203 broadcast

    no frame-relay inverse-arp

    SpokeC:

    interface Tunnel1
    ip address 192.168.123.3 255.255.255.0

    ip nhrp authentication cisco

    ip nhrp map multicast 172.16.0.33

    ip nhrp map 192.168.123.1 172.16.0.33

    ip nhrp network-id 123

    ip nhrp nhs 192.168.123.1

    ip ospf network broadcast

    ip ospf priority 0

    tunnel source Serial0/0.300

    tunnel mode gre multipoint

    tunnel key 123

    !

    interface Loopback0

    ip address 10.30.0.1 255.255.0.0

    !

    interface Serial0/0

    no ip address

    encapsulation frame-relay

    no frame-relay inverse-arp

    !

    interface Serial0/0.300 multipoint

    ip address 172.16.0.35 255.255.255.240

    frame-relay map ip 172.16.0.33 301 broadcast

    frame-relay map ip 172.16.0.34 302 broadcast

Make sure that the tunnel key matches on all routers participating in the same mGRE network.

    Monitoring:

    HUB:

    HUB#sh ip nhrp

    192.168.123.2/32 via 192.168.123.2, Tunnel2 created 00:55:29, expire 01:43:40

    Type: dynamic, Flags: authoritative unique registered

    NBMA address: 172.16.0.34

    192.168.123.3/32 via 192.168.123.3, Tunnel2 created 00:55:52, expire 01:44:07

    Type: dynamic, Flags: authoritative unique registered

    NBMA address: 172.16.0.35

    HUB#

Note from the following tunnel debugging that, though the tunnel destination is not pre-configured, the router resolved (as a result of NHRP) the tunnel destination address according to the logical destination it wanted to reach:

    HUB#ping  10.30.0.1
    !!==>We want to reach SpokeC private network
    Type escape sequence to abort.
    Sending 5, 100-byte ICMP Echos to 10.30.0.1, timeout is 2 seconds:

    !!!!!

    *Mar  1 01:00:30.171: Tunnel2: GRE/IP to classify 172.16.0.35->172.16.0.33 (len=128 type=0x800 ttl=254 tos=0x0)

    *Mar  1 01:00:30.179: Tunnel2: GRE/IP to decaps 172.16.0.35->172.16.0.33 (len=128 ttl=254)

    *Mar  1 01:00:30.183: Tunnel2: GRE decapsulated IP 10.30.0.1->192.168.123.1 (len=100, ttl=254)

    *Mar  1 01:00:30.311: Tunnel2: GRE/IP to classify 172.16.0.35->172.16.0.33 (len=128 type=0x800 ttl=254 tos=0x0)

    *Mar  1 01:00:30.315: Tunnel2: GRE/IP to decaps 172.16.0.35->172.16.0.33 (len=128 ttl=254)

    *Mar  1 01:00:30.319: Tunnel2: GRE decapsulated IP 10.30.0.1->192.168.123.1 (len=100, ttl=254)

    Success rate is 100 percent (5/5), round-trip min/avg/max = 64/126/216 ms

    HUB#

    *Mar  1 01:00:30.451: Tunnel2: GRE/IP to classify 172.16.0.35->172.16.0.33 (len=128 type=0x800 ttl=254 tos=0x0)

    *Mar  1 01:00:30.459: Tunnel2: GRE/IP to decaps 172.16.0.35->172.16.0.33 (len=128 ttl=254)

    *Mar  1 01:00:30.459: Tunnel2: GRE decapsulated IP 10.30.0.1->192.168.123.1 (len=100, ttl=254)

    *Mar  1 01:00:30.531: Tunnel2: GRE/IP to classify 172.16.0.35->172.16.0.33 (len=128 type=0x800 ttl=254 tos=0x0)

    *Mar  1 01:00:30.539: Tunnel2: GRE/IP to decaps 172.16.0.35->172.16.0.33 (len=128 ttl=254)

    *Mar  1 01:00:30.543: Tunnel2: GRE decapsulated IP 10.30.0.1->192.168.123.1 (len=100, ttl=254)

    *Mar  1 01:00:30.595: Tunnel2: GRE/IP to classify 172.16.0.35->172.16.0.33 (len=128 type=0x800 ttl=254 tos=0x0)

    *Mar  1 01:00:30.599: Tunnel2: GRE/IP to decaps 172.16.0.35->172.16.0.33 (len=128 ttl=254)

    *Mar  1 01:00:30.603: Tunnel2: GRE decapsulated IP 10.30.0.1->192.168.123.1 (len=100, ttl=254)

    *Mar  1 01:00:32.415: Tunnel1: GRE/IP encapsulated 172.16.0.17->172.16.0.18 (linktype=7, len=104)

    HUB#

    HUB#

    HUB#ping 10.20.0.1
    !!==> Now we want to reach SpokeB private network

    Type escape sequence to abort.

    Sending 5, 100-byte ICMP Echos to 10.20.0.1, timeout is 2 seconds:

    !!!!!

    Success rate is 100 percent (5/5), round-trip min/avg/max = 76/121/164 ms

    HUB#

    *Mar  1 01:01:55.943: Tunnel2: GRE/IP to classify 172.16.0.34->172.16.0.33 (len=128 type=0x800 ttl=254 tos=0x0)

    *Mar  1 01:01:55.947: Tunnel2: GRE/IP to decaps 172.16.0.34->172.16.0.33 (len=128 ttl=254)

    *Mar  1 01:01:55.951: Tunnel2: GRE decapsulated IP 10.20.0.1->192.168.123.1 (len=100, ttl=254)

    *Mar  1 01:01:56.111: Tunnel2: GRE/IP to classify 172.16.0.34->172.16.0.33 (len=128 type=0x800 ttl=254 tos=0x0)

    *Mar  1 01:01:56.115: Tunnel2: GRE/IP to decaps 172.16.0.34->172.16.0.33 (len=128 ttl=254)

    *Mar  1 01:01:56.119: Tunnel2: GRE decapsulated IP 10.20.0.1->192.168.123.1 (len=100, ttl=254)

    *Mar  1 01:01:56.239: Tunnel2: GRE/IP to classify 172.16.0.34->172.16.0.33 (len=128 type=0x800 ttl=254 tos=0x0)

    *Mar  1 01:01:56.243: Tunnel2: GRE/IP to decaps 172.16.0.34->172.16.0.33 (len=128 ttl=254)

    *Mar  1 01:01:56.247: Tunnel2: GRE decapsulated IP 10.20.0.1->192.168.123.1 (len=100, ttl=254)

    *Mar  1 01:01:56.315: Tunnel2: GRE/IP to classify 172.16.0.34->172.16.0.33 (len=128 type=0x800 ttl=254 tos=0x0)

    *Mar  1 01:01:56.319: Tunnel2: GRE/IP to decaps 172.16.0.34->172.16.0.33 (len=128 ttl=254)

    *Mar  1 01:01:56.323: Tunnel2: GRE decapsulated IP 10.20.0.1->192.168.123.1 (len=100, ttl=254)

    *Mar  1 01:01:56.395: Tunnel2: GRE/IP to classify 172.16.0.34->172.16.0.33 (len=128 type=0x800 ttl=254 tos=0x0)

    *Mar  1 01:01:56.403: Tunnel2: GRE/IP to decaps 172.16.0.34->172.16.0.33 (len=128 ttl=254)

*Mar  1 01:01:56.407: Tunnel2: GRE decapsulated IP 10.20.0.1->192.168.123.1 (len=100, ttl=254)

    *Mar  1 01:01:58.083: Tunnel1: GRE/IP to classify 172.16.0.18->172.16.0.17 (len=104 type=0x800 ttl=254 tos=0xC0)

    *Mar  1 01:01:58.087: Tunnel1: GRE/IP to decaps 172.16.0.18->172.16.0.17 (len=104 ttl=254)

    *Mar  1 01:01:58.091: Tunnel1: GRE decapsulated IP 192.168.10.2->224.0.0.5 (len=80, ttl=1)

    All possible debugging has been turned off

    HUB#

The following debug details the process of the initial NHRP registration when SpokeB’s tunnel comes up:

    HUB#

    *Mar  1 01:15:02.575: NHRP: Receive Registration Request via Tunnel2 vrf 0, packet size: 81

    *Mar  1 01:15:02.579:  (F) afn: IPv4(1), type: IP(800), hop: 255, ver: 1

    *Mar  1 01:15:02.579:      shtl: 4(NSAP), sstl: 0(NSAP)

    *Mar  1 01:15:02.579:  (M) flags: “unique”, reqid: 5

    *Mar  1 01:15:02.583:      src NBMA: 172.16.0.34

    *Mar  1 01:15:02.583:      src protocol: 192.168.123.2, dst protocol: 192.168.123.1

    *Mar  1 01:15:02.587:  (C-1) code: no error(0)

    *Mar  1 01:15:02.591:        prefix: 255, mtu: 1514, hd_time: 7200

    *Mar  1 01:15:02.591:        addr_len: 0(NSAP), subaddr_len: 0(NSAP), proto_len: 0, pref: 0

    *Mar  1 01:15:02.595: NHRP: Send Registration Reply via Tunnel2 vrf 0, packet size: 101

    *Mar  1 01:15:02.599:  src: 192.168.123.1, dst: 192.168.123.2

    *Mar  1 01:15:02.603:  (F) afn: IPv4(1), type: IP(800), hop: 255, ver: 1

    *Mar  1 01:15:02.603:      shtl: 4(NSAP), sstl: 0(NSAP)

    *Mar  1 01:15:02.607:  (M) flags: “unique“, reqid: 5

    *Mar  1 01:15:02.607:      src NBMA: 172.16.0.34

    *Mar  1 01:15:02.611:      src protocol: 192.168.123.2, dst protocol: 192.168.123.1

    *Mar  1 01:15:02.615:  (C-1) code: no error(0)

    *Mar  1 01:15:02.615:        prefix: 255, mtu: 1514, hd_time: 7200

    *Mar  1 01:15:02.619:        addr_len: 0(NSAP), subaddr_len: 0(NSAP), proto_len: 0, pref: 0

*Mar  1 01:15:03.659: %OSPF-5-ADJCHG: Process 10, Nbr 2.2.2.2 on Tunnel2 from LOADING to FULL, Loading Done

    HUB#

And here is the NHRP resolution request from SpokeC, which wants to reach SpokeB’s private network:

    HUB#
    *Mar  1 01:21:51.131: NHRP: Receive Registration Request via Tunnel2 vrf 0, packet size: 81
    *Mar  1 01:21:51.135: NHRP: netid_in = 123, to_us = 0

    *Mar  1 01:21:51.135: NHRP: Finding next idb with in_pak id: 0

  6. Routing
  7. OSPF

    When using OSPF make sure that you announce networks through the tunnel interfaces and that the OSPF network mode is either broadcast or point-to-multipoint, not the default point-to-point.

  8. EIGRP

With EIGRP, make sure that you disable split horizon.

What is DMVPN?


The complexity of DMVPN resides in the multitude of concepts involved in this technology: NHRP, mGRE, and IPSec. So to demystify the beast, it is crucial to enumerate the advantages, disadvantages, and conditions related to the different NBMA topologies and their evolution.

Spokes with permanent public addresses

Hub and Spoke topology

Pro:

– Ease of configuration on spokes: only HUB parameters are configured, and the HUB routes all traffic between spokes.

Con:

– Memory and CPU resource consumption on the HUB.

– Static configuration burdensome, prone to errors and hard to maintain for very large networks.

– Lack of scalability and flexibility.

– No security, network traffic is not protected.

Full/partial mesh topology

Pros:

– Each spoke is able to communicate with other spokes directly.

Cons:

– Static configuration burdensome, prone to errors and hard to maintain for very large networks.

– Lack of scalability and flexibility.

– Additional memory and CPU resource requirements on branch routers for just occasional and non-permanent spoke-to-spoke communications.

– No security, network traffic is not protected.

Point-to-point GRE

Pros:

– GRE supports IP broadcast and multicast to the other end of the tunnel.

– GRE is a unicast protocol, so it can be encapsulated in IPSec to provide routing/multicasting in a protected environment.

Cons:

– Lack of scalability: static configuration is needed between each spoke and the HUB in a Hub and Spoke topology, and between each pair of spokes in a full/partial mesh topology.

– Lack of security.

Point-to-multipoint GRE

Pros:

– A single tunnel interface can terminate all GRE tunnels from all spokes.

– Low configuration complexity; resolves the issue of per-tunnel memory allocations.

Cons:

– Lack of security.

Full/partial mesh topology + IPSec

Pros:

– Each spoke is able to communicate with other spokes directly.

– Security.

Cons:

– Static configuration burdensome, prone to errors and hard to maintain.

– Lack of scalability and flexibility.

– Additional memory and CPU resource requirements on branch routers for just occasional and non-permanent spoke-to-spoke communications.

– IPSec doesn’t support multicast/broadcast, so routing protocols cannot be deployed.

– A pre-configured access-list for interesting traffic is needed to trigger IPSec establishment, so manual intervention is required when applications change.

– IPSec establishment takes 1-10 seconds, hence packet drops in the beginning.

Hub and Spoke topology + IPSec

Pro:

– Ease of configuration on spokes: only HUB parameters are configured, and the HUB routes all traffic between spokes.

Con:

– Memory and CPU resource consumption on the HUB.

– Static configuration burdensome, prone to errors and hard to maintain in very large networks.

– Lack of scalability and flexibility.

– IPSec doesn’t support multicast/broadcast, so routing protocols cannot be deployed.

– IPSec needs pre-configured access-list for interesting traffic that will trigger IPSec establishment.

– IPSec establishment takes 1-10 seconds, so packets are dropped initially.

Point-to-point GRE + IPSec

Pros:

– GRE supports IP broadcast and multicast to the other end of the tunnel.

– GRE is a unicast protocol, so it can be encapsulated in IPSec to provide routing/multicasting in a protected environment.

– Security.

Cons:

– Lack of scalability: static configuration is needed between each spoke and the HUB in a Hub and Spoke topology, and between each pair of spokes in a full/partial mesh topology.

– A pre-configured access-list for interesting traffic is needed to trigger IPSec establishment, so manual intervention is required when applications change.

– IPSec establishment takes 1-10 seconds, hence packet drops in the beginning.

Point-to-multipoint GRE + IPSec

Pros:

– A single tunnel interface can terminate all GRE tunnels from all spokes.

– Low configuration complexity; resolves the issue of per-tunnel memory allocations.

Cons:

– A pre-configured access-list for interesting traffic is needed to trigger IPSec establishment, so manual intervention is required when applications change.

– IPSec establishment takes 1-10 seconds, hence packet drops in the beginning.

Spokes with dynamic public addresses

Issue:

Whether it is GRE or mGRE, Hub and Spoke or full mesh, on the HUB or on the spokes, tunnel establishment requires a pre-configured tunnel source and destination.

Here comes NHRP (Next-Hop Resolution Protocol).

NHRP is used by spokes at startup to provide the HUB with their dynamic public IP and the associated tunnel IP.

NHRP is used by the HUB to respond to spokes’ requests about each other’s public IP addresses.

So the overall solution will be Point-to-multipoint GRE + IPSec + NHRP which is called DMVPN (Dynamic Multipoint VPN).

You will find the previously mentioned topologies in the subcategory “DMVPN” of the category “Security”.

Building DMVPN with mGRE, NHRP and IPSec VPN


 I – OVERVIEW

This lab treats the design and deployment of dynamic multipoint VPN architectures, moving step by step through the configuration and explaining how mGRE (multipoint Generic Routing Encapsulation), NHRP (Next-Hop Resolution Protocol) and IPSec VPN are combined to build a dynamic, secure topology over the Internet for large enterprises with hundreds of sites.

Figure1: physical Topology


 

 

A hypothetical enterprise (figure1) with a central office (HUB) and several branch office sites (spokes) connecting over the Internet is facing rapid business growth, and more and more direct connections between branch offices are needed.

Spokes are spread over different places where it is not always possible to afford a static public address; therefore, the company needs a more scalable method than a simple Hub and Spoke with point-to-point tunneling, or a full mesh topology where the administration of IPSec traffic becomes extremely burdensome.

 

II – DEPLOYMENT

Figure2: Address scheme


 

The DMVPN model provides a scalable configuration for a dynamic-mesh VPN in which the only static relationships configured are those between the spokes and the HUB.

DMVPN concepts include several components like mGRE, NHRP and IPSec Proxy instantiation that will be explained during this lab.

 

II-1 Address scheme:

Each site has its own private address space 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24 and 10.0.4.0/24 for central site, site1, site2 and site3 respectively.

Each spoke router obtains its public IP address dynamically from the ISP; only the HUB has a static permanent public IP.

Table1: address scheme

Network  Addresses

OSPF Area 10 – central site  10.0.1.0/24 
OSPF Area 11 – site 1  10.0.2.0/24 
OSPF Area 12 – site 2  10.0.3.0/24 
OSPF Area 13 – site 3 10.0.4.0/24 
Multipoint GRE network  172.16.0.128/26 
Area 0 link subnet between HUB and Internet  192.168.0.16/30 
Area 0 link between SPOKEA and Internet  192.168.0.12/30 
Area 0 link between SPOKEB and Internet  192.168.0.8/30 
Area 0 link between SPOKEC and Internet 192.168.0.4/30 
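
As a quick sanity check of this address plan, a short Python snippet using only the standard ipaddress module can verify that the four tunnel addresses fall inside the mGRE subnet 172.16.0.128/26:

```python
import ipaddress

# The mGRE subnet from Table1 and the tunnel addresses assigned below.
mgre = ipaddress.ip_network("172.16.0.128/26")
tunnel_ips = ["172.16.0.129", "172.16.0.130", "172.16.0.131", "172.16.0.132"]

for ip in tunnel_ips:
    assert ipaddress.ip_address(ip) in mgre

# A /26 leaves room for 62 hosts: one HUB and up to 61 spokes.
print(mgre.num_addresses - 2)  # -> 62
```

Such a check is handy before deployment: an address typed outside the subnet would fail NHRP registration and OSPF adjacency in ways that are tedious to debug on the routers.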

 

II-2 Multipoint Generic Routing Encapsulation – mGRE:

Table2: Tunnel configuration guideline

Router  Interface  Ip/mask  Source ip  Dest ip  Tunnel type
HUB  Tunnel 0  172.16.0.129/26  S1/0  (unknown)  GRE multipoint
SPOKEA  Tunnel 0  172.16.0.130/26  S1/0  (unknown)  GRE multipoint
SPOKEB  Tunnel 0  172.16.0.131/26  S1/0  (unknown)  GRE multipoint
SPOKEC  Tunnel 0  172.16.0.132/26  S1/0  (unknown)  GRE multipoint

A Hub and Spoke point-to-point GRE deployment would be a heavy burden for the administration of the topology, because all spoke IP addresses must be known in advance.

Moreover, such traditional solutions consume a lot of CPU and memory resources, because each tunnel creates its own IDB (Interface Descriptor Block).

The alternative to point-to-point GRE is multipoint GRE, where a single interface terminates all spoke GRE tunnels, consumes a single IDB, and conserves interface memory structures and interface process management on the HUB.

With point-to-multipoint GRE the tunnel source must be publicly routable; in our case mGRE packets take serial 1/0 as the output interface to the SP network, and the tunnel destination address is resolved dynamically.

Tunnel source and destination form the outer IP header that will carry the encapsulated traffic throughout the physical topology.
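
Since this outer header is added to every packet, the GRE overhead arithmetic explains the MTU adjustments commonly applied to tunnel interfaces. A simplified sketch (IPv4 only; IPSec overhead is not included here, which is why even lower values such as "ip mtu 1400" are often configured once encryption is added):

```python
# Simplified GRE overhead arithmetic (IPv4, no IPsec).
OUTER_IP = 20      # outer IPv4 header added by the tunnel
GRE_BASE = 4       # minimal GRE header
GRE_KEY  = 4       # extra 4 bytes when "tunnel key" is configured

link_mtu = 1500    # typical physical interface MTU
overhead = OUTER_IP + GRE_BASE + GRE_KEY
print(overhead)             # -> 28 bytes of encapsulation per packet
print(link_mtu - overhead)  # -> 1472: largest inner packet that avoids fragmentation
```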

All tunnels participating in the mGRE network are identified by a tunnel key and belong to the same subnet, 172.16.0.128/26, taken from the private address space and routable only inside the company’s network.

HUB:

interface Tunnel0

ip address 172.16.0.129 255.255.255.192

!!The mGRE packet will be encapsulated out of the physical interface

tunnel source Serial1/0

!!Enable mGRE

tunnel mode gre multipoint

!! Tunnel identification key, match the NHRP network-id

tunnel key 1

SPOKEA :

interface Tunnel0

ip address 172.16.0.130 255.255.255.192

tunnel source Serial1/0

tunnel mode gre multipoint

tunnel key 1

SPOKEB :

interface Tunnel0

ip address 172.16.0.131 255.255.255.192

tunnel source Serial1/0

tunnel mode gre multipoint

tunnel key 1

SPOKEC :

interface Tunnel0

ip address 172.16.0.132 255.255.255.192

tunnel source Serial1/0

tunnel mode gre multipoint

tunnel key 1

So far, all tunnel line protocols are still down, because the tunnel destination addresses are unknown and therefore unreachable.

The HUB router knows that traffic destined to 172.16.0.128/26 will be forwarded to the serial interface (the tunnel source) to be encapsulated into the needed mGRE tunnel, and that all other traffic is reachable through the next-hop physical address 192.168.0.18; therefore the HUB needs no particular information about the spokes except that they belong to 172.16.0.128/26.
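
The HUB’s forwarding decision described above is an ordinary longest-prefix match. A small illustrative sketch with Python’s standard ipaddress module (the two route entries mirror the HUB’s default route and its mGRE tunnel subnet; the `lookup` helper is hypothetical, not a router API):

```python
import ipaddress

# Illustrative routing table: (prefix, next hop or exit interface).
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.0.18"),   # default route
    (ipaddress.ip_network("172.16.0.128/26"), "Tunnel0"),  # mGRE subnet
]

def lookup(dst):
    """Return the exit of the most specific matching prefix."""
    matches = [(net, nh) for net, nh in routes
               if ipaddress.ip_address(dst) in net]
    net, nh = max(matches, key=lambda m: m[0].prefixlen)
    return nh

print(lookup("172.16.0.130"))  # -> Tunnel0 (mGRE peer, will be encapsulated)
print(lookup("8.8.8.8"))       # -> 192.168.0.18 (falls through to the default)
```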

 

II-3 Static routing:

HUB:

ip route 0.0.0.0 0.0.0.0 192.168.0.18

ip route 172.16.0.0 255.255.255.192 Serial1/0

As opposed to the HUB, all spokes know the HUB physical IP address, 192.168.0.17, and how to reach it statically (through s1/0). All other traffic is forwarded to the next-hop physical interface IP address.

On SPOKEA:

!! Forward all traffic to the ISP next-hop destination

ip route 0.0.0.0 0.0.0.0 192.168.0.14

!! The HUB physical interface is reachable through serial1/0

ip route 192.168.0.17 255.255.255.255 Serial1/0

On SPOKEB:

ip route 0.0.0.0 0.0.0.0 192.168.0.10

ip route 192.168.0.17 255.255.255.255 Serial1/0

On SPOKEC:

ip route 0.0.0.0 0.0.0.0 192.168.0.5

ip route 192.168.0.17 255.255.255.255 Serial1/0

 

II-4 Dynamic routing:

Figure3: routing topology


Each spoke protects a set of private subnets not known by the others; all other routers must be informed of these subnets.

With distance vector protocols like RIP or EIGRP, disabling split horizon is required so that updates received on the mGRE interface can be sent back out of it to the other spokes.

With link-state protocols like OSPF, the appropriate next hop is automatically reflected within the subnet.

Typically OSPF is configured in point-to-multipoint mode on mGRE interfaces; this causes spokes to install routes with the HUB as next hop, which negates the DMVPN network topology concept. For that reason the DMVPN cloud must be treated as broadcast.

 

NBMA (Non-Broadcast Multiple Access) networks are networks where multiple hosts/devices are connected, but data is sent directly to the destination over a virtual circuit or a switching fabric such as ATM, Frame Relay or X.25.

Broadcast networks – All hosts connected to the network listen to the media, but only the host/device for which the communication is intended receives the frame.

 

Cisco prefers distance vector protocols and recommends using EIGRP for large-scale deployments.

Spoke OSPF priorities are set to 0 to force them into DROTHER state so they never become DR/BDR, whereas the HUB priority is set to a higher value, “10” for instance.

All routers must be configured to announce the mGRE subnet 172.16.0.128/26 as area 0, and 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24 and 10.0.4.0/24 as OSPF non-backbone area 10, area 11, area 12 and area 13 respectively.

Note that the routing protocol deals only with the mGRE logical topology.

HUB:

interface Tunnel0

ip ospf network broadcast

ip ospf priority 10

!

router ospf 10

router-id 1.1.0.1

!! Announce local networks.

network 10.0.1.0 0.0.0.255 area 10

!! Announce mGRE network as OSPF backbone

network 172.16.0.128 0.0.0.63 area 0

SPOKEA:

interface Tunnel0

ip ospf network broadcast

ip ospf priority 0

!

router ospf 10

router-id 1.1.1.1

network 10.0.2.0 0.0.0.255 area 11

network 172.16.0.128 0.0.0.63 area 0

SPOKEB:

interface Tunnel0

ip ospf network broadcast

ip ospf priority 0

!

router ospf 10

router-id 1.1.2.1

network 10.0.3.0 0.0.0.255 area 12

network 172.16.0.128 0.0.0.63 area 0

SPOKEC:

interface Tunnel0

ip ospf network broadcast

ip ospf priority 0

!

router ospf 10

router-id 1.1.3.1

network 10.0.4.0 0.0.0.255 area 13

network 172.16.0.128 0.0.0.63 area 0

 

II-5 Next Hop Resolution Protocol – NHRP:

NHRP’s role is to discover the addresses of other routers/hosts connected to an NBMA network. Generally, NBMA networks require tedious static configuration (static mappings between network layer and NBMA addresses).

NHRP provides ARP-like resolution: devices dynamically learn the addresses of other devices connected to the same NBMA network from a next-hop server, and then communicate with them directly, without passing through intermediate systems as in traditional Hub and Spoke topologies.

In our case, NHRP facilitates building the GRE tunnels. The HUB maintains an NHRP database of spoke mGRE virtual addresses associated with their public interface addresses. Each spoke registers its public address when it boots and queries the NHRP database for other spokes’ IPs when needed.

On each tunnel interface, NHRP must be enabled and identified; it is recommended that the NHRP network-id match the tunnel key.

NHRP participants can optionally be authenticated.

Because NHRP is a client-server protocol, the server (HUB) doesn’t need to know the clients in advance; the clients, however, have to know the server.

Spokes explicitly set the HUB as the next-hop server and statically tie the HUB virtual tunnel IP to the HUB physical interface address. In the same manner, spokes must map multicast forwarding to the HUB physical (NBMA) address.

The same configuration is required on the HUB, but without any static mapping or next-hop server; the HUB simply maps multicast forwarding to each dynamically created NHRP adjacency.

HUB:

interface Tunnel0

ip nhrp authentication cisco

ip nhrp map multicast dynamic

ip nhrp network-id 1

ip nhrp cache non-authoritative

ip ospf network broadcast

SPOKEA:

interface Tunnel0

ip nhrp authentication cisco

ip nhrp map multicast 192.168.0.17

ip nhrp map 172.16.0.129 192.168.0.17

ip nhrp network-id 1

ip nhrp nhs 172.16.0.129

SPOKEB:

interface Tunnel0

ip nhrp authentication cisco

ip nhrp map multicast 192.168.0.17

ip nhrp map 172.16.0.129 192.168.0.17

ip nhrp network-id 1

ip nhrp nhs 172.16.0.129

SPOKEC:

interface Tunnel0

ip nhrp authentication cisco

ip nhrp map multicast 192.168.0.17

ip nhrp map 172.16.0.129 192.168.0.17

ip nhrp network-id 1

ip nhrp nhs 172.16.0.129

 

II-6 IPSec VPN:

The only thing to do after setting IPSec phase 1 and phase 2 parameters is to define an IPSec profile and assign it to the mGRE tunnel interface.

Table3: IPSec parameters

PHASE 1 – IKE parameters

IKE seq  1
Auth. (def. rsa-sig)  pre-shared
Encr. (def. des)  3des
DH (def. group1)  group 2
Hash (def. sha)  sha
Lifetime (def. 86400)  86400
Preshared key  cisco
Addr.  –

PHASE 2

Profile  IPSecProfile
Transform set  ESP-3DES-SHA (esp-3des, esp-sha-hmac)
Mode  transport

 

!!IKE phase1 – ISAKMP

crypto isakmp policy 1

!! Symmetric key algorithm for data encryption

encr 3des

!! authentication type using static key between participants

authentication pre-share

!! asymmetric algorithm for establishing shared symmetric key across the network

group 2

!! preshared key and peer (dynamic)

crypto isakmp key cisco address 0.0.0.0 0.0.0.0

 

!!IKE phase2 – IPSec

crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac

mode transport

 

crypto ipsec profile IPSecProfile

set transform-set ESP-3DES-SHA

 

interface Tunnel0

!! Protect the mGRE tunnel traffic with IPSec

tunnel protection ipsec profile IPSecProfile

 

III – MONITORING

HUB:

First of all, make sure that s1/0 and the tunnel interfaces are up/up; otherwise review your static routing statements.

HUB#sh ip int brief

Interface IP-Address OK? Method Status Protocol

FastEthernet0/0 unassigned YES NVRAM administratively down down

Serial1/0 192.168.0.17 YES NVRAM up up

Serial1/1 unassigned YES NVRAM administratively down down

Serial1/2 unassigned YES NVRAM administratively down down

Serial1/3 unassigned YES NVRAM administratively down down

Loopback0 10.0.1.1 YES NVRAM up up

Loopback1 1.1.0.1 YES manual up up

Tunnel0 172.16.0.129 YES NVRAM up up

HUB#

After the spokes have registered with the HUB through NHRP, the HUB has dynamically learned the IP addresses of all participants in the mGRE (logical) topology and knows how to reach them through the physical topology.

Spoke mGRE tunnel IP addresses are mapped to their physical, publicly routable IP addresses.

HUB#sh ip nhrp

172.16.0.130/32 via 172.16.0.130, Tunnel0 created 02:40:11, expire 01:22:18

Type: dynamic, Flags: unique registered

NBMA address: 192.168.0.13

172.16.0.131/32 via 172.16.0.131, Tunnel0 created 02:40:14, expire 01:21:45

Type: dynamic, Flags: unique registered

NBMA address: 192.168.0.9

172.16.0.132/32 via 172.16.0.132, Tunnel0 created 02:40:14, expire 01:21:54

Type: dynamic, Flags: unique registered

NBMA address: 192.168.0.6

HUB#

The HUB establishes an mGRE tunnel with all spokes and initiates an OSPF neighbor relationship with each one of them.

All routers will exchange routing information through the HUB.

HUB#sh ip ospf neigh

 

Neighbor ID Pri State Dead Time Address Interface

1.1.1.1 0 FULL/DROTHER 00:00:32 172.16.0.130 Tunnel0

1.1.2.1 0 FULL/DROTHER 00:00:35 172.16.0.131 Tunnel0

1.1.3.1 0 FULL/DROTHER 00:00:32 172.16.0.132 Tunnel0

HUB#

Now the HUB knows about all of the spokes' local networks.

HUB#sh ip route

Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP

D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area

N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2

E1 - OSPF external type 1, E2 - OSPF external type 2

i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2

ia - IS-IS inter area, * - candidate default, U - per-user static route

o - ODR, P - periodic downloaded static route

 

Gateway of last resort is 192.168.0.18 to network 0.0.0.0

 

1.0.0.0/32 is subnetted, 1 subnets

C 1.1.0.1 is directly connected, Loopback1

172.16.0.0/26 is subnetted, 2 subnets

C 172.16.0.128 is directly connected, Tunnel0

S 172.16.0.0 is directly connected, Serial1/0

10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks

O IA 10.0.3.1/32 [110/11112] via 172.16.0.131, 02:16:51, Tunnel0

O IA 10.0.2.1/32 [110/11112] via 172.16.0.130, 02:16:51, Tunnel0

C 10.0.1.0/24 is directly connected, Loopback0

O IA 10.0.4.1/32 [110/11112] via 172.16.0.132, 02:16:51, Tunnel0

192.168.0.0/30 is subnetted, 1 subnets

C 192.168.0.16 is directly connected, Serial1/0

S* 0.0.0.0/0 [1/0] via 192.168.0.18

HUB#

 

HUB#ping

Protocol [ip]:

Target IP address: 10.0.2.1

Repeat count [5]:

Datagram size [100]:

Timeout in seconds [2]:

Extended commands [n]: y

Source address or interface: 10.0.1.1

Type of service [0]:

Set DF bit in IP header? [no]:

Validate reply data? [no]:

Data pattern [0xABCD]:

Loose, Strict, Record, Timestamp, Verbose[none]:

Sweep range of sizes [n]:

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 10.0.2.1, timeout is 2 seconds:

Packet sent with a source address of 10.0.1.1

!!!!!

Success rate is 40 percent (2/5), round-trip min/avg/max = 80/116/152 ms

HUB#

 

HUB#ping

Protocol [ip]:

Target IP address: 10.0.4.1

Repeat count [5]:

Datagram size [100]:

Timeout in seconds [2]:

Extended commands [n]: y

Source address or interface: 10.0.1.1

Type of service [0]:

Set DF bit in IP header? [no]:

Validate reply data? [no]:

Data pattern [0xABCD]:

Loose, Strict, Record, Timestamp, Verbose[none]:

Sweep range of sizes [n]:

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 10.0.4.1, timeout is 2 seconds:

Packet sent with a source address of 10.0.1.1

!!!!!

Success rate is 60 percent (3/5), round-trip min/avg/max = 104/128/144 ms

HUB#

 

SPOKEA:

Routing information is exchanged between all routers via the HUB; this is the only OSPF neighbor relationship the spokes establish.

SPOKEA#sh ip ospf neigh

 

Neighbor ID Pri State Dead Time Address Interface

1.1.0.1 1 FULL/DR 00:00:32 172.16.0.129 Tunnel0

SPOKEA#

Nevertheless, each advertised network is directly reachable through the router that announced it.

SPOKEA#sh ip route


Gateway of last resort is 192.168.0.14 to network 0.0.0.0

 

1.0.0.0/32 is subnetted, 1 subnets

C 1.1.1.1 is directly connected, Loopback1

172.16.0.0/26 is subnetted, 1 subnets

C 172.16.0.128 is directly connected, Tunnel0

10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks

O IA 10.0.3.1/32 [110/11112] via 172.16.0.131, 00:01:25, Tunnel0

C 10.0.2.0/24 is directly connected, Loopback0

O IA 10.0.1.1/32 [110/11112] via 172.16.0.129, 00:01:25, Tunnel0

O IA 10.0.4.1/32 [110/11112] via 172.16.0.132, 00:01:25, Tunnel0

192.168.0.0/24 is variably subnetted, 2 subnets, 2 masks

C 192.168.0.12/30 is directly connected, Serial1/0

S 192.168.0.17/32 is directly connected, Serial1/0

S* 0.0.0.0/0 [1/0] via 192.168.0.14

SPOKEA#

Spoke A knows about the next-hop server (the HUB) through a static NHRP map statement, and registers by announcing to the HUB the physical, publicly routable IP address associated with its virtual mGRE IP.
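On the spoke side, this registration is driven by the NHRP configuration on the tunnel interface. A minimal sketch of SPOKEA's Tunnel0, using the HUB addresses shown in the outputs above (the network-id value is illustrative and must match the one configured on the HUB):

interface Tunnel0
!! static map: the NHS mGRE address to the HUB public NBMA address
ip nhrp map 172.16.0.129 192.168.0.17
!! forward multicast (routing protocol hellos) to the HUB only
ip nhrp map multicast 192.168.0.17
!! register with the HUB as next-hop server
ip nhrp nhs 172.16.0.129
!! must match on all members of the DMVPN cloud
ip nhrp network-id 1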

SPOKEA#sh ip nhrp

172.16.0.129/32 via 172.16.0.129, Tunnel0 created 00:02:03, never expire

Type: static, Flags: authoritative used

NBMA address: 192.168.0.17

 

When SPOKEA wants to communicate with 10.0.4.1/32, it inspects its routing table and finds that 10.0.4.1 is reachable through the next-hop mGRE address 172.16.0.132.

 

SPOKEA will ask the HUB for the public IP address that corresponds to 172.16.0.132; the HUB will reply with 192.168.0.6. Only then will the spoke send its traffic directly to SPOKEC.
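To observe this resolution as it happens, you can enable NHRP debugging on the spoke before generating traffic (illustrative; the exact debug output format varies by IOS version):

!! display NHRP resolution requests and replies
debug nhrp packet

!! after the first packets, the spoke cache should hold a dynamic entry
!! mapping 172.16.0.132 to NBMA address 192.168.0.6
show ip nhrp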

 

According to the output of the “debug tunnel” and “traceroute” commands on SPOKEB, the next hop on the mGRE network is the final destination, with no intermediate hops. This is confirmed at the physical topology, where the mGRE reply traffic is encapsulated and sent directly from 192.168.0.6 to 192.168.0.9 with no intermediate hops.

 

SPOKEB#debug tunnel

Tunnel Interface debugging is on

SPOKEB#traceroute 172.16.0.132

 

Type escape sequence to abort.

Tracing the route to 172.16.0.132

 

1 172.16.0.132 80 msec

*Mar 1 00:03:55.839: Tunnel0: GRE/IP to decaps 192.168.0.6->192.168.0.9 (len=84 ttl=253)

*Mar 1 00:03:55.843: Tunnel0: GRE decapsulated IP 172.16.0.132->172.16.0.131 (len=56, ttl=255) * 44 msec

SPOKEB#

 

Because DMVPN involves several concepts (multipoint GRE, NHRP, dynamic routing, and IPSec), troubleshooting issues can be very time-consuming, so the best practice is to focus on each step and make sure it works before configuring the next.
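A possible bottom-up checklist, using only commands already seen in this lab plus standard IOS show commands:

!! 1. transport: physical interfaces up/up and static routes to peers' public IPs
show ip interface brief
show ip route

!! 2. mGRE/NHRP: registrations on the HUB, static map on the spokes
show ip nhrp

!! 3. routing: neighbor relationships and learned spoke networks
show ip ospf neighbor

!! 4. IPSec: phase 1 and phase 2 SAs
show crypto isakmp sa
show crypto ipsec sa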
