GET VPN: it is all about the group.


GET VPN uses a group security paradigm, compared to the traditional point-to-point security paradigm of DMVPN, GRE over IPSec or SSL. Do not confuse this with an any-to-any mesh, which is merely the result of n(n-1)/2 point-to-point security associations between n peers.

We are talking about group security associations (SAs), group states and group keys.

Because a group member doesn't know in advance about the other group members, a central entity named the key server (KS) needs to manage the GET VPN control plane: group member registration, group key distribution and rekeys (updating the security policy and group keys).

Once a group member has registered with the KS for a group, it can handle encrypted data plane communication with any of the other group members using the received group key and group security policy.

GET VPN doesn't create the VPN perimeter (unlike Frame Relay, for example); all group members and the key server are supposed to be already in the same VPN.

This lab is not supposed to be a comprehensive GET VPN reference. Because it is all about stories and process dynamics, I opted for a more visual approach: fewer words, more pictures.

I included a table summarizing the key characteristics of GET VPN compared with traditional IPSec, followed by a step-by-step animation of the main GET VPN processes.

Only then do I provide some configurations of the core MPLS-VPN and of the GET VPN key server and group members.

And finally, the offline lab with a myriad of useful commands collected during a running session.

By rushing to the CLI, you can easily drown yourself in a cup of water 🙂 Better to take the time to understand the overall process; then configuration and troubleshooting will be straightforward and more instructive.

Outline:

  • Comparison of GETVPN vs traditional IPSec.
  • Summary of the main concepts
  • Step-by-step GETVPN process
  • Lab details
    • MPLS-VPN configuration
      • Core
      • PE_CE
    • GET VPN configuration
      • Basic Group member template
      • Basic key server
  • Offline lab
  • Troubleshooting

Comparison table (traditional IPSec vs GET VPN)

Main concept
  • IPSec: per-link, point-to-point security paradigm: each pair of devices establishes a separate security association (figure3).
  • GET VPN: group-based security paradigm: allows a set of network entities to connect through a single group security association.

Encapsulation
  • IPSec: requires the establishment of a tunnel to secure the traffic between encrypting gateways; a new header is added to the packet (figure4); limited QoS preservation (only ToS).
  • GET VPN: tunnel-less: does not use tunnels and preserves the original source/destination IP in the header of the encrypted packet for optimal routing (figure6); preserves the established routing path; preserves the entire QoS parameter set.

Multicast
  • IPSec: traditional IPSec encapsulates multicast into encrypted point-to-point unicast GRE tunnels; alternatively, inefficient multicast replication at the HUB using DMVPN, with many multicast deployment restrictions (RP, MA and multicast source location at the HUB).
  • GET VPN: particularly suited for multicast traffic encryption, with native support; the WAN multicast infrastructure is used to transport encrypted GET VPN multicast traffic.

Infrastructure
  • IPSec: creates an overlay VPN and secures it over the Internet.
  • GET VPN: relies on an MPLS/VPLS any-to-any VPN, with direct flows between all CEs; leverages existing VPNs; suited for enterprises running over private MPLS networks; GET VPN ONLY secures an already established VPN (MPLS-VPN), it doesn't create it.

Management
  • IPSec: per-link security management: each pair holds a security association for the other peer; resource consumption; alternatively, centralized HUB-based management with DMVPN, though with a heterogeneous protocol suite (mGRE, NHRP); less scalable.
  • GET VPN: full-mesh direct communications between sites; group members hold a single security association and don't care about the other members of the group; relies on centralized membership management through key servers: a centralized entity (the key server) manages the control plane and is not involved in data plane communication; data plane communications don't require transport through a HUB entity; low latency/jitter; more scalable.

Security perimeter
  • IPSec: per link.
  • GET VPN: all sites within the same group.

Routing
  • IPSec: two routing levels (figure5a): core routing plus overlay routing between encrypting gateways.
  • GET VPN: single end-to-end routing level, with MPLS-VPN as the underlying VPN infrastructure (figure6a).

Figure3- Traditional IPSec PTP SAs between GMs


Figure4- Traditional IPSec PTP Packet

 


Figure5a-Traditional IPSec routing overlay

 


Figure6- Get-VPN IPSec PTP Packet + TEK

 


Figure6a- GETVPN end-to-end routing

 

 Summary of the main concepts

The key server manages the control plane operations (group member registration, rekeying and policy distribution) on a dedicated plane: initially under the protection of IKE, and after that protected by a common key, the KEK (figure7/8).

The rekey process can use unicast (the default) or multicast transport.
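
For multicast rekey, the key server is pointed at a rekey multicast group through an ACL instead of unicasting the rekey to every GM; a minimal sketch assuming the usual IOS GDOI syntax (the ACL name and the 239.1.1.1 group are illustrative; this lab sticks to unicast rekey, and the WAN must of course be able to transport the chosen group):

crypto gdoi group gdoi-group1
 server local
  ! send rekeys to a multicast group instead of one unicast per GM
  rekey address ipv4 rekey-mcast-acl
!
ip access-list extended rekey-mcast-acl
 ! GDOI rekey traffic sourced by the KS toward the rekey multicast group
 permit udp host 192.168.115.11 eq 848 host 239.1.1.1 eq 848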

Any-to-any GM communications are performed on a separate data plane using the received TEK (Traffic Encryption Key) (figure9).

COOP (Co-operative key server protocol)

  • A key server redundancy feature allowing key servers to communicate and synchronize their keying information.

GDOI protocol (Group Domain of Interpretation)

  • Used for policy & key deployment with group members.

Make sure to differentiate between the registration and rekey processes, and between the authentication and authorization processes.

A-Registration process:

1- There are two types of KS-GM authentication:

  • Shared key (deployed in the lab): a simple secret key shared (hopefully out-of-band) by the KS and all GMs (or per pair).
  • RSA (CA-based authentication): uses asymmetric encryption with public/private key pairs and requires a PKI infrastructure (a minimal enrollment sketch follows this list):
    • Enrollment phase:
      • The client (GM) gets the CA certificate (the CA public key signed with the CA private key).
      • Client enrollment with the CA:
        • The client sends its enrollment request (the client public key signed with the client private key).
        • The CA signs the client certificate with its own (CA) private key and sends it back to the client.
    • Peer-to-peer (GM-to-KS) authentication:
      • Peers exchange certificates and verify each other's signatures using the CA public key.
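
As an illustration of the CA-based option, a minimal GM-side enrollment sketch (the trustpoint name and CA URL are hypothetical; this lab itself uses the shared-key method):

crypto key generate rsa modulus 1024
!
crypto pki trustpoint MY-CA
 enrollment url http://192.0.2.10:80
 revocation-check none
!
! exec mode: fetch the CA certificate, then enroll the router certificate
crypto pki authenticate MY-CA
crypto pki enroll MY-CA
!
crypto isakmp policy 1
 ! rsa-sig is the IOS default authentication method
 authentication rsa-sig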

2- Authorization (during registration):

  • The KS verifies the group ID number presented by the GM: if this ID is valid and the GM has provided valid Internet Key Exchange (IKE) credentials, the key server sends the SA policy and the keys to the group member.

B-Rekey process (after registration): uses asymmetric encryption with a public/private key pair:

  • The primary KS uses the private and public keys (asymmetric) generated before registration and exports them to the secondary KSs.
  • The primary KS generates the group control and data plane keys, the KEK and the TEK (symmetric keys).
  • The KS signs the TEK/KEK and the policy with its own private key and sends them to the GMs (figure7/8/9).
  • The KS distributes its public key to all the GMs.
  • GMs verify the rekeys with the received KS public key.

 


Figure7- Control plane: GM registration


Figure8- Control plane: rekey using KEK key

 


Figure9- GM data plane communications using TEK key

 

 

Step-by-step GET-VPN process

Lab details

Here is the lab topology, pretty straightforward: three group members and one key server, with MPLS-VPN as the underlying VPN infrastructure.

 


Figure1- High-level lab topology


Figure2- Low-level lab topology

 

GET VPN:

  • Customer edge (CE) routers R9, R10 and R12 play the role of GET VPN group members (GMs),
  • R11 plays the role of the key server.

MPLS_VPN:

  • R2, R3 and R5 are the MPLS-VPN provider edge (PE) routers.

MPLS-VPN configuration:

Make sure the underlying MPLS-VPN topology works fine and all connectivity is successful before touching GET VPN.

For the sake of simplicity, a single MPLS core router (label switching router), R4, is used, along with static routing between CE and PE.

 


Figure2- Low-level MPLS-VPN lab infrastructure (MPLS LSR + PEs)

 

MPLS-VPN basic configuration with PE-CE static routing

  • R4, the MPLS core router (LSR), is configured as a BGP route reflector.
  • Make sure the PE routers (WAN interfaces and loopback router IDs) are reachable through the MPLS core OSPF.
  • Enable MPLS on the interfaces facing the PEs.
  • Each PE (R2, R3 and R5) has a similar configuration, and a template can be used to scale the number of clients.

Because this lab is not focused on MPLS-VPN, simple PE-CE static routing is deployed. Each client has its interface and static route in the appropriate VRF. Generally, multiple clients (VRFs) coexist separately in a single PE, and the core transport to and from the other PEs is done through vpnv4 (figure2a).

 


Figure2a- MPLS-VPN PE Virtual Router Forwarding

With PE-CE static routing, redistribution is only needed in one direction: from the static routes into BGP, advertised through the vpnv4 address family.

You will find all details in the offline lab.
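
For reference, a minimal PE-side sketch of one client VRF with PE-CE static routing and the one-way redistribution toward vpnv4 (the VRF name, RD/RT values and addresses are illustrative, not copied from the lab):

ip vrf CUST_A
 rd 2345:10
 route-target both 2345:10
!
interface FastEthernet0/1
 ! CE-facing interface placed into the client VRF
 ip vrf forwarding CUST_A
 ip address 192.168.29.2 255.255.255.0
!
! static route toward the client LAN behind the CE
ip route vrf CUST_A 10.9.9.0 255.255.255.0 192.168.29.9
!
router bgp 2345
 address-family ipv4 vrf CUST_A
  redistribute connected
  redistribute static
 exit-address-family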

R4 (MPLS LSR router)

interface Loopback0

ip address 10.0.0.4 255.255.255.255

ip ospf 2345 area 0

!

interface FastEthernet0/0

ip address 192.168.45.4 255.255.255.0

ip ospf 2345 area 0

mpls label protocol ldp

mpls ip

!

interface FastEthernet0/1

ip address 192.168.24.4 255.255.255.0

ip ospf 2345 area 0

mpls label protocol ldp

mpls ip

!

interface FastEthernet1/0

ip address 192.168.34.4 255.255.255.0

ip ospf 2345 area 0

mpls label protocol ldp

mpls ip

!

router ospf 2345

!

router bgp 2345

no synchronization

bgp router-id 4.4.4.4

network 192.168.24.0

network 192.168.34.0

network 192.168.45.0

neighbor 10.0.0.2 remote-as 2345

neighbor 10.0.0.2 route-reflector-client

neighbor 10.0.0.3 remote-as 2345

neighbor 10.0.0.3 route-reflector-client

neighbor 10.0.0.5 remote-as 2345

neighbor 10.0.0.5 route-reflector-client

no auto-summary

!

address-family vpnv4

neighbor 10.0.0.2 activate

neighbor 10.0.0.2 send-community extended

neighbor 10.0.0.2 route-reflector-client

neighbor 10.0.0.3 activate

neighbor 10.0.0.3 send-community extended

neighbor 10.0.0.3 route-reflector-client

neighbor 10.0.0.5 activate

neighbor 10.0.0.5 send-community extended

neighbor 10.0.0.5 route-reflector-client

exit-address-family

R4#sh ip bgp summ

BGP router identifier 4.4.4.4, local AS number 2345

…

10.0.0.2 4 2345 134 137 4 0 0 02:09:53 0

10.0.0.3 4 2345 134 137 4 0 0 02:09:43 0

10.0.0.5 4 2345 136 137 4 0 0 02:09:52 0

R4#

GET-VPN configuration:

Key server:

crypto isakmp policy 1

encr aes

authentication pre-share

group 2

!

crypto isakmp key cisco address 192.168.103.10

crypto isakmp key cisco address 192.168.29.9

crypto isakmp key cisco address 192.168.125.12

!

crypto ipsec transform-set transform128 esp-aes esp-sha-hmac

!

crypto ipsec profile profile1

set security-association lifetime seconds 7200

set transform-set transform128

!

crypto gdoi group gdoi-group1

identity number 91011

server local

rekey algorithm aes 128

rekey retransmit 10 number 2

rekey authentication mypubkey rsa rekey-rsa

rekey transport unicast

sa ipsec 1

profile profile1

match address ipv4 getvpn-acl

address ipv4 192.168.115.11

!

interface FastEthernet0/0

ip address 192.168.115.11 255.255.255.0

!

ip access-list extended getvpn-acl

deny icmp any any

deny udp any eq 848 any eq 848

deny tcp any any eq 22

deny tcp any eq 22 any

deny tcp any any eq tacacs

deny tcp any eq tacacs any

deny tcp any any eq bgp

deny tcp any eq bgp any

deny ospf any any

deny eigrp any any

deny udp any any eq ntp

deny udp any eq ntp any

deny udp any any eq snmp

deny udp any eq snmp any

deny udp any any eq snmptrap

deny udp any eq snmptrap any

deny udp any any eq syslog

deny udp any eq syslog any

permit ip any any

You can recognize the usual IPSec components: ISAKMP policy, shared keys, and a transform set wrapped in an IPSec profile, followed by the GET VPN GDOI group policy (the IPSec profile, the group ID, the symmetric encryption algorithm, the asymmetric keys to use for rekey authentication, and the GET VPN ACL).
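
Note that the rekey-rsa key pair referenced above must already exist on the key server before GMs register; a minimal sketch (the modulus size is just an example, and the exportable keyword is only needed if the keys will be shared with cooperative key servers):

crypto key generate rsa general-keys label rekey-rsa modulus 1024 exportable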

The GET VPN ACL defines what will NOT be encrypted; consider your own topology's needs:

  • Exclude already encrypted traffic (HTTPS, SSH, etc.).
  • Exclude PE-CE routing protocols.
  • Exclude GET VPN and other control plane protocols (GDOI UDP 848, TACACS, RADIUS, etc.).
  • Exclude management/monitoring protocols (SNMP, syslog, device remote access, ICMP, etc.).

Group member template:

crypto isakmp policy 1

encr aes

authentication pre-share

group 2

crypto isakmp key cisco address 192.168.115.11

!

crypto gdoi group gdoi-group1

identity number 91011

server address ipv4 192.168.115.11

!

crypto map map-group1 10 gdoi

set group gdoi-group1

!

interface FastEthernet0/0

crypto map map-group1

All control plane parameters are controlled by the KS; GMs are dumb devices. All they have to know is how to initiate IKE phase 1 (pre-shared or RSA authentication) with the KSs to register and get the keys, which group they want to join, which KS they will talk to (if several are configured) and, finally, the GDOI crypto map assigned to the outbound interface.

Troubleshooting:

When practicing, or during tenacious data plane issues, a good idea is to use ESP-NULL, the null encryption, where the encryption process is engaged but the packets are not actually encrypted.

This makes packet content inspection and marking between GMs much easier.
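
A minimal sketch of such a null-encryption transform set on the key server (the transform-set name is illustrative, and you should switch back to real encryption once done):

crypto ipsec transform-set transform-null esp-null esp-sha-hmac
!
crypto ipsec profile profile1
 set transform-set transform-null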

To facilitate the deployment and the troubleshooting, think about the topology in terms of separate technology sub-domains; make sure everything works fine before moving to the next sub-domain.

  • Layer2 MPLS
  • MPLS-VPN
  • Core routing
  • PE-CE routing
  • Core SSM multicast (if not unicast rekey)
  • CA infrastructure (if you are not using shared keys to authenticate GMs to the KS)

Possible issues and possible causes:

  • Key servers don't exchange all announcements:
    • Bad buffer: the default buffer is not enough for a large number of GMs.
  • Reassembly timed out:
    • Some announcement fragments are dropped or lost.
    • ICMP (packet too big) denied in the path.
    • No fragmentation because the DF bit is set.
    • Presence of MTU bottlenecks, multiple paths, firewalls.
    • TCP MSS not tuned ("ip tcp adjust-mss").
  • IKE failure / failed negotiation:
    • Pre-shared key mismatch.
    • IKE parameters mismatch.
    • Transport issues between GMs and KSs.
  • Nothing works just after GM registration:
    • The GET VPN ACL policy causes routing traffic to be encrypted: traffic not symmetrically excluded.
  • KS not synchronizing:
    • Check KS-KS transport: avoid making KS-KS communications dependent on successful GET VPN transport.
    • Keys not exported or not imported.
  • Rekey not delivered:
    • Issue with the multicast infrastructure.
    • Issue with the mVPN service.
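
A few verification commands that are typically useful at this point (a sketch; exact keywords may vary by IOS release):

  • show crypto gdoi – group, registration and rekey status
  • show crypto gdoi ks members – on the KS: registered group members
  • show crypto gdoi ks policy – on the KS: the policy pushed to the GMs
  • show crypto gdoi gm acl – on a GM: the downloaded crypto ACL
  • show crypto isakmp sa / show crypto ipsec sa – registration IKE sessions and TEK-based data plane SAs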

For more detailed GETVPN troubleshooting, I urge you to watch the excellent CiscoLive session by Wen Zhang https://www.ciscolive.com/online/connect/sessionDetail.ww?SESSION_ID=78699&tclass=popup

Offline lab:

References:

http://www.cisco.com/c/en/us/products/security/group-encrypted-transport-vpn/index.html

http://www.cisco.com/c/en/us/products/collateral/security/group-encrypted-transport-vpn/deployment_guide_c07_554713.html

https://www.ciscolive.com/online/connect/sessionDetail.ww?SESSION_ID=78699&tclass=popup

http://www.cisco.com/c/dam/en/us/td/docs/solutions/CVD/Aug2013/CVD-GETVPNDesignGuide-AUG13.pdf

https://www.ciscolive.com/online/connect/sessionDetail.ww?SESSION_ID=7907&backBtn=true

http://www.cisco.com/c/en/us/td/docs/ios/12_4t/12_4t11/htgetvpn.html

http://tools.ietf.org/html/rfc3547

http://tools.ietf.org/rfc/rfc4534.txt

IPv6 multicast over IPv6 IPSec VTI


IPv4 IPSec doesn't support multicast, so we need to use GRE (unicast) to encapsulate multicast traffic and then encrypt it. The consequence is more complexity and an additional level of routing, hence less performance.

One of the advantages of IPv6 is the support of IPSec authentication and encryption (AH, ESP) directly in the extension headers, which makes it possible to natively protect IPv6 multicast.

In this lab we will be using IPv6 IPSec site-to-site protection with VTI to natively support IPv6 multicast.

The configuration involves three topics: IPv6 routing, IPv6 IPSec and IPv6 multicast. Each process is built on top of the previous one, so before touching IPSec, make sure you have local connectivity on each segment of the network and complete reachability through IPv6 routing.

As a next step, you can move to IPv6 IPSec and change the routing configuration accordingly (through the VTI).

IPv6 multicast relies on a solid foundation of unicast reachability, so once routes are exchanged between the two sides through the secure connection, you can start configuring IPv6 multicast (BSR, RP, client and server simulation).

Picture1: Lab topology


Lab outline

  • Routing
    • OSPFv3
    • EIGRP for IPv6
  • IPv6 IPSec
    • Using IPv6 IPSec VTI
    • Using OSPFv3 IPSec security feature
  • IPv6 Multicast
    • IPv6 PIM BSR
  • Offline lab
  • Troubleshooting cases
  • Performance testing

Routing

Note:
IPv6 routing relies on link-local addresses, so for troubleshooting purposes the link-local IPs are configured to resemble their respective global addresses, making them easily recognisable. This will be of tremendous help during troubleshooting; otherwise you will find yourself trying to decode the matrix : )

OSPFv3

OSPFv3 needs an interface configured with an IPv4 address for its router ID.

OSPFv3 offloads security to native IPv6 IPSec, so you can secure OSPFv3 communications selectively: on a per-interface or per-area basis.
  Table1: OSPFv3 configuration

IPv6 routing processes need an IPv4-format router ID:
  R2:
    ipv6 router ospf 12
     router-id 2.2.2.2
  R1:
    ipv6 router ospf 12
     router-id 1.1.1.1

Announce the respective LAN interfaces:
  R2:
    interface FastEthernet0/1
     ipv6 ospf 12 area 22
  R1:
    interface FastEthernet0/1
     ipv6 ospf 12 area 11

Disable routing on the physical back-to-back connection to avoid RPF failures:
  R2:
    interface FastEthernet0/0
     ipv6 ospf 12 …
  R1:
    interface FastEthernet0/0
     ipv6 ospf 12 …

The IPv6 gateways exchange routes through the encrypted VTI interface:
  R2:
    interface Tunnel12
     ipv6 ospf network point-to-point
     ipv6 ospf 12 area 0
  R1:
    interface Tunnel12
     ipv6 ospf network point-to-point
     ipv6 ospf 12 area 0

Set the OSPF network type on loopback interfaces if you want to advertise masks other than /128:
  R2:
    interface Loopback0
     ipv6 ospf network point-to-point
     ipv6 ospf 12 area 0
  R1:
    interface Loopback0
     ipv6 ospf network point-to-point
     ipv6 ospf 12 area 0
Table2: EIGRP for IPv6 configuration

IPv6 routing processes need an IPv4-format router ID:
  R2:
    ipv6 router eigrp 12
     eigrp router-id 2.2.2.2
  R1:
    ipv6 router eigrp 12
     eigrp router-id 1.1.1.1

Announce the respective LAN interfaces:
  R2:
    interface FastEthernet0/1
     ipv6 eigrp 12
  R1:
    interface FastEthernet0/1
     ipv6 eigrp 12

Disable routing on the physical back-to-back connection to avoid RPF failures:
  R2:
    interface FastEthernet0/0
     ipv6 eigrp 12
  R1:
    interface FastEthernet0/0
     ipv6 eigrp 12

The IPv6 gateways exchange routes through the encrypted VTI interface:
  R2:
    interface Tunnel12
     ipv6 eigrp 12
  R1:
    interface Tunnel12
     ipv6 eigrp 12

Enable the EIGRP process:
  R2:
    ipv6 router eigrp 12
     no shutdown
  R1:
    ipv6 router eigrp 12
     no shutdown

In case you want to configure EIGRP for IPv6:

– Do not forget the "no shutdown" inside EIGRP configuration mode.

– Similarly to OSPFv3, an interface configured with an IPv4 address is needed for the router ID.

IPv6 IPSec

  • Using IPv6 IPSec VTI
Table3: IPSec configuration

Set the type of ISAKMP authentication (keyring or global pre-shared key):
  R1:
    crypto keyring keyring1
     pre-shared-key address ipv6 2001:DB8::2/128 key cisco
    crypto isakmp key cisco address ipv6 2001:DB8::2/128
  R2:
    crypto keyring keyring1
     pre-shared-key address ipv6 2001:DB8::1/128 key cisco
    crypto isakmp key cisco address ipv6 2001:DB8::1/128

ISAKMP policy:
  R1:
    crypto isakmp policy 10
     encr 3des
     hash md5
     authentication pre-share
     lifetime 3600
  R2:
    crypto isakmp policy 10
     encr 3des
     hash md5
     authentication pre-share
     lifetime 3600

Transform set (symmetric encryption and signed hash algorithms) wrapped in an IPSec profile:
  R1:
    crypto ipsec transform-set 3des ah-sha-hmac esp-3des
    crypto ipsec profile profile0
     set transform-set 3des
  R2:
    crypto ipsec transform-set 3des ah-sha-hmac esp-3des
    crypto ipsec profile profile0
     set transform-set 3des

Set the tunnel mode and bind the IPSec profile:
  R1:
    interface Tunnel12
     ipv6 address FE80::DB8:12:1 link-local
     ipv6 address 2001:DB8:12::1/64
     tunnel source FastEthernet0/0
     tunnel destination 2001:DB8::2
     tunnel mode ipsec ipv6
     tunnel protection ipsec profile profile0
  R2:
    interface Tunnel12
     ipv6 address FE80::DB8:12:2 link-local
     ipv6 address 2001:DB8:12::2/64
     tunnel source FastEthernet0/0
     tunnel destination 2001:DB8::1
     tunnel mode ipsec ipv6
     tunnel protection ipsec profile profile0

Make sure not to advertise the routes through the physical interface, to avoid RPF failures (when the source of the multicast traffic is reached from a different interface than the one provided by the RIB):
  R1:
    interface FastEthernet0/0
     ipv6 address FE80::DB8:1 link-local
     ipv6 address 2001:DB8::1/64
     ipv6 enable
  R2:
    interface FastEthernet0/0
     ipv6 address FE80::DB8:2 link-local
     ipv6 address 2001:DB8::2/64
     ipv6 enable
 

Here is a capture of the traffic (secured) between R1 and R2 gateways

Picture2: Wireshark IPv6 IPSec traffic capture


What could go wrong?

– Encryption algorithms don't match.

– The shared key doesn't match.

– Wrong ISAKMP peers.

– An ACL in the path between the two gateways blocking the gateway IPs, ISAKMP (UDP 500), or ESP/AH (IP protocols 50/51).

– The IPSec profile is not assigned to the tunnel interface (tunnel protection ipsec profile <…>).

– IPSec encryption and/or signed hash algorithms don't match.

  • Using OSPFv3 IPSec security feature

You can still use IPv6 IPSec to encrypt and authenticate only the OSPFv3 traffic, on a per-interface (or per-area) basis.

OSPFv3 will use the IPv6-enabled IP Security (IPsec) secure socket API.

R1

interface FastEthernet0/0
ipv6 ospf 12 area 0
ipv6 ospf encryption ipsec spi 256 esp 3des 123456789A123456789A123456789A123456789A12345678 md5 123456789A123456789A123456789A12

R2

interface FastEthernet0/0
ipv6 ospf 12 area 0
ipv6 ospf encryption ipsec spi 256 esp 3des 123456789A123456789A123456789A123456789A12345678 md5 123456789A123456789A123456789A12
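
The same protection can also be applied per area under the OSPFv3 process instead of per interface; a minimal sketch assuming the standard IOS area encryption syntax (the SPI value is arbitrary and the keys are the same illustrative strings as above):

ipv6 router ospf 12
 area 0 encryption ipsec spi 257 esp 3des 123456789A123456789A123456789A123456789A12345678 md5 123456789A123456789A123456789A12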

Picture4: Wireshark traffic capture – OSPFv3 IPSec feature :


Note that only the OSPFv3 traffic is encrypted.

IPv6 Multicast

IPv6 PIM BSR

The RP (rendezvous point) is the point where the multicast server's offer meets the members' demand.

First-hop routers register directly connected multicast sources with the RP and build (S,G) source trees toward it.

Candidate RPs announce themselves to candidate BSRs, and the latter announce this information to all PIM routers.

All PIM routers looking for a particular multicast group learn the candidate RP addresses from the BSR and build (*,G) shared trees.

Table4: Multicast configuration

(R1 is the BSR candidate, R2 is the RP candidate.)

Enable multicast routing (both routers):
  ipv6 multicast-routing

R1 announced as BSR candidate:
  ipv6 pim bsr candidate bsr 2001:DB8:10::1

R2 announced as RP candidate:
  ipv6 pim bsr candidate rp 2001:DB8:20::2

Everything should be routed through the tunnel interface, to be encrypted:
  R1: ipv6 route ::/0 Tunnel12 FE80::DB8:12:2
  R2: ipv6 route ::/0 Tunnel12 FE80::DB8:12:1

For testing purposes, make one router join a multicast group and ping it from a LAN router on the other side, or opt for more fun by running VLC on one host to read a network stream while streaming a video from a host on the other side:
  interface FastEthernet0/1
   ipv6 mld join-group FF0E::5AB

Make sure that:

  • At least one router is manually configured as a candidate RP.
  • At least one router is manually configured as a candidate BSR.

While the traffic is being multicast, all PIM routers know about the RP and the BSR:

– The (*,G) shared tree is spread over the PIM routers from the last-hop router (connected to the multicast members).

– The (S,G) source tree is established between the first-hop router (connected to the multicast server) and the RP.

– The idea behind IPv6 PIM BSR is the same as in IPv4; here is an animation explaining the process for IPv4.

Let’s check end-to-end multicast streaming:
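
A few commands typically used to verify the stream end to end (a sketch; exact keywords and output formats vary by IOS release):

  • show ipv6 pim neighbor – PIM adjacency over the VTI
  • show ipv6 pim bsr election – which router won the BSR election
  • show ipv6 pim group-map – group-to-RP mapping learned via BSR
  • show ipv6 mroute – (*,G)/(S,G) entries with incoming and outgoing interfaces
  • show ipv6 mld groups – receiver membership on the last-hop router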

Before going to troubleshooting here is the offline lab with all commands:

Troubleshooting

If something doesn't work and you are stuck, isolate the area of work and inspect each process separately, step by step.

Check each step using "show …" commands, so that you know at every stage what you are looking for and can spot what is wrong.

The "sh run" and script comparison technique is limited by visual perception, which is illusory and far from reliable.

Common routing issues

– Make sure you have successful back-to-back connectivity everywhere.

– With EIGRP for IPv6 make sure the process is enabled.

– If routing neighbors are connected through NBMA network, make sure to enable pseudo broadcasting and manually set neighbor commands.

Common IPSec issues

– ISAKMP phase

– Wrong peer

– Wrong shared password

– Not matching isakmp profile

– IPSec phase

– Not matching ipsec profile

Common PIM issues

– If routing neighbors are connected through an NBMA network, make sure the C-RPs and C-BSRs are located at the main site.

– Issue with the client => no (*,G):

– MLD query issue with the last-hop router.

– The last-hop PIM router cannot build the shared tree.

– Issue with RP registration => no (S,G):

– Multicast server MLD issue with the first-hop router.

– The first-hop router cannot register with the RP.

– Issue with the C-BSR: the candidate BSR doesn't advertise the RP information to the PIM routers (BSRs collect all candidate RPs and announce them to all PIM routers so that the best RP is chosen for each group).

– Issue with the C-RPs: the candidate RPs don't announce themselves to the C-BSRs (RPs announce to the C-BSRs which multicast groups they are responsible for).

– RPF failure (the interface used to reach the multicast source, according to the RIB, is not the interface on which the multicast traffic arrives).

Picture5: RPF Failure


Table5: troubleshooting cases

Case 1 – ISAKMP policy: encryption algorithm mismatch
  Simulated wrong configuration:
    crypto isakmp policy 10
     encr aes
  Correct configuration:
    crypto isakmp policy 10
     encr 3des

Case 2 – ISAKMP policy: hash algorithm mismatch
  Simulated wrong configuration:
    crypto isakmp policy 10
     hash sha
  Correct configuration:
    crypto isakmp policy 10
     hash md5

Case 3 – Wrong ISAKMP peer
  Simulated wrong configuration:
    crypto isakmp key cisco address ipv6 2001:DB8::3/128
  Correct configuration:
    crypto isakmp key cisco address ipv6 2001:DB8::2/128

Case 4 – Wrong ISAKMP key
  Simulated wrong configuration:
    crypto isakmp key cisco1 address ipv6 2001:DB8::2/128
  Correct configuration:
    crypto isakmp key cisco address ipv6 2001:DB8::2/128

Case 5 – Wrong tunnel destination
  Simulated wrong configuration:
    interface Tunnel12
     tunnel destination 2001:DB8::3
  Correct configuration:
    interface Tunnel12
     tunnel destination 2001:DB8::2

Case 6 – Wrong tunnel source
  Simulated wrong configuration:
    interface Tunnel12
     tunnel source FastEthernet0/1
  Correct configuration:
    interface Tunnel12
     tunnel source FastEthernet0/0

For more details about each case, refer to the offline lab below, you will find an extensive coverage of all important commands along with debug for each case:

Performance testing

Three cases are tested: multicast traffic between R1 and R2 is routed through:

– Physical interfaces (serial connection): MTU=1500 bytes

– IPv6 GRE: MTU=1456 bytes

– IPv6 IPSec VTI: MTU=1391 bytes

The following tests are performed using iperf in a GNS3 lab environment, so the results should only be interpreted relative to each other.

Picture6: Iperf testing


References

http://www.faqs.org/rfcs/rfc6226.html

http://tools.ietf.org/html/rfc5059

http://www.cisco.com/en/US/docs/ios/ipv6/configuration/guide/ip6-multicast.html#wp1055997

https://supportforums.cisco.com/docs/DOC-27971

http://www.cisco.com/en/US/docs/ios/ipv6/configuration/guide/ip6-tunnel_external_docbase_0900e4b1805a3c71_4container_external_docbase_0900e4b181b83f78.html

http://www.cisco.com/web/learning/le21/le39/docs/TDW_112_Prezo.pdf

http://networklessons.com/multicast/ipv6-pim-mld-example/

http://www.gogo6.com/profiles/blogs/ietf-discusses-deprecating-ipv6-fragments

http://tools.ietf.org/html/draft-taylor-v6ops-fragdrop-01

https://datatracker.ietf.org/doc/draft-bonica-6man-frag-deprecate

http://blog.initialdraft.com/archives/1648/

QoS and IPSec interactions


The efficiency of QoS differentiated services depends on the consistency and coherence of the QoS policy deployed hop by hop (per-hop behavior, PHB) along the traffic path.

Some services like IPSec encryption or tunnelling can cause issues to QoS. The purpose of this article is to clarify these interactions.

 

Outline

  • Overview.
  • Conclusions.
  • Examples of deployment (Lab1,Lab2)

 

Overview

Interactions between QoS and IPSec are based on three principles:

  • Order of operations
  • QoS criteria
  • QoS policy location

 

  1. Order of operations: by default, IOS performs the tunnelling and VPN operations first and then applies the QoS policy.

    Figure1: default order of operations

    With QoS pre-classification the previous order is reversed: the QoS policy is applied first, and then the tunnelling and VPN processes (see the sketch after this list).

    Figure2: QoS pre-classification

    Technically, the QoS operation is still performed after IPSec, but it uses the original header fields preserved in a temporary memory structure.

     

  2. QoS criteria:
    What is your QoS policy looking for?

    With GRE tunnelling or IPSec encryption, a new header is built, and by default only the ToS field is copied from the original header to the new tunnel or IPSec header (tunnel mode). So be careful if your classification criteria are based on fields other than ToS/DSCP!

     

    Figure3: TOS/DSCP preservation

     

  3. QoS policy location:
    QoS traffic classification is based on the inspection of IP header fields like addresses, protocol ID, ports, ToS, etc.

    In fact, what is visible to the QoS process depends on where your QoS policy is placed:

  • On the tunnel interface: before header modification (tunnelling and VPN operations).
  • On the physical interface: after header modification (tunnelling and VPN operations).
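
As a minimal sketch of how the pre-classification knob from point 1 is enabled (the interface and crypto map names here are generic placeholders, not taken from the labs below), qos pre-classify can be configured on the tunnel interface or inside the crypto map, depending on where the encapsulation/encryption is applied:

interface Tunnel0
 ! GRE/VTI case: classify on the cleartext packet before encapsulation
 qos pre-classify
!
crypto map MYMAP 10 ipsec-isakmp
 ! crypto map case: classify on the original header before encryption
 qos pre-classify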

I hope the following illustrations will provide extra insight into how QoS and IPSec are related.

 

Figure 4: ONLY QoS policy applied to physical interface (header visible)

 

Figure 5: IPSec + QoS policy applied to physical interface (only ToS preserved)

 

Figure 6: IPSec + QoS pre-classification (original header visible)

 

 

 

 

Figure 7: IPSec + QoS policy applied to tunnel interface (original IP header visible)

 

Table 1 summarises all combinations of the previously mentioned cases:

 

Table 1: summary

The table enumerates the eight combinations of:

  • where the QoS policy is applied (physical interface or tunnel interface),
  • the order of operations (default behaviour or QoS pre-classification),
  • the QoS classification criteria (ToS only, or fields other than ToS such as IPs and ports).

QoS fails in only one of the eight cases (case 7): policy applied to the physical interface, default order of operations, and classification based on fields other than ToS. All the other combinations succeed.

Conclusions:

 

QoS pre-classification is needed when:

  • Classification is based on packet IP header information (src/dst IP, protocol ID, port numbers, flags…)

and

  • The service policy is applied to the physical interface (default order of operations).

QoS pre-classification is NOT needed when:

  • Classification is based ONLY on the ToS criterion.

or

  • The QoS service policy is applied to the tunnel interface (before the VPN operations are performed).

 

Lab 1 : IPSec applied to the physical interface

1-a)

  • Default QoS order of operations (IPSec -> QoS)
  • QoS is based on both DSCP and IP criteria

Figure5: IPSec encryption

R3:

crypto isakmp policy 101

encr 3des

authentication pre-share

group 2

crypto isakmp key cisco address 172.16.12.3

!

!

crypto ipsec transform-set MYTRANSFORMSET esp-3des esp-sha-hmac

!

crypto ipsec profile MYIPSECPROFILE

set transform-set MYTRANSFORMSET

!

!

crypto map MYCRYPTOMAP 10 ipsec-isakmp

set peer 172.16.12.3

set transform-set MYTRANSFORMSET

match address IPSECACL

!

!

class-map match-all MYNOTIPMAP

match not access-group name IPSECACL

match ip dscp af11

class-map match-all MYTOS5MAP

match access-group name IPSECACL

match ip dscp af11

class-map match-all MYNOTTOS5MAP

match access-group name IPSECACL

match not ip dscp af11

!

!

policy-map MYQOSPOLICY

class MYTOS5MAP

bandwidth 100

class MYNOTTOS5MAP

drop

class MYNOTIPMAP

drop

class class-default

!

interface FastEthernet0/1

ip address 172.16.12.4 255.255.255.0

crypto map MYCRYPTOMAP

service-policy output MYQOSPOLICY

!

ip access-list extended IPSECACL

permit icmp host 192.168.2.7 host 192.168.1.6

 

The IPSec traffic (with its new IPSec ESP header) is caught by the class "MYNOTIPMAP" and the drop policy is applied.

 

R3#sh policy-map int fa0/1

FastEthernet0/1

 

Service-policy output: MYQOSPOLICY

 

Class-map: MYTOS5MAP (match-all)

0 packets, 0 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: access-group name IPSECACL

Match: ip dscp af11 (10)

Queueing

Output Queue: Conversation 265

Bandwidth 100 (kbps)Max Threshold 64 (packets)

(pkts matched/bytes matched) 0/0

(depth/total drops/no-buffer drops) 48/0/0

 

Class-map: MYNOTTOS5MAP (match-all)

0 packets, 0 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: access-group name IPSECACL

Match: not ip dscp af11 (10)

drop

 

Class-map: MYNOTIPMAP (match-all)


66 packets, 10956 bytes

5 minute offered rate 0 bps, drop rate 55000 bps

Match: not access-group name IPSECACL

Match: ip dscp af11 (10)


drop

 

Class-map: class-default (match-any)

15 packets, 1520 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: any

R3#

R3 :

R3#sh cry isa sa

IPv4 Crypto ISAKMP SA

dst src state conn-id slot status

172.16.12.4 172.16.12.3 QM_IDLE 1001 0 ACTIVE

 

IPv6 Crypto ISAKMP SA

 

R3#

 

ICMP traffic is generated from R6 toward R7 (192.168.2.7) with DSCP AF11.
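
As a side note on the extended ping below: a Type of service value of 40 (decimal) is 0x28, i.e. 00101000 in binary; its top six bits (001010 = 10) are DSCP 10, which is AF11, exactly what the MYTOS5MAP and MYNOTIPMAP classes match on.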

 

R6#ping

Protocol [ip]:

Target IP address: 192.168.2.7

Repeat count [5]: 100000

Datagram size [100]:

Timeout in seconds [2]:

Extended commands [n]: y

Source address or interface: 40

% Invalid source

Source address or interface:

Type of service [0]: 40

Set DF bit in IP header? [no]:

Validate reply data? [no]:

Data pattern [0xABCD]:

Loose, Strict, Record, Timestamp, Verbose[none]:

Sweep range of sizes [n]:

Type escape sequence to abort.

Sending 100000, 100-byte ICMP Echos to 192.168.2.7, timeout is 2 seconds:

…………………………………………………………….

  • RESULT = NOK

1-b) Apply QoS pre-classification (QoS -> IPSec)

R3:

crypto map MYCRYPTOMAP 10 ipsec-isakmp

set peer 172.16.12.4

set transform-set MYTRANSFORMSET

match address IPSECACL

qos pre-classify

!

QoS is performed 1st (class “MYTOS5MAP” is triggered), and then IPSec is performed.

R3#sh policy-map int fa0/1

FastEthernet0/1

 

Service-policy output: MYQOSPOLICY

 

Class-map: MYTOS5MAP (match-all)

1257 packets, 143298 bytes

5 minute offered rate 6000 bps, drop rate 0 bps

Match: access-group name IPSECACL

Match: ip dscp af11 (10)

Queueing

Output Queue: Conversation 265

Bandwidth 100 (kbps)Max Threshold 64 (packets)

(pkts matched/bytes matched) 0/0

(depth/total drops/no-buffer drops) 0/0/0

 

Class-map: MYNOTTOS5MAP (match-all)

0 packets, 0 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: access-group name IPSECACL

Match: not ip dscp af11 (10)

drop

 

Class-map: MYNOTIPMAP (match-all)

0 packets, 0 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: not access-group name IPSECACL

Match: ip dscp af11 (10)

drop

 

Class-map: class-default (match-any)

31 packets, 4737 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: any

R3#

From R6 source of the traffic:

R6#ping

Protocol [ip]:

Target IP address: 192.168.2.7

Repeat count [5]: 1000000

Datagram size [100]:

Timeout in seconds [2]:

Extended commands [n]: y

Source address or interface:

Type of service [0]: 40

Set DF bit in IP header? [no]:

Validate reply data? [no]:

Data pattern [0xABCD]:

Loose, Strict, Record, Timestamp, Verbose[none]:

Sweep range of sizes [n]:

Type escape sequence to abort.

Sending 1000000, 100-byte ICMP Echos to 192.168.2.7, timeout is 2 seconds:

…………………………U.U.U.U.U………!!!!!!!!!!!!!!!!!!!!!!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

  • RESULT = OK

Lab2 : IPSec applied to the GRE tunnel

  • 2-a):
    • Default QoS order of operations (IPSec -> QoS)
    • QoS is based on both DSCP and IP criteria

Figure5: IPSec GRE tunnel encryptions

 

 

interface Tunnel0

crypto map MYCRYPTOMAP

!

interface FastEthernet0/1

service-policy output MYQOSPOLICY

R3#sh policy-map int fa0/1

FastEthernet0/1

 

Service-policy output: MYQOSPOLICY

 

Class-map: MYTOS5MAP (match-all)


0 packets, 0 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: access-group name IPSECACL

Match: ip dscp af11 (10)

Queueing

Output Queue: Conversation 265

Bandwidth 100 (kbps)Max Threshold 64 (packets)

(pkts matched/bytes matched) 0/0

(depth/total drops/no-buffer drops) 0/0/0

 

Class-map: MYNOTTOS5MAP (match-all)

0 packets, 0 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: access-group name IPSECACL

Match: not ip dscp af11 (10)

drop

 


Class-map: MYNOTIPMAP (match-all)

129 packets, 24510 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: not access-group name IPSECACL

Match: ip dscp af11 (10)

drop

 

Class-map: class-default (match-any)

30 packets, 3060 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: any

R3#

  • RESULT = NOK

2-b) Apply QoS pre-classification (QoS -> IPSec)

 

R3#

interface Tunnel0

crypto map MYCRYPTOMAP

!

interface FastEthernet0/1

service-policy output MYQOSPOLICY

!

crypto map MYCRYPTOMAP 10 ipsec-isakmp

qos pre-classify

R3#sh policy-map int fa0/1

FastEthernet0/1

 

Service-policy output: MYQOSPOLICY

 

Class-map: MYTOS5MAP (match-all)

1689 packets, 320910 bytes

5 minute offered rate 15000 bps, drop rate 0 bps

Match: access-group name IPSECACL


Match: ip dscp af11 (10)

Queueing

Output Queue: Conversation 265

Bandwidth 100 (kbps)Max Threshold 64 (packets)

(pkts matched/bytes matched) 0/0

(depth/total drops/no-buffer drops) 0/0/0

 

Class-map: MYNOTTOS5MAP (match-all)

0 packets, 0 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: access-group name IPSECACL

Match: not ip dscp af11 (10)

drop

 

Class-map: MYNOTIPMAP (match-all)

0 packets, 0 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: not access-group name IPSECACL

Match: ip dscp af11 (10)

drop

 

Class-map: class-default (match-any)

2 packets, 120 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: any

R3#

  • RESULT = OK

What is DMVPN ?


The complexity of DMVPN resides in the multitude of concepts involved in this technology: NHRP, mGRE, and IPSec. So, to demystify the beast, it is crucial to enumerate the advantages, disadvantages, and conditions related to the different NBMA topologies and their evolution.

Spokes with permanent public addresses

Hub and Spoke topology

Pro:

– Ease of configuration on spokes, only HUB parameters are configured and the HUB routes all traffic between spokes.

Con:

– Memory and CPU resource consumption on the HUB.

– Static configuration burdensome, prone to errors and hard to maintain for very large networks.

– Lack of scalability and flexibility.

– No security, network traffic is not protected.

Full/partial mesh topology

Pros:

– Each spoke is able to communicate with other spokes directly.

Cons:

– Static configuration burdensome, prone to errors and hard to maintain for very large networks.

– Lack of scalability and flexibility.

– Additional memory and CPU resource requirements on branch routers for just occasional and non-permanent spoke-to-spoke communications.

– No security, network traffic is not protected.

Point-to-point GRE

Pros:

– GRE supports IP broadcast and multicast to the other end of the tunnel.

– GRE is a unicast protocol, so it can be encapsulated in IPSec and provide routing/multicasting in a protected environment.

Cons:

– Lack of scalability: static configuration is needed between each spoke and the HUB in a hub-and-spoke topology, and between each pair of spokes in a full/partial mesh topology.

– Lack of security.

Point-to-multipoint GRE

Pros:

– A single tunnel interface can terminate all GRE tunnels from all spokes.

– No configuration complexity and resolves the issue of memory allocations.

Cons:

– Lack of security.

Full/partial mesh topology + IPSec

Pros:

– Each spoke is able to communicate with other spokes directly.

– Security.

Cons:

– Static configuration burdensome, prone to errors and hard to maintain.

– Lack of scalability and flexibility.

– Additional memory and CPU resource requirements on branch routers for just occasional and non-permanent spoke-to-spoke communications.

– IPSec doesn't support multicast/broadcast, so routing protocols cannot be deployed.

– A pre-configured access-list is needed for the interesting traffic that triggers IPSec establishment, so manual intervention is needed when applications change.

– IPSec establishment takes 1 to 10 seconds, hence packet drops at the beginning.

Hub and Spoke topology + IPSec

Pro:

– Ease of configuration on spokes, only HUB parameters are configured and the router routes all traffic between spokes.

Con:

– Memory and CPU resource consumption on the HUB.

– Static configuration burdensome, prone to errors and hard to maintain in very large networks.

– Lack of scalability and flexibility.

– IPSec doesn’t support multicast/broadcast, so cannot deploy routing protocols.

– IPSec needs pre-configured access-list for interesting traffic that will trigger IPSec establishment.

– IPSec establishment will take [1-10] seconds, so packet drops.

Point-to-point GRE + IPSec

Pros:

– GRE supports IP broadcast and multicast to the other end of the tunnel.

– GRE is a unicast protocol, so it can be encapsulated in IPSec and provide routing/multicasting in a protected environment.

– Security.

Cons:

– Lack of scalability: static configuration is needed between each spoke and the HUB in a hub-and-spoke topology, and between each pair of spokes in a full/partial mesh topology.

– A pre-configured access-list is needed for the interesting traffic that triggers IPSec establishment, so manual intervention is needed when applications change.

– IPSec establishment takes 1 to 10 seconds, hence packet drops at the beginning.

Point-to-multipoint GRE + IPSec

Pros:

– A single tunnel interface can terminate all GRE tunnels from all spokes.

– No configuration complexity and resolves the issue of memory allocations.

Cons:

– Need pre-configured access-list for interesting traffic that will trigger IPSec establishment so need manual intervention in case applications changes.

– IPSec establishment will take [1-10] seconds, hence packet drops in the beginning.

Spokes with dynamic public addresses

Issue:

Whether it is GRE or mGRE, hub-and-spoke or full mesh, on the HUB or on the spokes, tunnel establishment requires a pre-configured tunnel source and destination.

Here comes NHRP (Next-Hop Resolution Protocol).

NHRP is used by the spokes at startup to provide the HUB with their dynamic public IP and the associated tunnel IP.

NHRP is used by the HUB to answer the spokes' requests about each other's public IP addresses.

So the overall solution is point-to-multipoint GRE + IPSec + NHRP, which is called DMVPN (Dynamic Multipoint VPN).

You will find the previously mentioned topologies in the sub-category "DMVPN" of the parent category "Security", where I will post them.

Building DMVPN with mGRE, NHRP and IPSec VPN


 I – OVERVIEW

This lab treats the design and deployment of dynamic multipoint VPN architectures by moving step by step through the configuration and explaining how mGRE (multipoint Generic Routing Encapsulation), NHRP (Next-Hop Resolution Protocol) and IPSec VPN are combined to build a dynamic, secure topology over the Internet for large enterprises with hundreds of sites.

Figure1: physical Topology


 

 

A hypothetical enterprise (figure1) with a central office (HUB) and several branch office sites (spokes) connecting over the Internet is facing rapid business growth, and more and more direct connections between branch offices are needed.

The spokes are spread over different places where it is not always possible to afford static public addresses; the company therefore needs a more scalable method than a simple hub-and-spoke with point-to-point tunnelling, or a full mesh topology in which the administration of the IPSec traffic becomes extremely burdensome.

 

II – DEPLOYMENT

Figure2: Address scheme


 

The DMVPN model provides a scalable configuration for a dynamic-mesh VPN in which the only static relationships configured are those between the spokes and the HUB.

DMVPN concepts include several components like mGRE, NHRP and IPSec Proxy instantiation that will be explained during this lab.

 

II-1 Address scheme:

Each site has its own private address space 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24 and 10.0.4.0/24 for central site, site1, site2 and site3 respectively.

Each spoke router obtains its public IP address dynamically from the ISP; only the HUB has a static permanent public IP.

Table1: address scheme

OSPF Area 10 – central site: 10.0.1.0/24
OSPF Area 11 – site 1: 10.0.2.0/24
OSPF Area 12 – site 2: 10.0.3.0/24
OSPF Area 13 – site 3: 10.0.4.0/24
Multipoint GRE network: 172.16.0.128/26
Area 0 link subnet between HUB and Internet: 192.168.0.16/30
Area 0 link between SPOKEA and Internet: 192.168.0.12/30
Area 0 link between SPOKEB and Internet: 192.168.0.8/30
Area 0 link between SPOKEC and Internet: 192.168.0.4/30

 

II-2 Multipoint Generic Router Encapsulation – mGRE:

Table2: Tunnel configuration guideline

Router HUB: interface Tunnel0, IP/mask 172.16.0.129/26, tunnel source S1/0, tunnel destination: none (dynamic), tunnel type: GRE multipoint.

Router SPOKEA: interface Tunnel0, IP/mask 172.16.0.130/26, tunnel source S1/0, tunnel destination: none (dynamic), tunnel type: GRE multipoint.

Router SPOKEB: interface Tunnel0, IP/mask 172.16.0.131/26, tunnel source S1/0, tunnel destination: none (dynamic), tunnel type: GRE multipoint.

Router SPOKEC: interface Tunnel0, IP/mask 172.16.0.132/26, tunnel source S1/0, tunnel destination: none (dynamic), tunnel type: GRE multipoint.

A hub-and-spoke topology built with point-to-point GRE would be a heavy administrative burden, because all spoke IP addresses must be known in advance.

Moreover, such traditional solutions consume a lot of CPU and memory resources, because each tunnel creates its own IDB (Interface Descriptor Block).

The alternative to point-to-point GRE is multipoint GRE, where a single interface terminates all spoke GRE tunnels, consumes a single IDB, and conserves interface memory structures and interface process management on the HUB.

With point-to-multipoint GRE the tunnel source must be publicly routable; in our case the mGRE packets take serial 1/0 as the output interface toward the SP network, while the tunnel destination address is determined dynamically.

The tunnel source and destination form the outer IP header that carries the encapsulated traffic across the physical topology.

All tunnels participating in the mGRE network are identified by a tunnel key ID and belong to the same subnet, 172.16.0.128/26, taken from the private address space and routable only inside the company's network.

HUB:

interface Tunnel0

ip address 172.16.0.129 255.255.255.192

!!The mGRE packet will be encapsulated out of the physical interface

tunnel source Serial1/0

!!Enable mGRE

tunnel mode gre multipoint

!! Tunnel identification key, match the NHRP network-id

tunnel key 1

SPOKEA :

interface Tunnel0

ip address 172.16.0.130 255.255.255.192

tunnel source Serial1/0

tunnel mode gre multipoint

tunnel key 1

SPOKEB :

interface Tunnel0

ip address 172.16.0.131 255.255.255.192

tunnel source Serial1/0

tunnel mode gre multipoint

tunnel key 1

SPOKEC :

interface Tunnel0

ip address 172.16.0.132 255.255.255.192

tunnel source Serial1/0

tunnel mode gre multipoint

tunnel key 1

So far, the tunnel protocol state is still down, because the tunnel destination addresses are unknown, hence unreachable.

The HUB router knows that traffic destined to 172.16.0.128/26 will be forwarded to the serial interface (tunnel source) to be encapsulated into the needed mGRE tunnel, and that all other traffic is reachable through the next-hop physical address 192.168.0.18; therefore no particular information about the spokes is needed on the HUB, except that they belong to 172.16.0.128/26.

 

II-3 Static routing:

HUB:

ip route 0.0.0.0 0.0.0.0 192.168.0.18

ip route 172.16.0.0 255.255.255.192 Serial1/0

As opposed to the HUB, all spokes know the HUB physical IP address 192.168.0.17 and how to reach it statically (through s1/0). All other traffic will be forwarded to the next hop physical interface IP address.

On SPOKEA:

!! Forward all traffic to the ISP next-hop destination

ip route 0.0.0.0 0.0.0.0 192.168.0.14

!! The HUB physical interface is reachable through serial1/0

ip route 192.168.0.17 255.255.255.255 Serial1/0

On SPOKEB:

ip route 0.0.0.0 0.0.0.0 192.168.0.10

ip route 192.168.0.17 255.255.255.255 Serial1/0

On SPOKEC:

ip route 0.0.0.0 0.0.0.0 192.168.0.5

ip route 192.168.0.17 255.255.255.255 Serial1/0

 

II-4 Dynamic routing:

Figure3: routing topology


Each spoke protects a set of private subnets not known by the others; all the other routers must be informed of these subnets.

With distance vector protocols like RIP or EIGRP, "no split-horizon" must be specified so that updates are sent back out of the mGRE interface toward the other spokes.

With link-state protocols like OSPF, the appropriate next hop is automatically reflected within the subnet.

Typically, OSPF configured in point-to-multipoint mode on mGRE interfaces causes the spokes to install routes with the HUB as the next hop, which negates the DMVPN topology concept; for that reason the DMVPN cloud must be treated as a broadcast network.

 

NBMA – Non Broadcast Multiple Access are networks where multiple hosts/devices are connected, but data is sent directly to the destination over a virtual circuit or switching fabric like ATM, FR or X.25.

Broadcast networks – All hosts connected to the network listen to the media but only the host/device to which the communication is intended will receive the frame.

 

Cisco prefers distance vector protocols and recommends using EIGRP for large-scale deployments.

Spoke OSPF priorities are set to 0 to force them into DROTHER mode, so they never become DR/BDR, whereas the HUB priority is set to a higher value, "10" for instance.

All routers must be configured to announce the mGRE subnet 172.16.0.128/26 in area 0, and 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24 and 10.0.4.0/24 as OSPF non-backbone areas 10, 11, 12 and 13 respectively.

Note that the routing protocol is dealing only with mGRE logical topology.

HUB:

interface Tunnel0

ip ospf network broadcast

ip ospf priority 10

!

router ospf 10

router-id 1.1.0.1

!! Announce local networks.

network 10.0.1.0 0.0.0.255 area 10

!! Announce mGRE network as OSPF backbone

network 172.16.0.128 0.0.0.63 area 0

SPOKEA:

interface Tunnel0

ip ospf network broadcast

ip ospf priority 0

!

router ospf 10

router-id 1.1.1.1

network 10.0.2.0 0.0.0.255 area 11

network 172.16.0.128 0.0.0.63 area 0

SPOKEB:

interface Tunnel0

ip ospf network broadcast

ip ospf priority 0

!

router ospf 10

router-id 1.1.2.1

network 10.0.3.0 0.0.0.255 area 12

network 172.16.0.128 0.0.0.63 area 0

SPOKEC:

interface Tunnel0

ip ospf network broadcast

ip ospf priority 0

!

router ospf 10

router-id 1.1.3.1

network 10.0.4.0 0.0.0.255 area 13

network 172.16.0.128 0.0.0.63 area 0

 

II-5 Next Hop Resolution Protocol – NHRP:

The role of NHRP is to discover the addresses of other routers/hosts connected to an NBMA network. Generally, NBMA networks require tedious static configuration (static mappings between network-layer and NBMA addresses).

NHRP provides ARP-like resolution: devices dynamically learn the addresses of other devices connected to the same NBMA network from a next-hop server, and then communicate with them directly, without passing through intermediate systems as in traditional hub-and-spoke topologies.

In our case, NHRP facilitates building the GRE tunnels. The HUB maintains an NHRP database of the spokes' mGRE virtual addresses associated with their public interface addresses. Each spoke registers its public address when it boots and queries the NHRP database for other spokes' IPs when needed.

On each tunnel interface, NHRP must be enabled and identified by a network ID; it is recommended that the NHRP network-id match the tunnel key.

NHRP participants can optionally be authenticated.

Because NHRP is a client-server protocol, the server (HUB) doesn't need to know the clients in advance; the clients, however, have to know the server.

The spokes explicitly set the HUB as the next-hop server and statically tie the HUB virtual tunnel IP to the HUB physical interface s1/0. In the same manner, the spokes must map multicast forwarding to the HUB virtual mGRE IP address.

The same configuration is required on the HUB side, but without any static mapping or next-hop server: the HUB simply maps multicast forwarding to any dynamically created NHRP adjacency.

HUB:

interface Tunnel0

ip nhrp authentication cisco

ip nhrp map multicast dynamic

ip nhrp network-id 1

ip nhrp cache non-authoritative

ip ospf network broadcast

SPOKEA:

interface Tunnel0

ip nhrp authentication cisco

ip nhrp map multicast 192.168.0.17

ip nhrp map 172.16.0.129 192.168.0.17

ip nhrp network-id 1

ip nhrp nhs 172.16.0.129

SPOKEB:

interface Tunnel0

ip nhrp authentication cisco

ip nhrp map multicast 192.168.0.17

ip nhrp map 172.16.0.129 192.168.0.17

ip nhrp network-id 1

ip nhrp nhs 172.16.0.129

SPOKEC:

interface Tunnel0

ip nhrp authentication cisco

ip nhrp map multicast 192.168.0.17

ip nhrp map 172.16.0.129 192.168.0.17

ip nhrp network-id 1

ip nhrp nhs 172.16.0.129

 

II-6 IPSec VPN:

The only thing to do after setting IPSec phase 1 and phase 2 parameters is to define an IPSec profile and assign it to the mGRE tunnel interface.

Table3: IPSec parameters

PHASE 1 – IKE parameters
  IKE sequence number: 1
  Authentication (default rsa-sig): pre-shared
  Encryption (default des): 3des
  DH group (default group 1): group 2
  Hash (default sha): sha
  Lifetime (default 86400): 86400
  Pre-shared key: cisco
  Address: any (0.0.0.0 0.0.0.0)

PHASE 2
  Profile: IPSecProfile
  Transform set: ESP-3DES-SHA (esp-3des esp-sha-hmac)
  Mode: transport

 

!!IKE phase1 – ISAKMP

crypto isakmp policy 1

!! Symmetric key algorithm for data encryption

encr 3des

!! authentication type using static key between participants

authentication pre-share

!! asymmetric algorithm for establishing shared symmetric key across the network

group 2

!! preshared key and peer (dynamic)

crypto isakmp key cisco address 0.0.0.0 0.0.0.0

 

!!IKE phase2 – IPSEec

crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac

mode transport

 

crypto ipsec profile IPSecProfile

set transform-set ESP-3DES-SHA

 

interface Tunnel0

!!encapsulate IPSec inside mGRE

tunnel protection ipsec profile IPSecProfile

 

III – MONITORING

HUB:

First of all, make sure that the s1/0 and tunnel interfaces are up/up; otherwise, review your static routing statements.

HUB#sh ip int brief

Interface IP-Address OK? Method Status Protocol

FastEthernet0/0 unassigned YES NVRAM administratively down down

Serial1/0 192.168.0.17 YES NVRAM up up

Serial1/1 unassigned YES NVRAM administratively down down

Serial1/2 unassigned YES NVRAM administratively down down

Serial1/3 unassigned YES NVRAM administratively down down

Loopback0 10.0.1.1 YES NVRAM up up

Loopback1 1.1.0.1 YES manual up up

Tunnel0 172.16.0.129 YES NVRAM up up

HUB#

After the spokes have registered with the HUB through NHRP, the HUB has dynamically learned the IP addresses of all participants of the mGRE (logical) topology and knows how to reach them through the physical topology.

The spokes' mGRE tunnel IP addresses are mapped to their physical, publicly routable IPs.

HUB#sh ip nhrp

172.16.0.130/32 via 172.16.0.130, Tunnel0 created 02:40:11, expire 01:22:18

Type: dynamic, Flags: unique registered

NBMA address: 192.168.0.13

172.16.0.131/32 via 172.16.0.131, Tunnel0 created 02:40:14, expire 01:21:45

Type: dynamic, Flags: unique registered

NBMA address: 192.168.0.9

172.16.0.132/32 via 172.16.0.132, Tunnel0 created 02:40:14, expire 01:21:54

Type: dynamic, Flags: unique registered

NBMA address: 192.168.0.6

HUB#

The HUB establishes mGRE tunnels with all the spokes and initiates an OSPF neighbor relationship with each of them.

All routers exchange routing information through the HUB.

HUB#sh ip ospf neigh

 

Neighbor ID Pri State Dead Time Address Interface

1.1.1.1 0 FULL/DROTHER 00:00:32 172.16.0.130 Tunnel0

1.1.2.1 0 FULL/DROTHER 00:00:35 172.16.0.131 Tunnel0

1.1.3.1 0 FULL/DROTHER 00:00:32 172.16.0.132 Tunnel0

HUB#

Now the HUB knows about all spokes local networks.

HUB#sh ip route

Codes: C – connected, S – static, R – RIP, M – mobile, B – BGP

D – EIGRP, EX – EIGRP external, O – OSPF, IA – OSPF inter area

N1 – OSPF NSSA external type 1, N2 – OSPF NSSA external type 2

E1 – OSPF external type 1, E2 – OSPF external type 2

i – IS-IS, su – IS-IS summary, L1 – IS-IS level-1, L2 – IS-IS level-2

ia – IS-IS inter area, * – candidate default, U – per-user static route

o – ODR, P – periodic downloaded static route

 

Gateway of last resort is 192.168.0.18 to network 0.0.0.0

 

1.0.0.0/32 is subnetted, 1 subnets

C 1.1.0.1 is directly connected, Loopback1

172.16.0.0/26 is subnetted, 2 subnets

C 172.16.0.128 is directly connected, Tunnel0

S 172.16.0.0 is directly connected, Serial1/0

10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks

O IA 10.0.3.1/32 [110/11112] via 172.16.0.131, 02:16:51, Tunnel0

O IA 10.0.2.1/32 [110/11112] via 172.16.0.130, 02:16:51, Tunnel0

C 10.0.1.0/24 is directly connected, Loopback0

O IA 10.0.4.1/32 [110/11112] via 172.16.0.132, 02:16:51, Tunnel0

192.168.0.0/30 is subnetted, 1 subnets

C 192.168.0.16 is directly connected, Serial1/0

S* 0.0.0.0/0 [1/0] via 192.168.0.18

HUB#

 

HUB#ping

Protocol [ip]:

Target IP address: 10.0.2.1

Repeat count [5]:

Datagram size [100]:

Timeout in seconds [2]:

Extended commands [n]: y

Source address or interface: 10.0.1.1

Type of service [0]:

Set DF bit in IP header? [no]:

Validate reply data? [no]:

Data pattern [0xABCD]:

Loose, Strict, Record, Timestamp, Verbose[none]:

Sweep range of sizes [n]:

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 10.0.2.1, timeout is 2 seconds:

Packet sent with a source address of 10.0.1.1

!!!!!

Success rate is 40 percent (2/5), round-trip min/avg/max = 80/116/152 ms

HUB#

 

HUB#ping

Protocol [ip]:

Target IP address: 10.0.4.1

Repeat count [5]:

Datagram size [100]:

Timeout in seconds [2]:

Extended commands [n]: y

Source address or interface: 10.0.1.1

Type of service [0]:

Set DF bit in IP header? [no]:

Validate reply data? [no]:

Data pattern [0xABCD]:

Loose, Strict, Record, Timestamp, Verbose[none]:

Sweep range of sizes [n]:

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 10.0.4.1, timeout is 2 seconds:

Packet sent with a source address of 10.0.1.1

!!!!!

Success rate is 60 percent (3/5), round-trip min/avg/max = 104/128/144 ms

HUB#

 

SPOKEA:

Routing information is exchanged between all routers via the HUB; this is the only OSPF neighbor relationship the spokes establish.

SPOKEA#sh ip ospf neigh

 

Neighbor ID Pri State Dead Time Address Interface

1.1.0.1 1 FULL/DR 00:00:32 172.16.0.129 Tunnel0

SPOKEA#

Nevertheless, advertised networks are directly reachable through the router that announced them.

SPOKEA#sh ip route


Gateway of last resort is 192.168.0.14 to network 0.0.0.0

 

1.0.0.0/32 is subnetted, 1 subnets

C 1.1.1.1 is directly connected, Loopback1

172.16.0.0/26 is subnetted, 1 subnets

C 172.16.0.128 is directly connected, Tunnel0

10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks

O IA 10.0.3.1/32 [110/11112] via 172.16.0.131, 00:01:25, Tunnel0

C 10.0.2.0/24 is directly connected, Loopback0

O IA 10.0.1.1/32 [110/11112] via 172.16.0.129, 00:01:25, Tunnel0

O IA 10.0.4.1/32 [110/11112] via 172.16.0.132, 00:01:25, Tunnel0

192.168.0.0/24 is variably subnetted, 2 subnets, 2 masks

C 192.168.0.12/30 is directly connected, Serial1/0

S 192.168.0.17/32 is directly connected, Serial1/0

S* 0.0.0.0/0 [1/0] via 192.168.0.14

SPOKEA#

SpokeA knows about the next-hop server, the HUB, through a static map statement and registers by announcing to the HUB the physical, publicly routable IP address associated with its virtual mGRE IP.

SPOKEA#sh ip nhrp

172.16.0.129/32 via 172.16.0.129, Tunnel0 created 00:02:03, never expire

Type: static, Flags: authoritative used

NBMA address: 192.168.0.17

 

When SpokeA wants to communicate with 10.0.4.1/32, it inspects its routing table and finds that 10.0.4.1 is reachable through the next-hop mGRE address 172.16.0.132.

SpokeA then asks the HUB for the public IP address that corresponds to 172.16.0.132; the HUB replies with 192.168.0.6, and only then does the spoke send the traffic directly to SPOKEC.

According to the outputs of the "debug tunnel" and "traceroute" commands on SpokeB, you can see that the next hop on the mGRE network is the final destination, with no intermediate hops; this is confirmed by the delivery through the physical topology, where the mGRE traffic is encapsulated and sent directly from 192.168.0.6 to 192.168.0.9 with no intermediate hops.

 

SPOKEB#debug tunnel

Tunnel Interface debugging is on

SPOKEB#traceroute 172.16.0.132

 

Type escape sequence to abort.

Tracing the route to 172.16.0.132

 

1 172.16.0.132 80 msec

*Mar 1 00:03:55.839: Tunnel0: GRE/IP to decaps 192.168.0.6->192.168.0.9 (len=84 ttl=253)

*Mar 1 00:03:55.843: Tunnel0: GRE decapsulated IP 172.16.0.132->172.16.0.131 (len=56, ttl=255) * 44 msec

SPOKEB#

 

Because DMVPN involves several concepts (multipoint GRE, NHRP, dynamic routing and IPSec), troubleshooting issues can be very time-consuming, so the best practice is to focus on each step and make sure things work well before configuring the next one.
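
A few commands typically used for that step-by-step verification (a sketch; "show dmvpn" only exists on more recent IOS releases than the classic commands below):

  • show ip nhrp (and debug nhrp) – NHRP registrations and resolutions
  • show crypto isakmp sa / show crypto ipsec sa – IKE and IPSec SAs per peer
  • show ip ospf neighbor – routing adjacencies over the mGRE cloud
  • show interfaces tunnel 0 – tunnel state and counters
  • show dmvpn – consolidated NHRP/IPSec view, where available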
