MQC – Frame Relay policing and marking


In a previous post we saw how to configure the Frame Relay cloud (the service provider FR switch) to strictly enforce the committed contract with the customer, by enabling policing on the incoming interface and traffic shaping with congestion management on the outgoing interface. Nevertheless, the customer can choose to tell the service provider which packets should be dropped when congestion is experienced in the FR cloud, which is the topic of this post.

Figure1: Lab topology

As depicted in figure1, shaping with congestion management is configured on the Frame Relay switch outgoing interface: all packets marked with the "DE" bit will be discarded if the s1/0 shaping queue congests (DE threshold reached).

R2, the sender, has the responsibility of marking, for the FRS, all packets that it judges eligible for discard.

A traffic generator station connected to R2 produces two different flows toward the receiver station connected to R1: the first flow, traffic1, represents a critical application; the second, traffic2, is considered junk traffic.

As shown below, in the first section different tests are performed without any policing and marking; in the second section the same tests are repeated with policing and marking:

1) Without policing and DE marking

1-a) Test1:

Critical application: traffic1 – 32kbps – destination (192.168.201.10/udp 5001).

1-b) Test2:

Critical application: traffic1 – 32kbps – destination (192.168.201.10/udp 5001).

Junk traffic: traffic2 – 32kbps – destination (192.168.201.10/udp 5003).

1-c) Test3:

Critical application: traffic1 – 32kbps – destination (192.168.201.10/udp 5001).

Junk traffic: traffic2 – 64kbps – destination (192.168.201.10/udp 5003).

2) With policing and DE marking

2-a) Test4:

Critical application: traffic1 – 32kbps – destination (192.168.201.10/udp 5001).

Junk traffic: traffic2 – 32kbps – destination (192.168.201.10/udp 5003).

2-b) Test5:

Critical application: traffic1 – 32kbps – destination (192.168.201.10/udp 5001).

Junk traffic: traffic2 – 64kbps – destination (192.168.201.10/udp 5003).

 

1) Without policing and DE marking

1-a) Test1:

  • Critical application: traffic1 – 32kbps – destination (192.168.201.10/udp 5001).
  • No policing at the sender R2.

FRS:

FRS_QOS#sh frame pvc 201

 

PVC Statistics for interface Serial1/0 (Frame Relay DCE)

 

DLCI = 201, DLCI USAGE = SWITCHED, PVC STATUS = ACTIVE, INTERFACE = Serial1/0

 

input pkts 77 output pkts 289 in bytes 26225

out bytes 379518 dropped pkts 0 in pkts dropped 0

out pkts dropped 0 out bytes dropped 0

in FECN pkts 0 in BECN pkts 0 out FECN pkts 0

out BECN pkts 0 in DE pkts 0 out DE pkts 0

out bcast pkts 0 out bcast bytes 0

30 second input rate 0 bits/sec, 0 packets/sec


30 second output rate 32000 bits/sec, 2 packets/sec

switched pkts 77

Detailed packet drop counters:

no out intf 0 out intf down 0 no out PVC 0

in PVC down 0 out PVC down 0 pkt too big 0

shaping Q full 0 pkt above DE 0 policing drop 0

pvc create time 01:34:11, last time pvc status changed 00:11:43


Congestion DE threshold 28

cir 64000 bc 8000 be 0 byte limit 1000 interval 125

mincir 32000 byte increment 1000 Adaptive Shaping none

pkts 292 bytes 384024 pkts delayed 2 bytes delayed 936


shaping inactive

traffic shaping drops 0

Queueing strategy: fifo


Output queue 0/40, 0 drop, 6 dequeued

FRS_QOS#

The FRS output queue is not congested, so there is very little queuing and there are no drops.

Figure2: Traffic1 at the destination station

Note that the jitter for traffic1 is almost 0 (good performance).

1-b) Test2:

  • Critical application: traffic1 – 32kbps – destination (192.168.201.10/udp 5001).
  • Junk traffic: traffic2 – 32kbps – destination (192.168.201.10/udp 5003).
  • No policing at the sender R2.

R2:

R2(config-fr-dlci)#do sh frame pvc 102

 

PVC Statistics for interface Serial0/0 (Frame Relay DTE)

 

DLCI = 102, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0.102

 

input pkts 120 output pkts 3754 in bytes 36944

out bytes 5535345 dropped pkts 0 in FECN pkts 0

in BECN pkts 0 out FECN pkts 0 out BECN pkts 0

in DE pkts 0 out DE pkts 0

out bcast pkts 36 out bcast bytes 11889

30 second input rate 0 bits/sec, 0 packets/sec


30 second output rate 64000 bits/sec, 5 packets/sec

pvc create time 00:33:26, last time pvc status changed 00:31:56

R2(config-fr-dlci)#

 

R2(config-fr-dlci)#do sh int s0/0

Serial0/0 is up, line protocol is up

Hardware is PowerQUICC Serial

MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,

reliability 255/255, txload 10/255, rxload 1/255

Encapsulation FRAME-RELAY, loopback not set

Keepalive set (10 sec)

LMI enq sent 207, LMI stat recvd 207, LMI upd recvd 0, DTE LMI up

LMI enq recvd 0, LMI stat sent 0, LMI upd sent 0

LMI DLCI 0 LMI type is ANSI Annex D frame relay DTE

Broadcast queue 0/64, broadcasts sent/dropped 37/0, interface broadcasts 0

Last input 00:00:04, output 00:00:00, output hang never

Last clearing of “show interface” counters 00:34:34

Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0

Queueing strategy: fifo

Output queue :0/40 (size/max)

30 second input rate 0 bits/sec, 0 packets/sec


30 second output rate 64000 bits/sec, 5 packets/sec

328 packets input, 40351 bytes, 0 no buffer

Received 0 broadcasts, 0 runts, 0 giants, 0 throttles

1 input errors, 0 CRC, 1 frame, 0 overrun, 0 ignored, 0 abort

4340 packets output, 6106331 bytes, 0 underruns

0 output errors, 0 collisions, 1 interface resets

0 output buffer failures, 0 output buffers swapped out

2 carrier transitions

DCD=up DSR=up DTR=up RTS=up CTS=up

 

R2(config-fr-dlci)#

64kbps represents the aggregated bandwidth of the two applications: traffic1 and traffic2.

FRS:

FRS_QOS#sh frame pvc 201

 

PVC Statistics for interface Serial1/0 (Frame Relay DCE)

 

DLCI = 201, DLCI USAGE = SWITCHED, PVC STATUS = ACTIVE, INTERFACE = Serial1/0

 

input pkts 2 output pkts 506 in bytes 658

out bytes 757672 dropped pkts 0 in pkts dropped 0

out pkts dropped 0 out bytes dropped 0

in FECN pkts 0 in BECN pkts 0 out FECN pkts 0

out BECN pkts 0 in DE pkts 0 out DE pkts 0

out bcast pkts 0 out bcast bytes 0

30 second input rate 0 bits/sec, 0 packets/sec

30 second output rate 64000 bits/sec, 5 packets/sec

switched pkts 2

Detailed packet drop counters:

no out intf 0 out intf down 0 no out PVC 0

in PVC down 0 out PVC down 0 pkt too big 0

shaping Q full 0 pkt above DE 0 policing drop 0

pvc create time 02:11:44, last time pvc status changed 00:49:17

Congestion DE threshold 28

cir 64000 bc 8000 be 0 byte limit 1000 interval 125

mincir 32000 byte increment 1000 Adaptive Shaping none

pkts 498 bytes 745656 pkts delayed 498 bytes delayed 745656

shaping active

traffic shaping drops 0

Queueing strategy: fifo

Output queue 23/40, 0 drop, 502 dequeued

FRS_QOS#

The FRS output queue is congested, but there is no dropping because the DE threshold is not reached.

Figure3: Traffic1 at the destination station

Traffic1 is received at the destination station at its 32kbps sending rate, without losses but with larger delay and jitter (Figure3).

1-c) Test3:

  • Critical application: traffic1 – 32kbps – destination (192.168.201.10/udp 5001).
  • Junk traffic: traffic2 – 64kbps – destination (192.168.201.10/udp 5003).
  • No policing at the sender R2.

R2:

R2(config-fr-dlci)#do sh int s0/0

Serial0/0 is up, line protocol is up

Hardware is PowerQUICC Serial

MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,

reliability 255/255, txload 16/255, rxload 1/255

Encapsulation FRAME-RELAY, loopback not set

Keepalive set (10 sec)

LMI enq sent 255, LMI stat recvd 255, LMI upd recvd 0, DTE LMI up

LMI enq recvd 0, LMI stat sent 0, LMI upd sent 0

LMI DLCI 0 LMI type is ANSI Annex D frame relay DTE

Broadcast queue 0/64, broadcasts sent/dropped 45/0, interface broadcasts 0

Last input 00:00:03, output 00:00:00, output hang never

Last clearing of “show interface” counters 00:42:34

Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0

Queueing strategy: fifo

Output queue :0/40 (size/max)

30 second input rate 0 bits/sec, 0 packets/sec


30 second output rate 97000 bits/sec, 8 packets/sec

406 packets input, 47591 bytes, 0 no buffer

Received 0 broadcasts, 0 runts, 0 giants, 0 throttles

1 input errors, 0 CRC, 1 frame, 0 overrun, 0 ignored, 0 abort

7079 packets output, 10124289 bytes, 0 underruns

0 output errors, 0 collisions, 1 interface resets

0 output buffer failures, 0 output buffers swapped out

2 carrier transitions

DCD=up DSR=up DTR=up RTS=up CTS=up

 

R2(config-fr-dlci)#

Figure4: effect of junk traffic (64kbps) on critical traffic

As you can see in figure4, the critical traffic experiences bad performance, with packet drops and high jitter.

FRS:

FRS_QOS#sh frame pvc 201

 

PVC Statistics for interface Serial1/0 (Frame Relay DCE)

 

DLCI = 201, DLCI USAGE = SWITCHED, PVC STATUS = ACTIVE, INTERFACE = Serial1/0

 

input pkts 2 output pkts 575 in bytes 658

out bytes 863650 dropped pkts 311 in pkts dropped 0

out pkts dropped 311 out bytes dropped 464782

late-dropped out pkts 311 late-dropped out bytes 464782

in FECN pkts 0 in BECN pkts 0 out FECN pkts 0

out BECN pkts 0 in DE pkts 0 out DE pkts 0

out bcast pkts 0 out bcast bytes 0

30 second input rate 0 bits/sec, 0 packets/sec

30 second output rate 63000 bits/sec, 5 packets/sec

switched pkts 2

Detailed packet drop counters:

no out intf 0 out intf down 0 no out PVC 0

in PVC down 0 out PVC down 0 pkt too big 0

shaping Q full 311 pkt above DE 0 policing drop 0

pvc create time 02:07:04, last time pvc status changed 00:44:35

Congestion DE threshold 28

cir 64000 bc 8000 be 0 byte limit 1000 interval 125

mincir 32000 byte increment 1000 Adaptive Shaping none

pkts 581 bytes 872662 pkts delayed 581 bytes delayed 872662

shaping active

traffic shaping drops 0

Queueing strategy: fifo

Output queue 39/40, 318 drop, 587 dequeued

FRS_QOS# 

The FRS output queue is heavily congested and overflows; since no packets are marked with the "DE" bit in this test, the drops ("shaping Q full 311") hit all traffic indiscriminately, the critical application included.

2) With policing and DE marking

2-a) Test4:

  • Critical application: traffic1 – 32kbps – destination (192.168.201.10/udp 5001).
  • Junk traffic: traffic2 – 32kbps – destination (192.168.201.10/udp 5003).
  • Policing configured on R2.

There are two ways of configuring MQC Frame Relay policing:

The first is to use traditional MQC with a class map matching "fr-dlci" to specify which traffic will be policed, and to apply the policy to the main interface (figure 5).

The second is to make sure the policed traffic falls into the "class-default" class and to apply the policy directly to the PVC (figure 6).

In our example we use the second one; a minimal sketch of the first approach is shown after the figures.

Figure 5: Apply policy to the main interface

Figure 6: Apply policy directly to a specific PVC
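As a hedged illustration of the first method (the class-map, policy-map and interface names here are hypothetical, not taken from the lab), the policy would match the PVC by DLCI and be applied to the main interface; support for interface-level MQC combined with FR traffic shaping varies by IOS version, so treat this as a sketch only:

class-map match-all PVC102_cmap
match fr-dlci 102
!
policy-map POLICY_MAIN
class PVC102_cmap
police cir 16000 bc 2000
conform-action transmit
exceed-action set-frde-transmit
!
interface Serial0/0
service-policy output POLICY_MAIN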

Parameters      | Sender: Shaping (out) | Sender: Policing (out) | FR switch: Shaping (out, holdq=40)
CIR             | 64000                 | 16000                  | 64kbps
Congestion mgmt | –                     | –                      | DE 70%
Bc = CIR*Tc     | 64000                 | 2000                   | 8000
Tc              | 125ms                 | 125ms                  | 125ms
Be              | 0                     | 0                      | 0

(In this test there is no inbound policing on the FR switch and no inbound shaping on the receiver.)

 

 

R2 (sender) configuration:

ip access-list extended TRAFFIC1

permit udp host 192.168.102.10 host 192.168.201.10 eq 5001

This ACL matches the critical application traffic.

class-map match-all TRAFFIC1_cmap

match access-group name TRAFFIC1

!

policy-map POLICY_R2

class TRAFFIC1_cmap

class class-default

police cir 16000 bc 2000

conform-action transmit

exceed-action set-frde-transmit

An empty class "TRAFFIC1_cmap" for the critical traffic is included in the policy-map so that everything that is not traffic1 (traffic2 included) falls into the "class-default" class; there we can apply a two-color policer (figure7) that transmits conforming traffic and marks everything exceeding with the "DE" bit.

map-class frame-relay FR_mapc_R2

frame-relay traffic-rate 64000 64000

service-policy output POLICY_R2

The overall traffic through the PVC is shaped at 64kbps and a subset of that traffic is policed at 16kbps (through the policy applied in the output direction).

interface Serial0/0.102 point-to-point


frame-relay interface-dlci 102


class FR_mapc_R2

The map-class is then applied directly to PVC 102. The policer counters can be checked as sketched below.
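As a usage sketch (the corresponding output is not reproduced here), the MQC policing and marking counters can be verified per PVC with:

R2#show policy-map interface serial0/0.102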

Figure7: 2-color policing


FRS:

FRS_QOS#sh frame pvc 201

 

PVC Statistics for interface Serial1/0 (Frame Relay DCE)

 

DLCI = 201, DLCI USAGE = SWITCHED, PVC STATUS = ACTIVE, INTERFACE = Serial1/0

 

input pkts 18 output pkts 1064 in bytes 3921

out bytes 1584660 dropped pkts 0 in pkts dropped 0

out pkts dropped 0 out bytes dropped 0

in FECN pkts 0 in BECN pkts 0 out FECN pkts 0

out BECN pkts 0 in DE pkts 0 out DE pkts 86

out bcast pkts 0 out bcast bytes 0

30 second input rate 0 bits/sec, 0 packets/sec


30 second output rate 63000 bits/sec, 5 packets/sec

switched pkts 18

Detailed packet drop counters:

no out intf 0 out intf down 0 no out PVC 0

in PVC down 0 out PVC down 0 pkt too big 0

shaping Q full 0 pkt above DE 0 policing drop 0

pvc create time 01:39:33, last time pvc status changed 01:38:13


Congestion DE threshold 28

cir 64000 bc 8000 be 0 byte limit 1000 interval 125

mincir 32000 byte increment 1000 Adaptive Shaping none

pkts 1064 bytes 1584660 pkts delayed 563 bytes delayed 836566


shaping active

traffic shaping drops 0

Queueing strategy: fifo


Output queue 7/40, 0 drop, 570 dequeued

FRS_QOS#

Note that output packets are marked with the "DE" bit but none are dropped, because the threshold of 70% (28 out of 40) is not reached; both traffic1 and traffic2 are received by R1 at the sender rate, as shown in figure 8 and figure 9.

Figure8: traffic1 at the receiver station

Figure9: traffic2 at the receiver station

R1:

R1#sh frame pvc 201

 

PVC Statistics for interface Serial0/0 (Frame Relay DTE)

 

DLCI = 201, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0.201

 

input pkts 22963 output pkts 3143 in bytes 34172308

out bytes 598859 dropped pkts 0 in pkts dropped 0

out pkts dropped 0 out bytes dropped 0

in FECN pkts 0 in BECN pkts 0 out FECN pkts 0

out BECN pkts 0 in DE pkts 2774 out DE pkts 0

out bcast pkts 122 out bcast bytes 40129

5 minute input rate 57000 bits/sec, 2 packets/sec

5 minute output rate 4000 bits/sec, 2 packets/sec

pvc create time 02:01:04, last time pvc status changed 02:01:04

cir 64000 bc 8000 be 0 byte limit 1000 interval 125

mincir 32000 byte increment 1000 Adaptive Shaping none

pkts 3144 bytes 599039 pkts delayed 24 bytes delayed 8574

shaping inactive

traffic shaping drops 0

Queueing strategy: fifo

Output queue 0/40, 0 drop, 24 dequeued

R1#

The above output shows that traffic can be marked DE along the path and still reach the destination, because there was no congestion in the FR cloud.

2-b) Test5:

  • Critical application: traffic1 – 32kbps – destination (192.168.201.10/udp 5001).
  • Junk traffic: traffic2 – 64kbps – destination (192.168.201.10/udp 5003).
  • Policing configured on R2.

FRS:

FRS_QOS#sh frame-relay pvc 201

 

PVC Statistics for interface Serial1/0 (Frame Relay DCE)

 

DLCI = 201, DLCI USAGE = SWITCHED, PVC STATUS = ACTIVE, INTERFACE = Serial1/0

 

input pkts 4 output pkts 547 in bytes 2184

out bytes 817932 dropped pkts 111 in pkts dropped 0

out pkts dropped 111 out bytes dropped 166722

late-dropped out pkts 111 late-dropped out bytes 166722

in FECN pkts 0 in BECN pkts 0 out FECN pkts 0

out BECN pkts 0 in DE pkts 0 out DE pkts 420

out bcast pkts 0 out bcast bytes 0

30 second input rate 0 bits/sec, 0 packets/sec


30 second output rate 62000 bits/sec, 5 packets/sec

switched pkts 4

Detailed packet drop counters:

no out intf 0 out intf down 0 no out PVC 0

in PVC down 0 out PVC down 0 pkt too big 0

shaping Q full 111 pkt above DE 0 policing drop 0

pvc create time 01:46:49, last time pvc status changed 01:45:28


Congestion DE threshold 28

cir 64000 bc 8000 be 0 byte limit 1000 interval 125

mincir 32000 byte increment 1000 Adaptive Shaping none

pkts 553 bytes 826944 pkts delayed 553 bytes delayed 826944


shaping active

traffic shaping drops 0

Queueing strategy: fifo


Output queue 28/40, 111 drop, 553 dequeued

FRS_QOS#

The portion of traffic2 above the police rate of 16kbps is marked "DE" at R2; because of traffic2's high rate of 64kbps, the FRS output queue congests and reaches the "DE" threshold, so the FRS starts dropping the traffic2 packets marked "DE".

According to figure10 and figure11, traffic1 is still received at the sender rate, but traffic2 is not, because it is marked with "DE" and therefore dropped by the FRS.

Figure10: traffic1 at the receiver station

Figure11: policed traffic2 at the receiver station

CONCLUSION:

The Discard Eligible marking and dropping mechanism certainly resolves the problem of applications sensitive to dropping, but not the delay and jitter sensitivity caused by queuing; for that, a separate queuing mechanism is needed for delay/jitter-sensitive applications.
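As a hedged sketch of that idea (not part of this lab), the existing policy-map could give traffic1 a low-latency priority queue while everything else is still policed and DE-marked; the value 32 here is the priority bandwidth in kbps:

policy-map POLICY_R2_LLQ
class TRAFFIC1_cmap
priority 32
class class-default
police cir 16000 bc 2000
conform-action transmit
exceed-action set-frde-transmit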


Frame Relay Switching and Policing Lab


OVERVIEW

A large part of Frame Relay documentation focuses on the configuration of end devices more than on the FR cloud, which can make practicing Frame Relay quality of service very confusing and frustrating without an organized approach. That approach begins by setting up an FR lab for testing and deployment, which is the purpose of the current post.

R2 and R1, the customer devices, are connected to FRS, the Frame Relay switch, through back-to-back serial cables. A traffic generator connected to R2 generates traffic through the FR cloud, at different rates, destined to the receiver station connected to R1 (figure1).

To understand the role and the impact of QoS concepts like shaping, policing, congestion management and related parameters, we will focus on each one of them separately.

Note: The mentioned topology can be deployed on dynamips and GNS3, with the exception that the clock rate has no effect and clocking is fixed at 2mbps for serial interfaces; so to simulate a connection mismatch you can, for example, use multiple serial connections (n*2mbps) as spokes and aggregate them into one serial connection (2mbps).

Figure1: FR Lab topology

The lab is organized as follows:

1) Frame Relay connectivity

    1-a) FR End-device

    1-b) FR switch

2) Shaping queue and congestion on FR end device

    2-a) Traffic rate:

    2-b) Shaping queue size:

    2-c) Shaping parameters:

3) Shaping queue and congestion on FR Switch

4) FR Congestion management, FECN/BECN

5) FR Congestion management, Discard Eligible bit

CONCLUSION

 

1) Frame Relay Connectivity

1-a) FR End-devices

Both R1 and R2 are connected to the Frame Relay cloud using point-to-point sub-interfaces, where the local DLCIs are assigned; Inverse ARP is not needed because there is only one PVC on the other side, so it is disabled, as depicted in the mindmap (figure 2).

Figure2:Inverse-ARP configuration

R1 (receiver):

interface Serial0/0
no ip address
encapsulation frame-relay

load-interval 30

no fair-queue

frame-relay traffic-shaping

no frame-relay inverse-arp

frame-relay lmi-type ansi

!

interface Serial0/0.201 point-to-point

ip address 172.16.0.201 255.255.0.0

frame-relay interface-dlci 201

load-interval 30

R2 (sender):

interface Serial0/0
no ip address
encapsulation frame-relay

load-interval 30

no fair-queue

frame-relay traffic-shaping

no frame-relay inverse-arp

frame-relay lmi-type ansi

!

interface Serial0/0.102 point-to-point

ip address 172.16.0.102 255.255.0.0

frame-relay interface-dlci 102

load-interval 30

 

1-b) FR switch

The following commands are used to establish connectivity on the FR switch:

  • FRS(config)#frame-relay switching – Enables Frame Relay switching globally on the router.
  • FRS(config)#connect <name> <interface> <dlci> <interface> <dlci> – Sets up a connection between two FR PVCs (similar to the "frame-relay route" command). Note that with routed PVCs (set up with "frame-relay route"), FR traffic shaping is not applied.
  • FRS(config-if)#frame-relay interface-dlci <dlci> switched – Needed to associate an FR map-class directly with a specific switched PVC.
  • FRS(config-if)#clockrate <bps> – Because this is a lab environment and the routers are connected through back-to-back serial cables, the FR switch interfaces must be set as DCE to supply clocking on the line to the customer DTE devices.

Here is the configuration of the FR switch:

FRS:

interface Serial1/0

encapsulation frame-relay

clockrate 64000

frame-relay traffic-shaping

frame-relay interface-dlci 201 switched

frame-relay lmi-type ansi

frame-relay intf-type dce

!

interface Serial1/1

encapsulation frame-relay

clockrate 128000

frame-relay interface-dlci 102 switched

frame-relay lmi-type ansi

frame-relay intf-type dce

!

frame-relay switching

connect 102_201 Serial1/1 102 Serial1/0 201

 

2) Shaping queue and congestion on FR end device

Before moving on, here is a short explanation of the main commands used for FR traffic shaping and policing:

Enable traffic shaping or policing on the physical interface (PVC and interface basis alike):

interface Serial<X/X>
frame-relay [traffic-shaping | policing]

Assign the map-class directly to the PVC (PVC basis) or to the interface (interface basis):

PVC basis: frame-relay interface-dlci <dlci> switched, then class <mapclassname>
Interface basis: interface Serial<X/X>, then frame-relay class <mapclassname>

Define the map-class and its shaping parameters:

map-class frame-relay <mapclassname>
frame-relay cir <bps>
frame-relay bc <bits>
frame-relay be <bits>

Set the size of the shaping queue, 40 by default. The router starts filling it when the interface is experiencing congestion; the bigger the queue, the more delay, and the smaller, the more congestion and dropped packets:

PVC basis: frame-relay holdq <max_size>
Interface basis: hold-queue <max_size> [out|in]

Set the ECN congestion management threshold in the shaping queue. If it is reached, the router marks traffic returning to the sender with BECN; if there is no returning traffic, the router sends FECN packets to the receiver, which in turn generates Q.922 packets toward the sender marked with BECN:

PVC basis: frame-relay congestion threshold ecn <%>
Interface basis: frame-relay congestion-management, then threshold ecn [bc|be] <%>

Set the DE congestion management threshold in the shaping queue. If it is reached, the router starts dropping packets marked with "DE":

PVC basis: frame-relay congestion threshold de <%>
Interface basis: frame-relay congestion-management, then threshold de <%>
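Putting these commands together, here is a minimal PVC-basis sketch (the map-class name is illustrative only; the values mirror the FR switch configuration used later in this lab):

interface Serial1/0
frame-relay traffic-shaping
frame-relay interface-dlci 201 switched
class SHAPE_EXAMPLE
!
map-class frame-relay SHAPE_EXAMPLE
frame-relay cir 64000
frame-relay bc 8000
frame-relay be 0
frame-relay holdq 40
frame-relay congestion threshold de 70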

With Frame Relay shaping enabled and a map-class assigned to a PVC, shaping queue congestion, packet dropping and delay depend on several factors: the traffic rate, the shaping queue size (holdq) and the shaping parameters.

2-a) Traffic rate:

To observe the effect of the traffic rate on shaping queue congestion on R2, we fix the shaping parameters on R2 (sender) at the maximum of the interface clock rate, 128 kbps, with no excess burst, so the shaping queue operates at its full capacity.

Parameters  | FR End-device, sender: Shaping (outbound)
CIR         | 128kbps
Bc = CIR*Tc | 16000
Tc          | 125ms
Be          | 0
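The corresponding R2 configuration is not shown in the capture; a minimal sketch consistent with these parameters (the map-class name SHAPE_MAX is assumed) would be:

map-class frame-relay SHAPE_MAX
frame-relay cir 128000
frame-relay bc 16000
frame-relay be 0
!
interface Serial0/0.102 point-to-point
frame-relay interface-dlci 102
class SHAPE_MAX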

  • Traffic is generated at the rate of 64kbps —> the shaping queue is not congested:
R2#sh int s0/0 | i output rate

30 second output rate 64000 bits/sec, 5 packets/sec
R2#sh frame pvc 102

 

PVC Statistics for interface Serial0/0 (Frame Relay DTE)

 

DLCI = 102, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0.102

 

input pkts 1178 output pkts 1368 in bytes 5164

out bytes 2048886 dropped pkts 0 in FECN pkts 0

in BECN pkts 1177 out FECN pkts 0 out BECN pkts 0

in DE pkts 0 out DE pkts 0

out bcast pkts 5 out bcast bytes 1660

5 minute input rate 0 bits/sec, 5 packets/sec

5 minute output rate 57000 bits/sec, 5 packets/sec

pvc create time 00:57:16, last time pvc status changed 00:55:46


cir 128000
bc 16000
be 0 byte limit 2000 interval 125

mincir 64000 byte increment 2000 Adaptive Shaping none

pkts 1368 bytes 2048886 pkts delayed 0 bytes delayed 0


shaping inactive

traffic shaping drops 0

Queueing strategy: fifo


Output queue 0/40, 0 drop, 0 dequeued

R2#

  • Traffic is generated at the rate of 128kbps —> the shaping queue is congested and packets are queued, but the queue can still sustain the traffic rate:
R2#sh int s0/0 | i output rate

30 second output rate 124000 bits/sec, 10 packets/sec
R2#sh frame pvc 102

 

PVC Statistics for interface Serial0/0 (Frame Relay DTE)

 

DLCI = 102, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0.102

 

input pkts 398 output pkts 814 in bytes 1846

out bytes 1219010 dropped pkts 0 in FECN pkts 0

in BECN pkts 398 out FECN pkts 0 out BECN pkts 0

in DE pkts 0 out DE pkts 0

out bcast pkts 1 out bcast bytes 332

5 minute input rate 0 bits/sec, 5 packets/sec

5 minute output rate 40000 bits/sec, 10 packets/sec

pvc create time 01:11:26, last time pvc status changed 01:09:57


cir 128000
bc 16000
be 0 byte limit 2000 interval 125

mincir 64000 byte increment 2000 Adaptive Shaping none

pkts 798 bytes 1194978 pkts delayed 798 bytes delayed 1194978


shaping active

traffic shaping drops 0

Queueing strategy: fifo


Output queue 20/40, 0 drop, 798 dequeued

R2#

  • Traffic is generated at the rate of 180kbps, higher than the interface clock rate —> the shaping queue becomes congested, can no longer sustain the traffic rate, and begins dropping packets:
R2#sh frame pvc 102
 PVC Statistics for interface Serial0/0 (Frame Relay DTE)

 

DLCI = 102, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0.102

 

input pkts 2940 output pkts 7786 in bytes 12080

out bytes 11673976 dropped pkts 1920 in FECN pkts 0

in BECN pkts 2939 out FECN pkts 0 out BECN pkts 0

in DE pkts 0 out DE pkts 0

out bcast pkts 9 out bcast bytes 2988

5 minute input rate 0 bits/sec, 5 packets/sec


5 minute output rate 180000 bits/sec, 15 packets/sec

pvc create time 01:19:34, last time pvc status changed 01:18:04


cir 128000
bc 16000
be 0 byte limit 2000 interval 125

mincir 64000 byte increment 2000 Adaptive Shaping none

pkts 5823 bytes 8725550 pkts delayed 5820 bytes delayed 8721044


shaping active

traffic shaping drops 0

Queueing strategy: fifo


Output queue 40/40, 1927 drop, 5820 dequeued

R2#

 

2-b) Shaping queue size:

Now let's increase the shaping queue size (holdq) to 100 —> this resolves the dropping problem, but packets take longer and longer to be dequeued, the delay blows up, and that is unacceptable for delay-sensitive traffic.
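A minimal sketch of the change, reusing the assumed map-class name from the previous test:

map-class frame-relay SHAPE_MAX
frame-relay holdq 100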

R2:

R2#sh frame pvc 102
 PVC Statistics for interface Serial0/0 (Frame Relay DTE)

 

DLCI = 102, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0.102

 

input pkts 5200 output pkts 14247 in bytes 21319

out bytes 21362590 dropped pkts 3801 in FECN pkts 0

in BECN pkts 5199 out FECN pkts 0 out BECN pkts 0

in DE pkts 0 out DE pkts 0

out bcast pkts 16 out bcast bytes 5312

5 minute input rate 0 bits/sec, 5 packets/sec

5 minute output rate 184000 bits/sec, 15 packets/sec

pvc create time 01:26:38, last time pvc status changed 01:25:08


cir 128000
bc 16000
be 0 byte limit 2000 interval 125

mincir 64000 byte increment 2000 Adaptive Shaping none

pkts 10345 bytes 15501786 pkts delayed 10342 bytes delayed 15497280


shaping active

traffic shaping drops 0

Queueing strategy: fifo


Output queue 55/100, 0 drop, 123 dequeued

R2#

 

2-c) Shaping parameters:

To observe the effect of FR shaping parameters on shaping queue congestion, we will fix the generated traffic rate to 64 kbps and the shaping queue size on R2 to the default value of 40.

  • These are the shaping parameters applied to PVC 102 on R2:
Parameters  | FR End-device, sender: Shaping (outbound)
CIR         | 128kbps
Bc = CIR*Tc | 16000
Tc          | 125ms
Be          | 0

With a traffic rate of 64 kbps and the CIR set to the clock rate capacity, there is no congestion:

R2:

R2(config-map-class)#do sh int s0/0 | i output rate

30 second output rate 64000 bits/sec, 5 packets/sec
R2(config-map-class)#do sh frame pvc 102

 

PVC Statistics for interface Serial0/0 (Frame Relay DTE)

 

DLCI = 102, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0.102

 

input pkts 2 output pkts 216 in bytes 509

out bytes 321984 dropped pkts 0 in FECN pkts 0

in BECN pkts 0 out FECN pkts 0 out BECN pkts 0

in DE pkts 0 out DE pkts 0

out bcast pkts 0 out bcast bytes 0

5 minute input rate 0 bits/sec, 0 packets/sec

5 minute output rate 15000 bits/sec, 5 packets/sec

pvc create time 03:16:46, last time pvc status changed 03:15:16


cir 128000
bc 16000
be 0 byte limit 2000 interval 125

mincir 64000 byte increment 2000 Adaptive Shaping none

pkts 216 bytes 321984 pkts delayed 0 bytes delayed 0


shaping inactive

traffic shaping drops 0

Queueing strategy: fifo


Output queue 0/40, 0 drop, 0 dequeued

R2(config-map-class)#

  • Now, with the CIR equal to the traffic rate of 64kbps, the shaping queue experiences congestion, but there is still no dropping:

Parameters  | FR End-device, sender: Shaping (outbound)
CIR         | 64kbps
Bc = CIR*Tc | 8000
Tc          | 125ms
Be          | 0

R2:

R2(config-map-class)#do sh int s0/0 | i output rate

30 second output rate 64000 bits/sec, 5 packets/sec
R2(config-map-class)#do sh frame pvc 102

 

PVC Statistics for interface Serial0/0 (Frame Relay DTE)

 

DLCI = 102, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0.102

 

input pkts 2 output pkts 367 in bytes 658

out bytes 550064 dropped pkts 0 in FECN pkts 0

in BECN pkts 0 out FECN pkts 0 out BECN pkts 0

in DE pkts 0 out DE pkts 0

out bcast pkts 1 out bcast bytes 332

5 minute input rate 0 bits/sec, 0 packets/sec

5 minute output rate 21000 bits/sec, 5 packets/sec

pvc create time 03:26:19, last time pvc status changed 03:24:50


cir 64000
bc 8000
be 0 byte limit 1000 interval 125

mincir 32000 byte increment 1000 Adaptive Shaping none

pkts 358 bytes 536546 pkts delayed 358 bytes delayed 536546


shaping active

traffic shaping drops 0

Queueing strategy: fifo


Output queue 13/40, 0 drop, 358 dequeued

R2(config-map-class)#

  • Now, with a shaping rate lower than the traffic rate, the shaping queue congests to its maximum and starts dropping the packets it cannot handle:

Parameters  | FR End-device, sender: Shaping (outbound)
CIR         | 32kbps
Bc = CIR*Tc | 4000
Tc          | 125ms
Be          | 0

R2:

R2(config-map-class)#do sh frame pvc 102
 PVC Statistics for interface Serial0/0 (Frame Relay DTE)

 

DLCI = 102, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0.102

 

input pkts 26 output pkts 3893 in bytes 6340

out bytes 5818010 dropped pkts 203 in FECN pkts 0

in BECN pkts 0 out FECN pkts 0 out BECN pkts 0

in DE pkts 0 out DE pkts 0

out bcast pkts 12 out bcast bytes 3984

5 minute input rate 0 bits/sec, 0 packets/sec


5 minute output rate 65000 bits/sec, 5 packets/sec

pvc create time 03:37:08, last time pvc status changed 03:35:39


cir 32000
bc 4000 be 0 byte limit 500 interval 125

mincir 16000 byte increment 500 Adaptive Shaping none

pkts 3652 bytes 5457198 pkts delayed 3652 bytes delayed 5457198


shaping active

traffic shaping drops 0

Queueing strategy: fifo


Output queue 40/40, 205 drop, 3652 dequeued

R2(config-map-class)#

 

3) Shaping queue and congestion on FR Switch

Exactly the same concepts and observations as on an FR end-device apply here, with the exception that the FR switch can manage the congestion, which is treated in the next point.

4) FR Congestion management, FECN/BECN

A congestion management ECN threshold of 10% means that if the queue fills up to 10%, the FR switch marks any traffic returning to the sender with BECN (Backward Explicit Congestion Notification), notifying the sender that its shaping rate is too high and should be throttled back. Each time the sender receives a BECN it decreases its rate by 25%, but never below mincir (figure3 – first case).
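For example, with a CIR of 128 kbps and a mincir of 48 kbps as configured below, successive BECNs would throttle the shaping rate roughly as 128 —> 96 —> 72 —> 54 —> 48 kbps, each step 25% lower and clamped at mincir.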

If there is no traffic going back from the receiver to the sender, the FR switch marks the sender's traffic with FECN (Forward Explicit Congestion Notification) to trigger the receiver to send special Q.922 packets back toward the sender with the BECN bit set (figure3 – second case).

Figure 3: FR FECN/BECN concept

Here is the summary of shaping parameters set in the network to test ECN:

Parameters      | Sender: Shaping (out)   | FR switch: Policing (in) | FR switch: Shaping (out) | Receiver: Shaping (in)
CIR             | 128kbps (mincir=48kbps) | –                        | 64kbps                   | –
Congestion mgmt | –                       | –                        | ECN 10%                  | –
Bc = CIR*Tc     | 16000                   | –                        | 8000                     | –
Tc              | 125ms                   | –                        | 125ms                    | –
Be              | –                       | –                        | 0                        | –

R2(sender):

interface Serial0/0.102 point-to-point

class ADAPT_R2

!

map-class frame-relay ADAPT_R2

frame-relay adaptive-shaping becn

frame-relay cir 128000

frame-relay mincir 48000

FRS:

map-class frame-relay SHAPE_R1

frame-relay cir 64000

frame-relay bc 8000

frame-relay be 0

frame-relay holdq 40

frame-relay congestion threshold ecn 10

!

interface Serial1/0

frame-relay interface-dlci 201 switched

class SHAPE_R1 

R1(receiver):

interface Serial0/0.201 point-to-point

frame-relay interface-dlci 201

class FECN_REACT

!

map-class frame-relay FECN_REACT

frame-relay fecn-adapt

frame-relay cir 64000

frame-relay bc 8000

frame-relay be 0 

In our example, the ECN (Explicit Congestion Notification) threshold is set to 10%; consequently, if the FR switch PVC 201 shaping queue fills up to 10% (4 out of 40), traffic is marked with FECN/BECN.

  • With a traffic rate of 48 kbps, the FR switch output interface queue does not reach the ECN threshold, so the congestion management mechanism is not triggered:

FRS:

FRS_QOS(config-if)#do sh frame pvc 201
 PVC Statistics for interface Serial1/0 (Frame Relay DCE)

 

DLCI = 201, DLCI USAGE = SWITCHED, PVC STATUS = ACTIVE, INTERFACE = Serial1/0

 

input pkts 98 output pkts 488 in bytes 6802

out bytes 593758 dropped pkts 0 in pkts dropped 0

out pkts dropped 0 out bytes dropped 0


in FECN pkts 0
in BECN pkts 0
out FECN pkts 0


out BECN pkts 0 in DE pkts 0 out DE pkts 0

out bcast pkts 0 out bcast bytes 0

30 second input rate 0 bits/sec, 1 packets/sec

30 second output rate 48000 bits/sec, 5 packets/sec

switched pkts 98

Detailed packet drop counters:

no out intf 0 out intf down 0 no out PVC 0

in PVC down 0 out PVC down 0 pkt too big 0

shaping Q full 0 pkt above DE 0 policing drop 0

pvc create time 00:19:46, last time pvc status changed 00:18:27


Congestion ECN threshold 4, not congested

cir 64000 bc 8000 be 0 byte limit 1000 interval 125

mincir 32000 byte increment 1000 Adaptive Shaping none

pkts 493 bytes 599830 pkts delayed 51 bytes delayed 3264


shaping inactive

traffic shaping drops 0

Queueing strategy: fifo


Output queue 0/40, 0 drop, 53 dequeued

FRS_QOS(config-if)#

  • With a traffic rate of 64 kbps, the FRS output queue is already experiencing congestion and reaches the ECN threshold, triggering the congestion management mechanism:

FRS:

FRS_QOS(config-if)#do sh frame pvc 201
 PVC Statistics for interface Serial1/0 (Frame Relay DCE)

 

DLCI = 201, DLCI USAGE = SWITCHED, PVC STATUS = ACTIVE, INTERFACE = Serial1/0

 

input pkts 43 output pkts 144 in bytes 1675

out bytes 186358 dropped pkts 0 in pkts dropped 0

out pkts dropped 0 out bytes dropped 0

in FECN pkts 0 in BECN pkts 22
out FECN pkts 22

out BECN pkts 0 in DE pkts 0 out DE pkts 0

out bcast pkts 0 out bcast bytes 0

30 second input rate 2000 bits/sec, 1 packets/sec

30 second output rate 64000 bits/sec, 5 packets/sec

switched pkts 43

Detailed packet drop counters:

no out intf 0 out intf down 0 no out PVC 0

in PVC down 0 out PVC down 0 pkt too big 0

shaping Q full 0 pkt above DE 0 policing drop 0

pvc create time 00:26:22, last time pvc status changed 00:25:04


Congestion ECN threshold 4, congested

cir 64000 bc 8000 be 0 byte limit 1000 interval 125

mincir 32000 byte increment 1000 Adaptive Shaping none

pkts 152 bytes 198374 pkts delayed 130 bytes delayed 166768


shaping active

traffic shaping drops 0

Queueing strategy: fifo


Output queue 5/40, 0 drop, 137 dequeued

FRS_QOS#

The FRS output interface shaping queue has reached the ECN threshold; the switch marked 22 packets with the FECN bit toward the receiver and consequently received from it 22 packets marked with BECN toward the sender.

R1 (receiver):

R1#sh frame pvc 201
 PVC Statistics for interface Serial0/0 (Frame Relay DTE)

 

DLCI = 201, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0.201

 

input pkts 4061 output pkts 1076 in bytes 4702782

out bytes 76079 dropped pkts 0 in pkts dropped 0

out pkts dropped 0 out bytes dropped 0


in FECN pkts 78 in BECN pkts 0 out FECN pkts 0


out BECN pkts 78 in DE pkts 0 out DE pkts 0

out bcast pkts 26 out bcast bytes 8545

5 minute input rate 59000 bits/sec, 5 packets/sec

5 minute output rate 0 bits/sec, 0 packets/sec

pvc create time 00:25:34, last time pvc status changed 00:25:34

cir 64000 bc 8000 be 0 byte limit 1000 interval 125


mincir 32000 byte increment 1000 Adaptive Shaping none

pkts 1076 bytes 76079 pkts delayed 0 bytes delayed 0

shaping inactive

traffic shaping drops 0

Queueing strategy: fifo

Output queue 0/40, 0 drop, 0 dequeued

R1#

The receiver receives FECN-marked packets and responds with Q.922 packets with the BECN bit set.

R2 (sender):

R2#sh frame pvc 102
 PVC Statistics for interface Serial0/0 (Frame Relay DTE)

 

DLCI = 102, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0.102

 

input pkts 1109 output pkts 4140 in bytes 76758

out bytes 4808531 dropped pkts 0 in FECN pkts 0


in BECN pkts 109 out FECN pkts 0 out BECN pkts 0

in DE pkts 0 out DE pkts 0

out bcast pkts 29 out bcast bytes 9619

5 minute input rate 0 bits/sec, 1 packets/sec

5 minute output rate 64000 bits/sec, 5 packets/sec


Shaping adapts to BECN

pvc create time 00:26:58, last time pvc status changed 00:25:29

cir 128000 bc 128000 be 0 byte limit 2000 interval 125

mincir 48000 byte increment 750 Adaptive Shaping BECN

pkts 4137 bytes 4804025 pkts delayed 292 bytes delayed 399128


shaping active

traffic shaping drops 0

Queueing strategy: fifo

Output queue 3/40, 0 drop, 292 dequeued

R2#

With adaptive shaping activated, the sender has received the BECN-marked packets and adapts its shaping rate accordingly.

 

5) FR Congestion management, Discard Eligible bit

Another concept of the FR congestion management is Frame Relay policing applied inbound (shaping is applied outbound).

Policing doesn't perform any queuing; it prevents network congestion by either marking traffic with the "DE" (Discard Eligible) bit or discarding traffic that exceeds the FR policing parameters.

Inbound interface policing:

  • If (traffic < Bc) then the traffic is transmitted to the outbound interface
  • If (Bc < traffic < Bc+Be) then the traffic is marked with the "DE" bit and transmitted to the outbound interface
  • If (traffic > Bc+Be) then the traffic is dropped

Outbound interface shaping and congestion management:

  • If (packets in the shaping queue < DE threshold) all packets are transmitted
  • If (packets in the shaping queue > DE threshold) packets marked with the "DE" bit are dropped

Here is the summary of congestion management parameters for DE testing:

Parameters      | Sender: Shaping (out) | FR switch: Policing (in) | FR switch: Shaping (out) | Receiver: Shaping (in)
CIR             | –                     | 48kbps                   | 64kbps                   | –
Congestion mgmt | –                     | –                        | DE 70%                   | –
Bc = CIR*Tc     | –                     | 6000                     | 8000                     | –
Tc              | –                     | 125ms                    | 125ms                    | –
Be              | –                     | 8000                     | 0                        | –

Configuration on FR switch:

map-class frame-relay SHAPE_R1
frame-relay cir 64000
frame-relay bc 8000

frame-relay be 0

frame-relay holdq 40


frame-relay congestion threshold de 70

!

map-class frame-relay POLICE_R1

frame-relay cir 48000

frame-relay bc 6000

frame-relay be 8000

!

interface Serial1/0

frame-relay traffic-shaping

frame-relay interface-dlci 201 switched

class SHAPE_R1

!

interface Serial1/1

frame-relay interface-dlci 102 switched


class POLICE_R1

frame-relay policing

  • Traffic is generated at the rate of 64 kbps

FRS (inbound PVC, policing):

FRS_QOS(config-map-class)#do sh frame pvc 102
 PVC Statistics for interface Serial1/1 (Frame Relay DCE)

 

DLCI = 102, DLCI USAGE = SWITCHED, PVC STATUS = ACTIVE, INTERFACE = Serial1/1

 

input pkts 352 output pkts 12 in bytes 511716

out bytes 1033 dropped pkts 0 in pkts dropped 0

out pkts dropped 0 out bytes dropped 0

in FECN pkts 0 in BECN pkts 0 out FECN pkts 0

out BECN pkts 0 in DE pkts 0 out DE pkts 0

out bcast pkts 0 out bcast bytes 0

30 second input rate 60000 bits/sec, 5 packets/sec

30 second output rate 0 bits/sec, 0 packets/sec

switched pkts 352

Detailed packet drop counters:

no out intf 0 out intf down 0 no out PVC 0

in PVC down 0 out PVC down 0 pkt too big 0

shaping Q full 0 pkt above DE 0 policing drop 0

pvc create time 05:14:01, last time pvc status changed 05:12:32

policing enabled, 347 pkts marked DE

policing Bc 6000
policing Be 8000 policing Tc 125 (msec)

in Bc pkts 12
in Be pkts 347
in xs pkts 0

in Bc bytes 1036
in Be bytes 521194
in xs bytes 0

FRS_QOS(config-map-class)#

The inbound policer transmitted the 12 packets within Bc unchanged and marked the 347 packets within Be with "DE". The FR switch outbound PVC queue is experiencing congestion and the queue threshold is reached (28 out of 40), so the output interface drops packets marked with "DE", as shown in the following output:

FRS (outbound PVC, shaping):

FRS_QOS(config-map-class)#do sh frame pvc 201

 

PVC Statistics for interface Serial1/0 (Frame Relay DCE)

 

DLCI = 201, DLCI USAGE = SWITCHED, PVC STATUS = ACTIVE, INTERFACE = Serial1/0

 

input pkts 16 output pkts 442 in bytes 1554

out bytes 641412 dropped pkts 5 in pkts dropped 0

out pkts dropped 5 out bytes dropped 7510

late-dropped out pkts 5 late-dropped out bytes 7510

in FECN pkts 0 in BECN pkts 0 out FECN pkts 0

out BECN pkts 0 in DE pkts 0 out DE pkts 431

out bcast pkts 0 out bcast bytes 0

30 second input rate 0 bits/sec, 0 packets/sec

30 second output rate 61000 bits/sec, 5 packets/sec

switched pkts 16

Detailed packet drop counters:

no out intf 0 out intf down 0 no out PVC 0

in PVC down 0 out PVC down 0 pkt too big 0

shaping Q full 5 pkt above DE 0 policing drop 0

pvc create time 05:14:19, last time pvc status changed 05:12:59

Congestion DE threshold 28

cir 64000 bc 8000 be 0 byte limit 1000 interval 125

mincir 32000 byte increment 1000 Adaptive Shaping none

pkts 442 bytes 641412 pkts delayed 442 bytes delayed 641412

shaping active

traffic shaping drops 0

Queueing strategy: fifo

Output queue 28/40, 5 drop, 446 dequeued

FRS_QOS(config-map-class)# 

Note: Policing with “DE” marking can be deployed on the customer FR end-device using class-based QoS.
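A hedged sketch of such a configuration on the sending customer device (the names are hypothetical; the MQC post in this series walks through a complete example), shaping the PVC while DE-marking everything above the policed rate:

policy-map MARK_DE
class class-default
police cir 48000 bc 6000
conform-action transmit
exceed-action set-frde-transmit
!
map-class frame-relay SHAPE_AND_MARK
frame-relay traffic-rate 64000 64000
service-policy output MARK_DE
!
interface Serial0/0.102 point-to-point
frame-relay interface-dlci 102
class SHAPE_AND_MARK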

I have summarized all the previously mentioned concepts in figure4:

Figure 4: Shaping and congestion mechanism

CONCLUSION

  • Use the "connect" command on the FR switch to set up connections between switched FR PVCs, so you can apply FR shaping and policing.
  • Pay attention to where you apply your shaping or policing policy: a policy applied at the PVC level takes precedence over one applied at the sub-interface level, which in turn takes precedence over one applied at the main interface level. This is crucial when you are deploying multipoint sub-interfaces with several PVCs on customer devices.
  • The FR switch is able to detect "DE"-marked traffic and discard it in case of output queue congestion. "DE" traffic can be marked by policing at the FR switch inbound interface (if traffic is higher than Bc and lower than Bc+Be), which enables the service provider to enforce the customer service contract; it can also be marked by the sending FR customer device, which marks its low-priority traffic to be discarded first when congestion is experienced on the FR switch.
  • Traffic congestion, delay and dropping in the FR switch depend on the traffic rate, the shaping queue size and the shaping parameters.

Multicast over FR NBMA part4 – (multipoint GRE and DMVPN)


This is the fourth part of the document "Multicast over FR NBMA"; this lab focuses on deploying multicast over multipoint GRE and DMVPN.

The main advantage of GRE tunneling is its transport capability: non-IP, broadcast and multicast traffic can be encapsulated inside unicast GRE, which is easily transmitted over Layer 2 technologies such as Frame Relay and ATM.

Because the HUB, SpokeA and SpokeB FR interfaces are multipoint, we will use multipoint GRE.

Figure1 : lab topology


CONFIGURATION

mGRE configuration:

HUB:

interface Tunnel0
ip address 172.16.0.1 255.255.0.0
no ip redirects

!! PIM sparse-dense mode is enabled on the tunnel, not on the physical interface


ip pim sparse-dense-mode

!! a shared key is used for tunnel authentication


ip nhrp authentication cisco

!! The HUB must send all multicast traffic to all spokes that have registered with it


ip nhrp map multicast dynamic

!! Enable NHRP on the interface; the network-id must be the same for all participants


ip nhrp network-id 1

!! Because the OSPF network type is broadcast, a DR will be elected, so the HUB is assigned the highest priority to make sure it becomes the DR


ip ospf network broadcast


ip ospf priority 10

!! With small hub-and-spoke networks it is possible to pre-configure the tunnel destination statically, but then the multipoint tunnel mode cannot be set


tunnel source Serial0/0


tunnel mode gre multipoint

!! Set the tunnel identification key; it must be identical to the network-id previously configured


tunnel key 1

FR configuration:

interface Serial0/0
ip address 192.168.100.1 255.255.255.0
encapsulation frame-relay

serial restart-delay 0

frame-relay map ip 192.168.100.2 101 broadcast

frame-relay map ip 192.168.100.3 103 broadcast

no frame-relay inverse-arp

Routing configuration:

router ospf 10
router-id 1.1.1.1
network 10.10.20.0 0.0.0.255 area 100


network 172.16.0.0 0.0.255.255 area 0

SpokeA:

mGRE configuration:

interface Tunnel0
ip address 172.16.0.2 255.255.0.0
ip nhrp authentication cisco

!!All multicast traffic will be forwarded to the NBMA next hop IP (HUB).

ip nhrp map multicast 192.168.100.1

!!All spokes know in advance the HUB NBMA and tunnel IP addresses which are static.

ip nhrp map 172.16.0.1 192.168.100.1

ip nhrp network-id 1

ip nhrp nhs 172.16.0.1

ip ospf network point-to-multipoint

tunnel source Serial0/0.201

tunnel destination 192.168.100.1

tunnel key 1

FR configuration:

interface Serial0/0
no ip address
encapsulation frame-relay

serial restart-delay 0

no frame-relay inverse-arp

 

interface Serial0/0.201 multipoint

ip address 192.168.100.2 255.255.255.0

frame-relay map ip 192.168.100.1 201 broadcast

Routing configuration:

router ospf 10
router-id 200.200.200.200
network 20.20.20.0 0.0.0.255 area 200

network 172.16.0.0 0.0.255.255 area 0

SpokeB:

mGRE configuration:

interface Tunnel0
ip address 172.16.0.3 255.255.0.0
no ip redirects

ip pim sparse-dense-mode

ip nhrp authentication cisco

ip nhrp map multicast 192.168.100.1

ip nhrp map 172.16.0.1 192.168.100.1

ip nhrp network-id 1

ip nhrp nhs 172.16.0.1

ip ospf network broadcast

ip ospf priority 0

tunnel source Serial0/0.301

tunnel mode gre multipoint

tunnel key 1

FR configuration:

interface Serial0/0
no ip address
encapsulation frame-relay

serial restart-delay 0

no frame-relay inverse-arp

 

interface Serial0/0.301 multipoint

ip address 192.168.100.3 255.255.255.0

frame-relay map ip 192.168.100.1 301 broadcast

Routing configuration:

router ospf 10
router-id 3.3.3.3

network 172.16.0.0 0.0.255.255 area 0

network 192.168.39.0 0.0.0.255 area 300

RP (SpokeBnet):

interface Loopback0
ip address 192.168.38.1 255.255.255.255

ip pim sparse-dense-mode

router ospf 10

network 192.168.38.1 0.0.0.0 area 300

ip pim send-rp-announce Loopback0 scope 32

Mapping Agent (HUBnet):

interface Loopback0
ip address 10.0.0.1 255.255.255.255

ip pim sparse-dense-mode

router ospf 10

network 10.0.0.1 0.0.0.0 area 100

ip pim send-rp-discovery Loopback0 scope 32

Here is the result:

HUB:

HUB# sh ip nhrp
172.16.0.2/32 via 172.16.0.2, Tunnel0 created 01:06:52, expire 01:34:23
Type: dynamic, Flags: authoritative unique registered

NBMA address: 192.168.100.2

172.16.0.3/32 via 172.16.0.3, Tunnel0 created 01:06:35, expire 01:34:10

Type: dynamic, Flags: authoritative unique registered

NBMA address: 192.168.100.3

HUB#

The HUB has dynamically learnt the spokes' NBMA addresses and the corresponding tunnel IP addresses.

HUB#sh ip route
Codes: C – connected, S – static, R – RIP, M – mobile, B – BGP
D – EIGRP, EX – EIGRP external, O – OSPF, IA – OSPF inter area

N1 – OSPF NSSA external type 1, N2 – OSPF NSSA external type 2

E1 – OSPF external type 1, E2 – OSPF external type 2

i – IS-IS, su – IS-IS summary, L1 – IS-IS level-1, L2 – IS-IS level-2

ia – IS-IS inter area, * – candidate default, U – per-user static route

o – ODR, P – periodic downloaded static route

 

Gateway of last resort is not set

 

1.0.0.0/32 is subnetted, 1 subnets

C 1.1.1.1 is directly connected, Loopback0

20.0.0.0/32 is subnetted, 1 subnets

O IA 20.20.20.20 [110/11112] via 172.16.0.2, 01:08:26, Tunnel0

O IA 192.168.40.0/24 [110/11113] via 172.16.0.3, 01:08:26, Tunnel0

C 172.16.0.0/16 is directly connected, Tunnel0

192.168.38.0/32 is subnetted, 1 subnets

O IA 192.168.38.1 [110/11113] via 172.16.0.3, 01:08:26, Tunnel0

O IA 192.168.39.0/24 [110/11112] via 172.16.0.3, 01:08:26, Tunnel0

10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks

O 10.0.0.2/32 [110/2] via 10.10.20.3, 01:09:06, FastEthernet1/0

O 10.10.10.0/24 [110/2] via 10.10.20.3, 01:09:06, FastEthernet1/0

O 10.0.0.1/32 [110/2] via 10.10.20.3, 01:09:06, FastEthernet1/0

C 10.10.20.0/24 is directly connected, FastEthernet1/0

C 192.168.100.0/24 is directly connected, Serial0/0

HUB#

The HUB has learnt all the spokes' local networks; note that all learnt routes point to tunnel IP addresses, because the routing protocol is enabled on top of the logical topology, not the physical one (figure2).

Figure2 : Logical topology

HUB#sh ip pim neighbors
PIM Neighbor Table
Neighbor Interface Uptime/Expires Ver DR

Address Prio/Mode

172.16.0.3
Tunnel0 01:06:17/00:01:21 v2 1 / DR S

172.16.0.2
Tunnel0 01:06:03/00:01:40 v2 1 / S

10.10.20.3 FastEthernet1/0 01:07:24/00:01:15 v2 1 / DR S

HUB#

PIM neighbor relationships are established after enabling PIM-Sparse-dense mode on tunnel interfaces.

SpokeBnet#
*Mar 1 01:16:22.055: Auto-RP(0): Build RP-Announce for 192.168.38.1, PIMv2/v1, ttl 32, ht 181
*Mar 1 01:16:22.059: Auto-RP(0): Build announce entry for (224.0.0.0/4)

*Mar 1 01:16:22.063: Auto-RP(0): Send RP-Announce packet on FastEthernet0/0

*Mar 1 01:16:22.063: Auto-RP(0): Send RP-Announce packet on FastEthernet1/0

*Mar 1 01:16:22.067: Auto-RP: Send RP-Announce packet on Loopback0

SpokeBnet#

The RP (SpokeBnet) sends RP-Announce messages to 224.0.1.39, to which all mapping agents listen.

Hubnet#
*Mar 1 01:16:17.039: Auto-RP(0): Received RP-announce, from 192.168.38.1, RP_cnt 1, ht 181
*Mar 1 01:16:17.043: Auto-RP(0): Update (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

Hubnet#

*Mar 1 01:16:49.267: Auto-RP(0): Build RP-Discovery packet

*Mar 1 01:16:49.271: Auto-RP: Build mapping (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1,

*Mar 1 01:16:49.275: Auto-RP(0): Send RP-discovery packet on FastEthernet0/0 (1 RP entries)

*Mar 1 01:16:49.275: Auto-RP(0): Send RP-discovery packet on FastEthernet1/0 (1 RP entries)

Hubnet#

HUBnet, the mapping agent (MA), listening to 224.0.1.39, has received the RP-announces from the RP (SpokeBnet), updated its records, and sent RP-Discovery messages to all PIM-SM routers at 224.0.1.40.

HUB#
*Mar 1 01:16:47.059: Auto-RP(0): Received RP-discovery, from 10.0.0.1, RP_cnt 1, ht 181
*Mar 1 01:16:47.063: Auto-RP(0): Update (224.0.0.0/4, RP:192.168.38.1), PIMv2 v1

HUB#

 

HUB#sh ip pim rp
Group: 239.255.1.1, RP: 192.168.38.1, v2, v1, uptime 01:11:49, expires 00:02:44
HUB#

The HUB, as an example, has received the RP-to-group mapping information from the mapping agent and now knows the RP IP address.

Now let’s take a look at the multicast routing table of the RP:

SpokeBnet#sh ip mroute
IP Multicast Routing Table
Flags: D – Dense, S – Sparse, B – Bidir Group, s – SSM Group, C – Connected,

L – Local, P – Pruned, R – RP-bit set, F – Register flag,

T – SPT-bit set, J – Join SPT, M – MSDP created entry,

X – Proxy Join Timer Running, A – Candidate for MSDP Advertisement,

U – URD, I – Received Source Specific Host Report,

Z – Multicast Tunnel, z – MDT-data group sender,

Y – Joined MDT-data group, y – Sending to MDT-data group

Outgoing interface flags: H – Hardware switched, A – Assert winner

Timers: Uptime/Expires

Interface state: Interface, Next-Hop or VCD, State/Mode

 

(*, 239.255.1.1), 00:39:00/stopped, RP 192.168.38.1, flags: SJC

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet1/0, Forward/Sparse-Dense, 00:38:22/00:02:25

 

(10.10.10.1, 239.255.1.1), 00:39:00/00:02:58, flags: T


Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1


Outgoing interface list:


FastEthernet1/0, Forward/Sparse-Dense, 00:38:22/00:02:25

 

(*, 224.0.1.39), 01:24:31/stopped, RP 0.0.0.0, flags: D

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:24:31/00:00:00

 

(192.168.38.1, 224.0.1.39), 01:24:31/00:02:28, flags: T

Incoming interface: Loopback0, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:24:31/00:00:00

 

(*, 224.0.1.40), 01:25:42/stopped, RP 0.0.0.0, flags: DCL

Incoming interface: Null, RPF nbr 0.0.0.0

Outgoing interface list:

FastEthernet0/0, Forward/Sparse-Dense, 01:25:40/00:00:00

Loopback0, Forward/Sparse-Dense, 01:25:42/00:00:00

 

(10.0.0.1, 224.0.1.40), 01:23:39/00:02:51, flags: LT

Incoming interface: FastEthernet0/0, RPF nbr 192.168.39.1

Outgoing interface list:

Loopback0, Forward/Sparse-Dense, 01:23:39/00:00:00

 

SpokeBnet#

(*, 239.255.1.1) – The shared tree, rooted at the RP, used to push multicast traffic to receivers; the "J" flag indicates that traffic has switched from the RPT to the SPT.

(10.10.10.1, 239.255.1.1) – The SPT used to forward traffic from the source to the receiver; traffic is received on Fa0/0 and forwarded out Fa1/0.

(*, 224.0.1.39) and (*, 224.0.1.40) – The Auto-RP service groups; because PIM sparse-dense mode is used, traffic for these groups is forwarded to all PIM routers in dense mode, hence the "D" flag.

This way we have configured multicast over NBMA using mGRE: no Layer 2 dependencies, no restrictions.

By the way, we are just one step away from DMVPN 🙂 all we have to do is configure the IPsec VPN that will protect our mGRE tunnel, so let's do it!

!! IKE phase I parameters
crypto isakmp policy 1
!! 3des as the encryption algorithm

encryption 3des

!! authentication type: simple preshared keys

authentication pre-share

!! Diffie Helman group2 for the exchange of the secret key

group 2

!! ISAKMP peers are not set because the HUB doesn't know them yet; they are learned dynamically by NHRP within mGRE

crypto isakmp key cisco address 0.0.0.0 0.0.0.0

crypto ipsec transform-set MyESP-3DES-SHA esp-3des esp-sha-hmac

mode transport

crypto ipsec profile My_profile

set transform-set MyESP-3DES-SHA

 

int tunnel 0

tunnel protection ipsec profile My_profile

 

HUB#sh crypto isakmp sa
dst src state conn-id slot status
192.168.100.1 192.168.100.2 QM_IDLE 2 0 ACTIVE

192.168.100.1 192.168.100.3 QM_IDLE 1 0 ACTIVE

 

HUB#

 

HUB#sh crypto ipsec sa
 interface: Tunnel0

Crypto map tag: Tunnel0-head-0, local addr 192.168.100.1

 

protected vrf: (none)

local ident (addr/mask/prot/port): (192.168.100.1/255.255.255.255/47/0)

remote ident (addr/mask/prot/port): (192.168.100.2/255.255.255.255/47/0)


current_peer 192.168.100.2 port 500

PERMIT, flags={origin_is_acl,}

#pkts encaps: 1248, #pkts encrypt: 1248, #pkts digest: 1248

#pkts decaps: 129, #pkts decrypt: 129, #pkts verify: 129

#pkts compressed: 0, #pkts decompressed: 0

#pkts not compressed: 0, #pkts compr. failed: 0

#pkts not decompressed: 0, #pkts decompress failed: 0

#send errors 52, #recv errors 0

 


local crypto endpt.: 192.168.100.1, remote crypto endpt.: 192.168.100.2

path mtu 1500, ip mtu 1500

current outbound spi: 0xCEFE3AC2(3472767682)

 

inbound esp sas:


spi: 0x852C8AE0(2234288864)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2003, flow_id: SW:3, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4448676/3482)

IV size: 8 bytes

replay detection support: Y


Status: ACTIVE

 

inbound ah sas:

 

inbound pcp sas:

 

outbound esp sas:


spi: 0xCEFE3AC2(3472767682)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2004, flow_id: SW:4, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4447841/3479)

IV size: 8 bytes

replay detection support: Y

Status: ACTIVE

 

outbound ah sas:

 

outbound pcp sas:

 

protected vrf: (none)

local ident (addr/mask/prot/port): (192.168.100.1/255.255.255.255/47/0)

remote ident (addr/mask/prot/port): (192.168.100.3/255.255.255.255/47/0)

current_peer 192.168.100.3 port 500

PERMIT, flags={origin_is_acl,}

#pkts encaps: 1309, #pkts encrypt: 1309, #pkts digest: 1309

#pkts decaps: 23, #pkts decrypt: 23, #pkts verify: 23

#pkts compressed: 0, #pkts decompressed: 0

#pkts not compressed: 0, #pkts compr. failed: 0

#pkts not decompressed: 0, #pkts decompress failed: 0

#send errors 26, #recv errors 0

 


local crypto endpt.: 192.168.100.1, remote crypto endpt.: 192.168.100.3

path mtu 1500, ip mtu 1500

current outbound spi: 0xD5D509D2(3587508690)

 

inbound esp sas:


spi: 0x4507681A(1158113306)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2001, flow_id: SW:1, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4588768/3477)

IV size: 8 bytes

replay detection support: Y


Status: ACTIVE

 

inbound ah sas:

 

inbound pcp sas:

 

outbound esp sas:


spi: 0xD5D509D2(3587508690)

transform: esp-3des esp-sha-hmac ,

in use settings ={Transport, }

conn id: 2002, flow_id: SW:2, crypto map: Tunnel0-head-0

sa timing: remaining key lifetime (k/sec): (4587889/3476)

IV size: 8 bytes

replay detection support: Y


Status: ACTIVE

 

outbound ah sas:

 

outbound pcp sas:

HUB#

ISAKMP and IPSec phases are successfully established and the security associations are formed.

Multicast over DMVPN works perfectly! That’s it!

Manual IPv6 GRE tunnel over IPv4


OVERVIEW

By definition, GRE encapsulates IP and non-IP protocols into IPv4 or IPv6. In the following lab we will encapsulate IPv6 into IPv4, so the outer packet carries IPv4 source and destination addresses, while the inner, GRE-encapsulated packet carries IPv6 source and destination addresses (figure 1).

Figure 1: Packet encapsulation

Figure 2 depicts the lab topology: three isolated IPv6 sites (North, East, and West) are interconnected over an IPv4 network through their respective dual-stack border routers.

Each site establishes an FR point-to-point PVC to each of the other two, with IPv4 as the network layer.

Figure 2: Topology

This lab is organized as follows:

– Planning the address scheme.
– IPv6 address configuration.
         IPv6 connectivity check.
– FR configuration.
         FR connectivity check.
– Manual IPv6 GRE tunnel.
         Tunnel configuration.
– Connectivity check.

PLANNING THE ADDRESS SCHEME

Table 1: Addressing scheme

Site IPv6 addressing:

2001:a:a:a::/64      Subnet between BNorth and Northv6
2001:a:a:aa::/64     North site internal network
2001:b:b:b::/64      Subnet between BWest and Westv6
2001:b:b:bb::/64     West site internal network
2001:c:c:c::/64      Subnet between BEast and Eastv6
2001:c:c:cc::/64     East site internal network

Tunnel IPv6 addressing:

2001:a:a:ab::/64     Tunnel between BNorth and BWest
2001:a:a:ac::/64     Tunnel between BNorth and BEast
2001:a:a:bc::/64     Tunnel between BWest and BEast

IPv4 NBMA addressing:

192.168.13.0/24      NBMA subnet for the point-to-point PVC between BNorth and BWest
192.168.32.0/24      NBMA subnet for the point-to-point PVC between BEast and BWest
192.168.12.0/24      NBMA subnet for the point-to-point PVC between BNorth and BEast

IPv6 ADDRESS CONFIGURATION

North Site:

Northv6:

!! Do not forget to enable IPv6 routing

ipv6 unicast-routing

!

!! loopback is used to simulate internal networks

interface Loopback0

ipv6 address 2001:A:A:AA::1/64

!

!! Interface that connects to the border router

interface FastEthernet0/0

ipv6 address 2001:A:A:A::2/64

!

!! A default route will point to the next-hop (Border Router)

ipv6 route ::/0 2001:A:A:A::1

BNorth:

ipv6 unicast-routing

!

interface FastEthernet0/0

ipv6 address 2001:A:A:A::1/64

!! This is a route to the internal network, pointing to the internal router

ipv6 route 2001:A:A:AA::/64 2001:A:A:A::2

East Site:

Eastv6:

ipv6 unicast-routing

!

interface Loopback0

ipv6 address 2001:C:C:CC::1/64

!

interface FastEthernet1/0

ipv6 address 2001:C:C:C::2/64

!

ipv6 route ::/0 2001:C:C:C::1

BEast:

ipv6 unicast-routing

!

interface FastEthernet0/0

ipv6 address 2001:C:C:C::1/64

!

ipv6 route 2001:C:C:CC::/64 2001:C:C:C::2

West Site:

Westv6:

ipv6 unicast-routing

!

interface Loopback0

ipv6 address 2001:B:B:BB::1/64

!

interface FastEthernet0/0

ipv6 address 2001:B:B:B::2/64

!

ipv6 route ::/0 2001:B:B:B::1

BWest:

ipv6 unicast-routing

!

interface FastEthernet0/0

ipv6 address 2001:B:B:B::1/64

!

ipv6 route 2001:B:B:BB::/64 2001:B:B:B::2

IPv6 connectivity

BNorth(config)#do ping 2001:a:a:aa::1

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 2001:A:A:AA::1, timeout is 2 seconds:

!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 36/56/92 ms

BNorth(config)#

BEast(config)#do ping 2001:c:c:cc::1

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 2001:C:C:CC::1, timeout is 2 seconds:

!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 20/38/88 ms

BEast(config)#

BWest(config)#do ping 2001:b:b:bb::1

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 2001:B:B:BB::1, timeout is 2 seconds:

!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 8/43/96 ms

BWest(config)#

FR CONFIGURATION

The configuration for point-to-point FR is very simple: configure the IP address and the local DLCI. There is no need for either inverse ARP or a static mapping, as there is only one DLCI at the other end of the PVC.

BNorth:

interface Serial1/0

no ip address

encapsulation frame-relay

no frame-relay inverse-arp

!

interface Serial1/0.101 point-to-point

ip address 192.168.12.1 255.255.255.0

frame-relay interface-dlci 101

!

interface Serial1/0.102 point-to-point

ip address 192.168.13.1 255.255.255.0

frame-relay interface-dlci 102

BEast:

interface Serial1/0

no ip address

encapsulation frame-relay

no frame-relay inverse-arp

!

interface Serial1/0.110 point-to-point

ip address 192.168.12.2 255.255.255.0

frame-relay interface-dlci 110

!

interface Serial1/0.203 point-to-point

ip address 192.168.32.1 255.255.255.0

frame-relay interface-dlci 203

BWest:

interface Serial1/0

no ip address

encapsulation frame-relay

no frame-relay inverse-arp

!

interface Serial1/0.201 point-to-point

ip address 192.168.13.2 255.255.255.0

frame-relay interface-dlci 201

!

interface Serial1/0.302 point-to-point

ip address 192.168.32.2 255.255.255.0

frame-relay interface-dlci 302

FR connectivity check

BWest#ping 192.168.13.1

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 192.168.13.1, timeout is 2 seconds:

!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 24/80/108 ms

BWest#

BWest#ping 192.168.32.1

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 192.168.32.1, timeout is 2 seconds:

!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 40/72/120 ms

BWest#

IPv6 GRE

It is a point-to-point tunnel, so the tunnel source and destination are pre-configured; the logical topology is illustrated in figure 3.

Figure 3: Logical topology

Table 2: Point-to-point GRE parameters on BNorth

Tunnelling parameters      BEast               BWest
Tunnel interface           Tunnel 12           Tunnel 13
Tunnel IPv6 address/mask   2001:a:a:ac::1/64   2001:a:a:ab::1/64
Tunnel source interface    s1/0.101            s1/0.102
Tunnel destination         192.168.12.2        192.168.13.2
Tunnel mode                gre ip              gre ip

BNorth:

interface Tunnel12

no ip address

ipv6 address 2001:A:A:AC::1/64

tunnel source Serial1/0.101

tunnel destination 192.168.12.2

tunnel mode gre ip

!

interface Tunnel13

no ip address

ipv6 address 2001:A:A:AB::1/64

tunnel source Serial1/0.102

tunnel destination 192.168.13.2

tunnel mode gre ip

!

!! Each border router is configured with static routes to the other sites’ internal networks; each /32 aggregate covers both the remote inter-router link and the remote internal /64

ipv6 route 2001:B::/32 Tunnel13

ipv6 route 2001:C::/32 Tunnel12

BEast:

Table 3: Point-to-point GRE parameters on BEast

Tunnelling parameters      BNorth              BWest
Tunnel interface           Tunnel 21           Tunnel 23
Tunnel IPv6 address/mask   2001:a:a:ac::2/64   2001:a:a:bc::1/64
Tunnel source interface    s1/0.110            s1/0.203
Tunnel destination         192.168.12.1        192.168.32.2
Tunnel mode                gre ip              gre ip

interface Tunnel21

no ip address

ipv6 address 2001:A:A:AC::2/64

tunnel source Serial1/0.110

tunnel destination 192.168.12.1

tunnel mode gre ip

!

interface Tunnel23

no ip address

ipv6 address 2001:A:A:BC::1/64

tunnel source Serial1/0.203

tunnel destination 192.168.32.2

tunnel mode gre ip

!

ipv6 route 2001:A::/32 Tunnel21

ipv6 route 2001:B::/32 Tunnel23

BWest:

Table 4: Point-to-point GRE parameters on BWest

Tunnelling parameters      BEast               BNorth
Tunnel interface           Tunnel 32           Tunnel 31
Tunnel IPv6 address/mask   2001:a:a:bc::2/64   2001:a:a:ab::2/64
Tunnel source interface    s1/0.302            s1/0.201
Tunnel destination         192.168.32.1        192.168.13.1
Tunnel mode                gre ip              gre ip

interface Tunnel31

no ip address

ipv6 address 2001:A:A:AB::2/64

tunnel source Serial1/0.201

tunnel destination 192.168.13.1

tunnel mode gre ip

!

interface Tunnel32

no ip address

ipv6 address 2001:A:A:BC::2/64

tunnel source Serial1/0.302

tunnel destination 192.168.32.1

tunnel mode gre ip

!

ipv6 route 2001:A::/32 Tunnel31

ipv6 route 2001:C::/32 Tunnel32

CONNECTIVITY CHECK

For each destination site, IPv6 traffic takes the corresponding tunnel and is encapsulated into an IPv4 packet with that tunnel’s source and destination addresses.
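
The forwarding decision can also be checked directly on a border router before tracing; a minimal sketch using the BNorth naming above (output omitted):

BNorth#show ipv6 route 2001:C::/32
BNorth#show interfaces tunnel 12

The static route should point at Tunnel12, and the tunnel interface output should show the IPv4 source and destination configured earlier.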

Northv6#trace 2001:c:c:cc::1

Type escape sequence to abort.

Tracing the route to 2001:C:C:CC::1

1 2001:A:A:A::1 36 msec 32 msec 48 msec

2 2001:A:A:AC::2 132 msec 72 msec 100 msec

3 2001:C:C:CC::1 148 msec 196 msec 120 msec

Northv6#trace 2001:b:b:bb::1

Type escape sequence to abort.

Tracing the route to 2001:B:B:BB::1

1 2001:A:A:A::1 40 msec 36 msec 12 msec

2 2001:A:A:AB::2 136 msec 136 msec 44 msec

3 2001:B:B:BB::1 136 msec 68 msec 188 msec

Northv6#

DEBUGGING:

Figure 4: IPv6 GRE traffic capture

BNorth#
*Mar  1 00:41:39.911: ICMPv6: Sending ICMP timeout to 2001:A:A:A::2
*Mar  1 00:41:39.939: ICMPv6: Sending ICMP timeout to 2001:A:A:A::2
*Mar  1 00:41:39.943: ICMPv6: Sending ICMP timeout to 2001:A:A:A::2
*Mar  1 00:41:39.947: Tunnel12: GRE/IP encapsulated 192.168.12.1->192.168.12.2 (linktype=79, len=72)
*Mar  1 00:41:39.947: Tunnel12 count tx, adding 24 encap bytes
*Mar  1 00:41:39.951: Tunnel12: GRE/IP to classify 192.168.12.2->192.168.12.1 (len=120 type=0x86DD ttl=254 tos=0x0)
*Mar  1 00:41:39.955: Tunnel12: GRE/IP encapsulated 192.168.12.1->192.168.12.2 (linktype=79, len=72)

BNorth#
*Mar  1 00:41:49.927: ICMPv6: Received ICMPv6 packet from FE80::C001:14FF:FED1:0, type 135
*Mar  1 00:41:49.947: ICMPv6: Received ICMPv6 packet from FE80::C001:14FF:FED1:0, type 136
BNorth#

Frame Relay connectivity


Frame Relay was one of the main topics of the CCIE lab exam until v5.0, when it was replaced by MPLS. Some enterprises still use it, though, so it is worth having some understanding of the related concepts, such as point-to-point and multipoint interfaces, sub-interfaces, and inverse ARP (enabled or disabled).

In this post two main topologies are treated: the first with point-to-point interfaces and sub-interfaces, the second with multipoint interfaces and sub-interfaces. For each case, inverse ARP is tested both disabled and enabled.

  I) Point-to-point

I-a) No Inverse ARP

  • Interface
  • Sub-interface
  • Connectivity check

I-b) Inverse ARP

  • Interface
  • Sub-interface

  II) Point-to-multipoint

II-a) No Inverse ARP

  • Interface
  • Sub-interface
  • Connectivity check

II-b) Inverse ARP

  • Interface
  • Sub-interface
  • Connectivity check

First, let’s start with a very brief recap of “LMI” and “inverse ARP”:

LMI (Local Management Interface): manages the local access link between the FR router and the service provider switch and maintains the status between the two devices.

The router sends a status enquiry message every 10 seconds and the FR switch responds with a status message (keepalive); every sixth message is a full status report carrying information about the PVCs and DLCIs provisioned on the router’s interface.

LMI also triggers the router to send inverse ARP messages (announcing the router’s IP over the VC).

Inverse ARP: allows an FR router to react to a received LMI “PVC up” message and announce its IP address to the other end of the PVC. This is particularly useful when the IP address of the other end of the PVC is not known, or when an FR router interface/sub-interface ends more than one PVC.
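
Both mechanisms are easy to observe on a router; a minimal checklist of standard IOS commands (output omitted):

!! watch status enquiries / status messages and the periodic full PVC report
debug frame-relay lmi
!! LMI type and counters on the access link
show frame-relay lmi
!! static vs. dynamic (inverse-ARP learned) address-to-DLCI mappings
show frame-relay map
!! per-PVC status and statistics
show frame-relay pvc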

I) Point-to-point


I-a) No Inverse ARP

Interface

– When using a “physical interface” to end a point-to-point PVC with a “sub-interface” on the other side, a static mapping is needed to map the local DLCI to the next-hop IP.

SpokeA:

interface Serial0/0
ip address 172.16.0.18 255.255.255.240

encapsulation frame-relay

ip ospf network point-to-point

frame-relay map ip 172.16.0.17 110 broadcast

frame-relay interface-dlci 110

no frame-relay inverse-arp

SpokeB:

interface Serial0/0
ip address 172.16.0.34 255.255.255.240

encapsulation frame-relay

ip ospf network point-to-point

frame-relay map ip 172.16.0.33 201 broadcast

frame-relay interface-dlci 201

no frame-relay inverse-arp

Sub-interface

– Only the interface-local DLCI is configured.

– No need for a static mapping to the other side, because it is a point-to-point “sub-interface” and there is only one DLCI on the other side.

HUB:

interface Serial0/0
no ip address

encapsulation frame-relay

no frame-relay inverse-arp

!

interface Serial0/0.101 point-to-point

ip address 172.16.0.17 255.255.255.240

ip ospf network point-to-point

frame-relay interface-dlci 101

!

interface Serial0/0.102 point-to-point

ip address 172.16.0.33 255.255.255.240

ip ospf network point-to-point

frame-relay interface-dlci 102

!

interface Serial0/0.103 point-to-point

ip address 172.16.0.49 255.255.255.240

ip ospf network point-to-point

frame-relay interface-dlci 103

SpokeC:

interface Serial0/0
no ip address

encapsulation frame-relay

no frame-relay inverse-arp

!

interface Serial0/0.301 point-to-point

ip address 172.16.0.50 255.255.255.240

ip ospf network point-to-point

frame-relay interface-dlci 301

!! on a point-to-point sub-interface it doesn’t matter whether inverse ARP is enabled or not

!!no frame-relay inverse-arp

Connectivity check

HUB:

HUB#ping 172.16.0.18
Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 172.16.0.18, timeout is 2 seconds:

!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 20/55/96 ms

HUB#ping 172.16.0.34

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 172.16.0.34, timeout is 2 seconds:

!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 52/70/112 ms

HUB#ping 172.16.0.50

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 172.16.0.50, timeout is 2 seconds:

!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 32/84/168 ms

HUB#

I-b) Inverse ARP

Interface

– When using a “physical interface” to end a point-to-point PVC with a “sub-interface” on the other side, and inverse ARP is enabled, there is no need for a static mapping.

Sub-interface

– Only the interface-local DLCI is configured.

– No need for a static mapping to the other side, because it is a point-to-point “sub-interface” and there is only one DLCI on the other side.

II) Point-to-multipoint


II-a) No Inverse ARP

Interface

– With inverse ARP disabled, you have to configure static mappings of the local DLCIs (PVCs) to the next-hop IP addresses, because the interface ends more than one PVC.

SpokeB:

interface Serial0/0
ip address 172.16.0.34 255.255.255.240

encapsulation frame-relay

ip ospf network point-to-multipoint

frame-relay map ip 172.16.0.33 201 broadcast

frame-relay map ip 172.16.0.35 203 broadcast

no frame-relay inverse-arp

Sub-interface

– As with physical interfaces, on multipoint sub-interfaces you need to configure static mappings of the interface-local DLCIs to the remote IPs, because the sub-interface ends more than one PVC.

HUB:

interface Serial0/0
no ip address

encapsulation frame-relay

no frame-relay inverse-arp

!

interface Serial0/0.102 multipoint

ip address 172.16.0.33 255.255.255.240

ip ospf network point-to-multipoint

frame-relay map ip 172.16.0.34 102 broadcast

frame-relay map ip 172.16.0.35 103 broadcast

SpokeC:

interface Serial0/0
no ip address

encapsulation frame-relay

no frame-relay inverse-arp

!

interface Serial0/0.300 multipoint

ip address 172.16.0.35 255.255.255.240

ip ospf network point-to-multipoint

frame-relay map ip 172.16.0.33 301 broadcast

frame-relay map ip 172.16.0.34 302 broadcast

Connectivity check

HUB:

HUB#sh frame map
Serial0/0.102 (up): ip 172.16.0.34 dlci 102(0x66,0x1860), static,

broadcast,

CISCO, status defined, active

Serial0/0.102 (up): ip 172.16.0.35 dlci 103(0x67,0x1870), static,

broadcast,

CISCO, status defined, active

Serial0/0.101 (up): point-to-point dlci, dlci 101(0x65,0x1850), broadcast

status defined, active

HUB#

HUB#sh ip route

172.16.0.0/16 is variably subnetted, 4 subnets, 2 masks

C       172.16.0.32/28 is directly connected, Serial0/0.102

O       172.16.0.34/32 [110/64] via 172.16.0.34, 00:19:49, Serial0/0.102

O       172.16.0.35/32 [110/64] via 172.16.0.35, 00:19:49, Serial0/0.102

C       172.16.0.16/28 is directly connected, Serial0/0.101

10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks

O IA    10.10.0.1/32 [110/65] via 172.16.0.18, 00:19:49, Serial0/0.101

C       10.0.1.0/24 is directly connected, Loopback0

O IA    10.30.0.1/32 [110/65] via 172.16.0.35, 00:19:49, Serial0/0.102

O IA    10.20.0.1/32 [110/65] via 172.16.0.34, 00:19:49, Serial0/0.102

HUB#

HUB#ping 172.16.0.34
Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 172.16.0.34, timeout is 2 seconds:

!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 28/84/132 ms

HUB#ping 172.16.0.35

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 172.16.0.35, timeout is 2 seconds:

!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 24/110/168 ms

HUB#ping

Protocol [ip]:

Target IP address: 10.20.0.1

Repeat count [5]:

Datagram size [100]:

Timeout in seconds [2]:

Extended commands [n]: y

Source address or interface: 10.0.1.1

Type of service [0]:

Set DF bit in IP header? [no]:

Validate reply data? [no]:

Data pattern [0xABCD]:

Loose, Strict, Record, Timestamp, Verbose[none]:

Sweep range of sizes [n]:

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 10.20.0.1, timeout is 2 seconds:

Packet sent with a source address of 10.0.1.1

!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 60/72/96 ms

HUB#

II-b) Inverse ARP

Interface

Whether it is a sub-interface or a physical interface, with inverse ARP enabled on point-to-multipoint it is not required to statically map local DLCIs to next hops:

SpokeB:

interface Serial0/0
ip address 172.16.0.34 255.255.255.240

encapsulation frame-relay

ip ospf network point-to-multipoint

ip ospf priority 0

serial restart-delay 0

no dce-terminal-timing-enable

frame-relay interface-dlci 201

frame-relay interface-dlci 203

Sub-interface

– Inverse ARP will discover which DLCI to use to reach a particular adjacent IP address; LMI triggers the router to send the inverse ARP messages.

– It is recommended to disable inverse ARP in the CCIE lab exam, otherwise routers may end up connected in a way that does not match the lab diagram. In general, pay particular attention to default configurations and parameters.
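
One caveat worth noting: mappings already learned dynamically survive the disabling of inverse ARP until they are cleared or the router reloads. A minimal cleanup sketch (on some IOS releases the exec command is spelled “clear frame-relay inarp”):

SpokeB(config-if)#no frame-relay inverse-arp
SpokeB(config-if)#end
SpokeB#clear frame-relay-inarp
SpokeB#show frame-relay map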

HUB:

interface Serial0/0
no ip address

encapsulation frame-relay

!

interface Serial0/0.102 multipoint

ip address 172.16.0.33 255.255.255.240

ip ospf network point-to-multipoint

frame-relay interface-dlci 102

frame-relay interface-dlci 103

SpokeC:

interface Serial0/0
no ip address

encapsulation frame-relay

!

interface Serial0/0.300 multipoint

ip address 172.16.0.35 255.255.255.240

ip ospf network point-to-multipoint

frame-relay interface-dlci 301

frame-relay interface-dlci 302

Connectivity check

HUB:

HUB(config-subif)#do sh frame map
Serial0/0.102 (up): ip 172.16.0.34 dlci 102(0x66,0x1860), dynamic,

broadcast,

CISCO, status defined, active

Serial0/0.102 (up): ip 172.16.0.35 dlci 103(0x67,0x1870), dynamic,

broadcast,

CISCO, status defined, active

Serial0/0.101 (up): point-to-point dlci, dlci 101(0x65,0x1850), broadcast

status defined, active

HUB(config-subif)#

HUB(config-subif)#do ping
Protocol [ip]:

Target IP address: 10.20.0.1

Repeat count [5]:

Datagram size [100]:

Timeout in seconds [2]:

Extended commands [n]: y

Source address or interface: 10.0.1.1

Type of service [0]:

Set DF bit in IP header? [no]:

Validate reply data? [no]:

Data pattern [0xABCD]:

Loose, Strict, Record, Timestamp, Verbose[none]:

Sweep range of sizes [n]:

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 10.20.0.1, timeout is 2 seconds:

Packet sent with a source address of 10.0.1.1

!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 44/80/156 ms

HUB(config-subif)#

SpokeB:

SpokeB(config-if)#do sh frame map
Serial0/0 (up): ip 172.16.0.33 dlci 201(0xC9,0x3090), dynamic,

broadcast,

CISCO, status defined, active

Serial0/0 (up): ip 172.16.0.35 dlci 203(0xCB,0x30B0), dynamic,

broadcast,, status defined, active

SpokeB(config-if)#

SpokeC:

SpokeC(config-subif)#do sh frame map
Serial0/0.300 (up): ip 172.16.0.33 dlci 301(0x12D,0x48D0), dynamic,

broadcast,

CISCO, status defined, active

Serial0/0.300 (up): ip 172.16.0.34 dlci 302(0x12E,0x48E0), dynamic,

broadcast,

CISCO, status defined, active

SpokeC(config-subif)#

Debugging LMI:

HUB#

*Mar 1 08:55:48.173: Serial0/0(out): StEnq, myseq 152, yourseen 20, DTE up

*Mar 1 08:55:48.173: datagramstart = 0x7B6D434, datagramsize = 13

*Mar 1 08:55:48.177: FR encap = 0xFCF10309

*Mar 1 08:55:48.177: 00 75 01 01 01 03 02 98 14

*Mar 1 08:55:48.201:

*Mar 1 08:55:48.205: Serial0/0(in): Status, myseq 152, pak size 13

*Mar 1 08:55:48.205: RT IE 1, length 1, type 1

*Mar 1 08:55:48.209: KA IE 3, length 2, yourseq 21, myseq 152

*Mar 1 08:55:58.173: Serial0/0(out): StEnq, myseq 153, yourseen 21, DTE up

*Mar 1 08:55:58.173: datagramstart = 0x7B6D2F4, datagramsize = 13

*Mar 1 08:55:58.177: FR encap = 0xFCF10309

*Mar 1 08:55:58.177: 00 75 01 01 01 03 02 99 15

*Mar 1 08:55:58.185:

*Mar 1 08:55:58.213: Serial0/0(in): Status, myseq 153, pak size 13

*Mar 1 08:55:58.217: RT IE 1, length 1, type 1

*Mar 1 08:55:58.217: KA IE 3, length 2, yourseq 22, myseq 153

*Mar 1 08:56:08.173: Serial0/0(out): StEnq, myseq 154, yourseen 22, DTE up

*Mar 1 08:56:08.177: datagramstart = 0x7B6CDF4, datagramsize = 13

*Mar 1 08:56:08.177: FR encap = 0xFCF10309

*Mar 1 08:56:08.177: 00 75 01 01 01 03 02 9A 16

*Mar 1 08:56:08.185:

*Mar 1 08:56:08.221: Serial0/0(in): Status, myseq 154, pak size 37

*Mar 1 08:56:08.221: RT IE 1, length 1, type 0

*Mar 1 08:56:08.225: KA IE 3, length 2, yourseq 23, myseq 154

*Mar 1 08:56:08.225: PVC IE 0x7 , length 0x6 , dlci 101, status 0x2 , bw 0

*Mar 1 08:56:08.229: PVC IE 0x7 , length 0x6 , dlci 102, status 0x0 , bw 0

*Mar 1 08:56:08.229: PVC IE 0x7 , length 0x6 , dlci 103, status 0x2 , bw 0

*Mar 1 08:56:18.173: Serial0/0(out): StEnq, myseq 155, yourseen 23, DTE up

*Mar 1 08:56:18.173: datagramstart = 0x7B6D6B4, datagramsize = 13

*Mar 1 08:56:18.177: FR encap = 0xFCF10309

*Mar 1 08:56:18.177: 00 75 01 01 01 03 02 9B 17

*Mar 1 08:56:18.185:
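
Notice in this capture that most exchanges are simple keepalives (RT IE type 1), while the full status report (myseq 154, pak size 37, RT IE type 0) carries one PVC IE per DLCI: status 0x2 marks DLCIs 101 and 103 as active, and DLCI 102’s status 0x0 indicates it is defined but inactive at that moment.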
