QoS and IPSec interactions


The effectiveness of Differentiated Services QoS depends on a consistent, coherent QoS policy deployed on a per-hop basis (PHB) along the traffic path.

Services such as IPSec encryption and tunnelling can interfere with QoS. The purpose of this article is to clarify these interactions.

 

Outline

  • Overview
  • Conclusions
  • Examples of deployment (Lab 1, Lab 2)

 

Overview

Interactions between QoS and IPSec are based on three principles:

  • Order of operations
  • QoS criteria
  • QoS policy location

 

  1. Order of operations: By default, IOS performs tunnelling and VPN operations first and then applies the QoS policy.

     

    Figure1: default order of operation

    With QoS pre-classification the previous order is reversed: the QoS policy is evaluated first, and then tunnelling and VPN processing take place.

     

    Figure2: QoS pre-classification

    Technically, the QoS operation is still performed after IPSec, but it uses the original header fields preserved in a temporary memory structure.

     

  2. QoS criteria:
    What is your QoS policy matching on?

    With GRE tunnelling or IPSec encryption, a new header is built and, by default, only the ToS field is copied from the original header to the new tunnel or IPSec header (tunnel mode). So be careful if your classification criteria are based on fields other than ToS/DSCP!

     

    Figure3: TOS/DSCP preservation

     

  3. QoS policy location:
    QoS traffic classification is based on inspection of IP header fields such as addresses, protocol ID, ports and ToS.

    What is visible to the QoS process depends on where your QoS policy is attached:

  • On the tunnel interface, before header modification (tunnelling and VPN operations).
  • On the physical interface, after header modification (tunnelling and VPN operations).
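As a rough mental model of the three principles above, here is a minimal sketch in plain Python (not IOS code; endpoint names are illustrative) of what a classifier on the physical interface can "see" after tunnel-mode encapsulation:

```python
# Illustrative model: after tunnel-mode encapsulation, a new outer header
# is built and only the ToS byte is copied from the inner header.

def encapsulate(inner):
    """Build the outer (tunnel/IPSec) header for an inner packet."""
    return {
        "src": "tunnel-endpoint-A",   # illustrative outer addresses
        "dst": "tunnel-endpoint-B",
        "proto": "esp",               # inner header is now encrypted payload
        "tos": inner["tos"],          # ToS/DSCP is preserved by default
    }

inner = {"src": "192.168.2.7", "dst": "192.168.1.6",
         "proto": "icmp", "tos": 40}
outer = encapsulate(inner)

assert outer["tos"] == inner["tos"]      # ToS survives encapsulation
assert outer["proto"] != inner["proto"]  # original protocol is hidden
assert outer["src"] != inner["src"]      # original addresses are hidden
```

A policy on the tunnel interface classifies `inner`; a policy on the physical interface (without pre-classification) only ever sees `outer`.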

The following illustrations should give a better picture of how QoS and IPSec are related.

 

Figure 4: ONLY QoS policy applied to physical interface (header visible)

 

Figure 5: IPSec + QoS policy applied to physical interface (only ToS preserved)

 

Figure 6: IPSec + QoS pre-classification (original header visible)

 

 

 

 

Figure 7: IPSec + QoS policy applied to tunnel interface (original IP header visible)

 

Table 1 summarises all combinations of the previously mentioned cases:

 

Table 1: summary

cases                                   1    2    3    4    5    6    7    8
QoS policy on physical interface        X    X    X                   X
QoS policy on tunnel interface                         X    X    X         X
Default order of operations             X              X    X         X
QoS pre-classification                       X    X              X         X
Criteria: ONLY ToS                      X    X         X         X
Criteria: other than ToS (IP, ports)              X         X         X    X
Result                                  OK   OK   OK   OK   OK   OK  fails  OK

QoS fails only in case 7: the policy is on the physical interface, the default order of operations applies, and the criteria are fields other than ToS.

Conclusions:

 

QoS pre-classification is needed when:

• Classification is based on packet IP header information (src/dst IP, protocol ID, port numbers, flags…)

AND

• The service policy is applied to the physical interface (default order of operations)

 

 

QoS pre-classification is NOT needed when:

  • Classification is based ONLY on the ToS criterion.

OR

  • The QoS service policy is applied to the tunnel interface (before VPN processing).
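These two rules can be condensed into a tiny decision helper (illustrative Python, not IOS; the function name is mine):

```python
def needs_preclassify(criteria_tos_only: bool, policy_on_tunnel: bool) -> bool:
    """qos pre-classify is required only when the policy sits on the
    physical interface AND matches on fields other than ToS/DSCP."""
    return (not criteria_tos_only) and (not policy_on_tunnel)

# The only combination that fails without pre-classification:
assert needs_preclassify(criteria_tos_only=False, policy_on_tunnel=False)
# ToS-only policies work everywhere without it:
assert not needs_preclassify(criteria_tos_only=True, policy_on_tunnel=False)
# Tunnel-interface policies see the original header anyway:
assert not needs_preclassify(criteria_tos_only=False, policy_on_tunnel=True)
```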

 

Lab 1: IPSec applied to the physical interface

1-a)

  • Default QoS order of operations (IPSec -> QoS)
  • QoS is based on both DSCP and IP criteria

Figure5: IPSec encryption

R3:

crypto isakmp policy 101
 encr 3des
 authentication pre-share
 group 2
crypto isakmp key cisco address 172.16.12.3
!
crypto ipsec transform-set MYTRANSFORMSET esp-3des esp-sha-hmac
!
crypto ipsec profile MYIPSECPROFILE
 set transform-set MYTRANSFORMSET
!
crypto map MYCRYPTOMAP 10 ipsec-isakmp
 set peer 172.16.12.3
 set transform-set MYTRANSFORMSET
 match address IPSECACL
!
class-map match-all MYNOTIPMAP
 match not access-group name IPSECACL
 match ip dscp af11
class-map match-all MYTOS5MAP
 match access-group name IPSECACL
 match ip dscp af11
class-map match-all MYNOTTOS5MAP
 match access-group name IPSECACL
 match not ip dscp af11
!
policy-map MYQOSPOLICY
 class MYTOS5MAP
  bandwidth 100
 class MYNOTTOS5MAP
  drop
 class MYNOTIPMAP
  drop
 class class-default
!
interface FastEthernet0/1
 ip address 172.16.12.4 255.255.255.0
 crypto map MYCRYPTOMAP
 service-policy output MYQOSPOLICY
!
ip access-list extended IPSECACL
 permit icmp host 192.168.2.7 host 192.168.1.6

 

IPSec traffic (new IPSec ESP header) is caught by the class “MYNOTIPMAP”, and the drop policy is applied.

 

R3#sh policy-map int fa0/1

FastEthernet0/1

 

Service-policy output: MYQOSPOLICY

 

Class-map: MYTOS5MAP (match-all)

0 packets, 0 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: access-group name IPSECACL

Match: ip dscp af11 (10)

Queueing

Output Queue: Conversation 265

Bandwidth 100 (kbps)Max Threshold 64 (packets)

(pkts matched/bytes matched) 0/0

(depth/total drops/no-buffer drops) 48/0/0

 

Class-map: MYNOTTOS5MAP (match-all)

0 packets, 0 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: access-group name IPSECACL

Match: not ip dscp af11 (10)

drop

 

Class-map: MYNOTIPMAP (match-all)


66 packets, 10956 bytes

5 minute offered rate 0 bps, drop rate 55000 bps

Match: not access-group name IPSECACL

Match: ip dscp af11 (10)


drop

 

Class-map: class-default (match-any)

15 packets, 1520 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: any

R3#

R3 :

R3#sh cry isa sa

IPv4 Crypto ISAKMP SA

dst src state conn-id slot status

172.16.12.4 172.16.12.3 QM_IDLE 1001 0 ACTIVE

 

IPv6 Crypto ISAKMP SA

 

R3#

 

ICMP traffic is generated from R6 toward R7 with DSCP=af11
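The extended ping below sets the whole ToS byte; a ToS of 40 decimal corresponds exactly to DSCP af11, as this quick check shows (plain Python):

```python
tos = 40                  # value typed at the "Type of service" prompt
dscp = tos >> 2           # DSCP is the top 6 bits of the ToS byte
assert dscp == 10         # 10 decimal = AF11

# AFxy code point value = 8*x + 2*y
x, y = dscp // 8, (dscp % 8) // 2
assert (x, y) == (1, 1)   # hence "af11", matching "match ip dscp af11"
```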

 

R6#ping

Protocol [ip]:

Target IP address: 192.168.2.7

Repeat count [5]: 100000

Datagram size [100]:

Timeout in seconds [2]:

Extended commands [n]: y

Source address or interface: 40

% Invalid source

Source address or interface:

Type of service [0]: 40

Set DF bit in IP header? [no]:

Validate reply data? [no]:

Data pattern [0xABCD]:

Loose, Strict, Record, Timestamp, Verbose[none]:

Sweep range of sizes [n]:

Type escape sequence to abort.

Sending 100000, 100-byte ICMP Echos to 192.168.2.7, timeout is 2 seconds:

…………………………………………………………….

  • RESULT = NOK

1-b) Apply QoS pre-classification (QoS -> IPSec)

R3:

crypto map MYCRYPTOMAP 10 ipsec-isakmp
 set peer 172.16.12.3
 set transform-set MYTRANSFORMSET
 match address IPSECACL
 qos pre-classify
!

QoS is performed first (the class “MYTOS5MAP” is matched), and then IPSec is applied.

R3#sh policy-map int fa0/1

FastEthernet0/1

 

Service-policy output: MYQOSPOLICY

 

Class-map: MYTOS5MAP (match-all)

1257 packets, 143298 bytes

5 minute offered rate 6000 bps, drop rate 0 bps

Match: access-group name IPSECACL

Match: ip dscp af11 (10)

Queueing

Output Queue: Conversation 265

Bandwidth 100 (kbps)Max Threshold 64 (packets)

(pkts matched/bytes matched) 0/0

(depth/total drops/no-buffer drops) 0/0/0

 

Class-map: MYNOTTOS5MAP (match-all)

0 packets, 0 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: access-group name IPSECACL

Match: not ip dscp af11 (10)

drop

 

Class-map: MYNOTIPMAP (match-all)

0 packets, 0 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: not access-group name IPSECACL

Match: ip dscp af11 (10)

drop

 

Class-map: class-default (match-any)

31 packets, 4737 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: any

R3#

From R6, the source of the traffic:

R6#ping

Protocol [ip]:

Target IP address: 192.168.2.7

Repeat count [5]: 1000000

Datagram size [100]:

Timeout in seconds [2]:

Extended commands [n]: y

Source address or interface:

Type of service [0]: 40

Set DF bit in IP header? [no]:

Validate reply data? [no]:

Data pattern [0xABCD]:

Loose, Strict, Record, Timestamp, Verbose[none]:

Sweep range of sizes [n]:

Type escape sequence to abort.

Sending 1000000, 100-byte ICMP Echos to 192.168.2.7, timeout is 2 seconds:

…………………………U.U.U.U.U………!!!!!!!!!!!!!!!!!!!!!!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

  • RESULT = OK

Lab 2: IPSec applied to the GRE tunnel

  • 2-a):
    • Default QoS order of operations (IPSec -> QoS)
    • QoS is based on both DSCP and IP criteria

Figure5: IPSec GRE tunnel encryption

 

 

interface Tunnel0
 crypto map MYCRYPTOMAP
!
interface FastEthernet0/1
 service-policy output MYQOSPOLICY

R3#sh policy-map int fa0/1

FastEthernet0/1

 

Service-policy output: MYQOSPOLICY

 

Class-map: MYTOS5MAP (match-all)


0 packets, 0 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: access-group name IPSECACL

Match: ip dscp af11 (10)

Queueing

Output Queue: Conversation 265

Bandwidth 100 (kbps)Max Threshold 64 (packets)

(pkts matched/bytes matched) 0/0

(depth/total drops/no-buffer drops) 0/0/0

 

Class-map: MYNOTTOS5MAP (match-all)

0 packets, 0 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: access-group name IPSECACL

Match: not ip dscp af11 (10)

drop

 


Class-map: MYNOTIPMAP (match-all)

129 packets, 24510 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: not access-group name IPSECACL

Match: ip dscp af11 (10)

drop

 

Class-map: class-default (match-any)

30 packets, 3060 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: any

R3#

  • RESULT = NOK

2-b) Apply QoS pre-classification (QoS -> IPSec)

 

R3#

interface Tunnel0
 crypto map MYCRYPTOMAP
!
interface FastEthernet0/1
 service-policy output MYQOSPOLICY
!
crypto map MYCRYPTOMAP 10 ipsec-isakmp
 qos pre-classify

R3#sh policy-map int fa0/1

FastEthernet0/1

 

Service-policy output: MYQOSPOLICY

 

Class-map: MYTOS5MAP (match-all)

1689 packets, 320910 bytes

5 minute offered rate 15000 bps, drop rate 0 bps

Match: access-group name IPSECACL


Match: ip dscp af11 (10)

Queueing

Output Queue: Conversation 265

Bandwidth 100 (kbps)Max Threshold 64 (packets)

(pkts matched/bytes matched) 0/0

(depth/total drops/no-buffer drops) 0/0/0

 

Class-map: MYNOTTOS5MAP (match-all)

0 packets, 0 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: access-group name IPSECACL

Match: not ip dscp af11 (10)

drop

 

Class-map: MYNOTIPMAP (match-all)

0 packets, 0 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: not access-group name IPSECACL

Match: ip dscp af11 (10)

drop

 

Class-map: class-default (match-any)

2 packets, 120 bytes

5 minute offered rate 0 bps, drop rate 0 bps

Match: any

R3#

  • RESULT = OK

MQC – Frame Relay policing and marking


In a previous post we saw how to configure the Frame Relay cloud (the service provider FR switch) to strictly enforce the committed contract with the customer, by enabling policing on the incoming interface and traffic shaping with congestion management on the outgoing interface. Nevertheless, the customer can choose to tell the service provider which packets should be dropped when congestion occurs in the FR cloud; that is the topic of this post.

Figure1: Lab topology

As depicted in figure1, shaping is configured on the Frame Relay switch outgoing interface with congestion management: all packets marked with the “DE” bit will be discarded if the s1/0 shaping queue becomes congested (DE threshold reached).

R2, the sender, is responsible for marking, on behalf of the FRS, all packets it judges eligible for discard.

A traffic generator station connected to R2 produces two different flows toward the receiver station connected to R1: the first flow, traffic1, represents critical application traffic; the second, traffic2, is considered junk traffic.

As shown below, in the first section several tests are performed without any policing and marking; in the second section the same tests are repeated with policing and marking:

1) Without policing and DE marking

1-a) Test1:

Critical application: traffic1 rate 32kbps – destination (192.168.201.10/udp 5001).

1-b) Test2:

Critical application: traffic1 – 32kbps – destination (192.168.201.10/udp 5001).

Junk traffic: traffic2 – 32kbps – destination (192.168.201.10/udp 5003).

1-c) Test3:

Critical application: traffic1 – 32kbps – destination (192.168.201.10/udp 5001).

Junk traffic: traffic2 – 64kbps – destination (192.168.201.10/udp 5003).

2) With policing and DE marking

2-a) Test4:

Critical application: traffic1 – 32kbps – destination (192.168.201.10/udp 5001).

Junk traffic: traffic2 – 32kbps – destination (192.168.201.10/udp 5003).

2-b) Test5:

Critical application: traffic1 – 32kbps – destination (192.168.201.10/udp 5001).

Junk traffic: traffic2 – 64kbps – destination (192.168.201.10/udp 5003).

 

1) Without policing and DE marking

1-a) Test1:

  • Critical application: traffic1 rate 32kbps – destination (192.168.201.10/udp 5001).
  • No policing at the sender R2.

FRS:

FRS_QOS#sh frame pvc 201

 

PVC Statistics for interface Serial1/0 (Frame Relay DCE)

 

DLCI = 201, DLCI USAGE = SWITCHED, PVC STATUS = ACTIVE, INTERFACE = Serial1/0

 

input pkts 77 output pkts 289 in bytes 26225

out bytes 379518 dropped pkts 0 in pkts dropped 0

out pkts dropped 0 out bytes dropped 0

in FECN pkts 0 in BECN pkts 0 out FECN pkts 0

out BECN pkts 0 in DE pkts 0 out DE pkts 0

out bcast pkts 0 out bcast bytes 0

30 second input rate 0 bits/sec, 0 packets/sec


30 second output rate 32000 bits/sec, 2 packets/sec

switched pkts 77

Detailed packet drop counters:

no out intf 0 out intf down 0 no out PVC 0

in PVC down 0 out PVC down 0 pkt too big 0

shaping Q full 0 pkt above DE 0 policing drop 0

pvc create time 01:34:11, last time pvc status changed 00:11:43


Congestion DE threshold 28

cir 64000 bc 8000 be 0 byte limit 1000 interval 125

mincir 32000 byte increment 1000 Adaptive Shaping none

pkts 292 bytes 384024 pkts delayed 2 bytes delayed 936


shaping inactive

traffic shaping drops 0

Queueing strategy: fifo


Output queue 0/40, 0 drop, 6 dequeued

FRS_QOS#

The FRS output queue is not congested, so there is very little queuing and no drops.

Figure2 : Traffic1 at the destination station

Note that the jitter for traffic1 is almost 0 (good performance).

1-b) Test2:

  • Critical application: traffic1 – 32kbps – destination (192.168.201.10/udp 5001).
  • Junk traffic: traffic2 – 32kbps – destination (192.168.201.10/udp 5003).
  • No policing at the sender R2.

R2:

R2(config-fr-dlci)#do sh frame pvc 102

 

PVC Statistics for interface Serial0/0 (Frame Relay DTE)

 

DLCI = 102, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0.102

 

input pkts 120 output pkts 3754 in bytes 36944

out bytes 5535345 dropped pkts 0 in FECN pkts 0

in BECN pkts 0 out FECN pkts 0 out BECN pkts 0

in DE pkts 0 out DE pkts 0

out bcast pkts 36 out bcast bytes 11889

30 second input rate 0 bits/sec, 0 packets/sec


30 second output rate 64000 bits/sec, 5 packets/sec

pvc create time 00:33:26, last time pvc status changed 00:31:56

R2(config-fr-dlci)#

 

R2(config-fr-dlci)#do sh int s0/0

Serial0/0 is up, line protocol is up

Hardware is PowerQUICC Serial

MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,

reliability 255/255, txload 10/255, rxload 1/255

Encapsulation FRAME-RELAY, loopback not set

Keepalive set (10 sec)

LMI enq sent 207, LMI stat recvd 207, LMI upd recvd 0, DTE LMI up

LMI enq recvd 0, LMI stat sent 0, LMI upd sent 0

LMI DLCI 0 LMI type is ANSI Annex D frame relay DTE

Broadcast queue 0/64, broadcasts sent/dropped 37/0, interface broadcasts 0

Last input 00:00:04, output 00:00:00, output hang never

Last clearing of “show interface” counters 00:34:34

Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0

Queueing strategy: fifo

Output queue :0/40 (size/max)

30 second input rate 0 bits/sec, 0 packets/sec


30 second output rate 64000 bits/sec, 5 packets/sec

328 packets input, 40351 bytes, 0 no buffer

Received 0 broadcasts, 0 runts, 0 giants, 0 throttles

1 input errors, 0 CRC, 1 frame, 0 overrun, 0 ignored, 0 abort

4340 packets output, 6106331 bytes, 0 underruns

0 output errors, 0 collisions, 1 interface resets

0 output buffer failures, 0 output buffers swapped out

2 carrier transitions

DCD=up DSR=up DTR=up RTS=up CTS=up

 

R2(config-fr-dlci)#

64kbps is the aggregate bandwidth of the two applications, traffic1 and traffic2.

FRS:

FRS_QOS#sh frame pvc 201

 

PVC Statistics for interface Serial1/0 (Frame Relay DCE)

 

DLCI = 201, DLCI USAGE = SWITCHED, PVC STATUS = ACTIVE, INTERFACE = Serial1/0

 

input pkts 2 output pkts 506 in bytes 658

out bytes 757672 dropped pkts 0 in pkts dropped 0

out pkts dropped 0 out bytes dropped 0

in FECN pkts 0 in BECN pkts 0 out FECN pkts 0

out BECN pkts 0 in DE pkts 0 out DE pkts 0

out bcast pkts 0 out bcast bytes 0

30 second input rate 0 bits/sec, 0 packets/sec

30 second output rate 64000 bits/sec, 5 packets/sec

switched pkts 2

Detailed packet drop counters:

no out intf 0 out intf down 0 no out PVC 0

in PVC down 0 out PVC down 0 pkt too big 0

shaping Q full 0 pkt above DE 0 policing drop 0

pvc create time 02:11:44, last time pvc status changed 00:49:17

Congestion DE threshold 28

cir 64000 bc 8000 be 0 byte limit 1000 interval 125

mincir 32000 byte increment 1000 Adaptive Shaping none

pkts 498 bytes 745656 pkts delayed 498 bytes delayed 745656

shaping active

traffic shaping drops 0

Queueing strategy: fifo

Output queue 23/40, 0 drop, 502 dequeued

FRS_QOS#

The FRS output queue is congested, but there is no dropping because the DE threshold is not reached.

Figure3 : Traffic1 at the destination station

Traffic1 is received at the destination station at the same rate as the sending rate of 32kbps, without losses but with larger delay and jitter (Figure3).

1-c) Test3:

  • Critical application: traffic1 – 32kbps – destination (192.168.201.10/udp 5001).
  • Junk traffic: traffic2 – 64kbps – destination (192.168.201.10/udp 5003).
  • No policing at the sender R2.

R2:

R2(config-fr-dlci)#do sh int s0/0

Serial0/0 is up, line protocol is up

Hardware is PowerQUICC Serial

MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,

reliability 255/255, txload 16/255, rxload 1/255

Encapsulation FRAME-RELAY, loopback not set

Keepalive set (10 sec)

LMI enq sent 255, LMI stat recvd 255, LMI upd recvd 0, DTE LMI up

LMI enq recvd 0, LMI stat sent 0, LMI upd sent 0

LMI DLCI 0 LMI type is ANSI Annex D frame relay DTE

Broadcast queue 0/64, broadcasts sent/dropped 45/0, interface broadcasts 0

Last input 00:00:03, output 00:00:00, output hang never

Last clearing of “show interface” counters 00:42:34

Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0

Queueing strategy: fifo

Output queue :0/40 (size/max)

30 second input rate 0 bits/sec, 0 packets/sec


30 second output rate 97000 bits/sec, 8 packets/sec

406 packets input, 47591 bytes, 0 no buffer

Received 0 broadcasts, 0 runts, 0 giants, 0 throttles

1 input errors, 0 CRC, 1 frame, 0 overrun, 0 ignored, 0 abort

7079 packets output, 10124289 bytes, 0 underruns

0 output errors, 0 collisions, 1 interface resets

0 output buffer failures, 0 output buffers swapped out

2 carrier transitions

DCD=up DSR=up DTR=up RTS=up CTS=up

 

R2(config-fr-dlci)#

Figure4: effect of junk traffic (64kbps) traffic on critical traffic

As figure4 shows, the critical traffic experiences bad performance, with packet drops and high jitter.

FRS:

FRS_QOS#sh frame pvc 201

 

PVC Statistics for interface Serial1/0 (Frame Relay DCE)

 

DLCI = 201, DLCI USAGE = SWITCHED, PVC STATUS = ACTIVE, INTERFACE = Serial1/0

 

input pkts 2 output pkts 575 in bytes 658

out bytes 863650 dropped pkts 311 in pkts dropped 0

out pkts dropped 311 out bytes dropped 464782

late-dropped out pkts 311 late-dropped out bytes 464782

in FECN pkts 0 in BECN pkts 0 out FECN pkts 0

out BECN pkts 0 in DE pkts 0 out DE pkts 0

out bcast pkts 0 out bcast bytes 0

30 second input rate 0 bits/sec, 0 packets/sec

30 second output rate 63000 bits/sec, 5 packets/sec

switched pkts 2

Detailed packet drop counters:

no out intf 0 out intf down 0 no out PVC 0

in PVC down 0 out PVC down 0 pkt too big 0

shaping Q full 311 pkt above DE 0 policing drop 0

pvc create time 02:07:04, last time pvc status changed 00:44:35

Congestion DE threshold 28

cir 64000 bc 8000 be 0 byte limit 1000 interval 125

mincir 32000 byte increment 1000 Adaptive Shaping none

pkts 581 bytes 872662 pkts delayed 581 bytes delayed 872662

shaping active

traffic shaping drops 0

Queueing strategy: fifo

Output queue 39/40, 318 drop, 587 dequeued

FRS_QOS# 

The FRS output queue is heavily congested and the DE threshold is reached, therefore packets marked with the “DE” bit are dropped.

2) With policing and DE marking

2-a) Test4:

  • Critical application: traffic1 – 32kbps – destination (192.168.201.10/udp 5001).
  • Junk traffic: traffic2 – 32kbps – destination (192.168.201.10/udp 5003).
  • Policing configured on R2.

There are two ways of configuring MQC Frame Relay shaping:

The first is to use traditional MQC with “match fr-dlci” in a class map to select which traffic will be policed, and apply the policy to the main interface (figure 5).

The second is to make sure the policed traffic falls into the “class-default” class and apply the policy directly to the PVC (figure 6).

In our example we use the second method.

Figure 5: Apply policy to the main interface

Figure 6: Apply policy directly to a specific PVC

Parameter         R2 (sender)          R2 (sender)           FR switch
                  shaping (outbound)   policing (outbound)   shaping (outbound), holdq=40
CIR               64000                16000                 64kbps
Congestion mgmt   –                    –                     DE threshold 70%
Bc = CIR*Tc       64000                2000                  8000
Tc                125ms                125ms                 125ms
Be                0                    0                     0

The FR switch also polices inbound, and the receiving end-device shapes inbound; no specific values are set for them here.
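The shaping parameters above obey Tc = Bc / CIR, which is easy to verify (plain Python; values taken from the FR switch show output):

```python
def tc_ms(bc_bits, cir_bps):
    """Shaping interval Tc = Bc / CIR, in milliseconds."""
    return 1000.0 * bc_bits / cir_bps

# FR switch shaping: CIR 64000 bps, Bc 8000 bits -> Tc = 125 ms
assert tc_ms(8000, 64000) == 125.0

# DE congestion threshold: 70% of the 40-packet hold queue -> 28 packets,
# matching "Congestion DE threshold 28" in the show output
assert round(0.70 * 40) == 28
```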

R2 (sender) configuration:

ip access-list extended TRAFFIC1
 permit udp host 192.168.102.10 host 192.168.201.10 eq 5001

This ACL matches the critical application traffic.

class-map match-all TRAFFIC1_cmap
 match access-group name TRAFFIC1
!
policy-map POLICY_R2
 class TRAFFIC1_cmap
 class class-default
  police cir 16000 bc 2000
   conform-action transmit
   exceed-action set-frde-transmit

An empty class “TRAFFIC1_cmap” for the critical traffic is included in the policy-map so that everything that is not traffic1 (traffic2 included) falls into the “class-default” class; we can then apply a two-color policer (figure7) that transmits conforming traffic and marks everything exceeding with the “DE” bit.

map-class frame-relay FR_mapc_R2
 frame-relay traffic-rate 64000 64000
 service-policy output POLICY_R2

The overall traffic through the PVC is shaped to 64kbps, and a subset of that traffic is policed at 16kbps (through the policy applied in the output direction).

interface Serial0/0.102 point-to-point
 frame-relay interface-dlci 102
  class FR_mapc_R2

The map-class is then applied directly to PVC 102.
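The behaviour of the two-color policer can be sketched as a single token bucket (illustrative Python; the class name is mine, and the parameters mirror `police cir 16000 bc 2000`):

```python
class TwoColorPolicer:
    """Single token bucket: conforming packets are transmitted as-is,
    exceeding packets are transmitted with the DE bit set."""

    def __init__(self, cir_bps, bc_bytes):
        self.rate = cir_bps / 8.0   # token refill rate, bytes/sec
        self.burst = bc_bytes       # bucket depth (Bc)
        self.tokens = bc_bytes
        self.last = 0.0

    def police(self, size, now):
        # Refill tokens for the elapsed time, capped at Bc
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return "transmit"            # conform-action transmit
        return "set-frde-transmit"       # exceed-action set-frde-transmit

p = TwoColorPolicer(cir_bps=16000, bc_bytes=2000)
# A burst within Bc conforms...
assert p.police(1500, now=0.0) == "transmit"
# ...but the next large packet in the same instant exceeds and is DE-marked
assert p.police(1500, now=0.0) == "set-frde-transmit"
# After one second at 16 kbps, enough tokens have refilled to conform again
assert p.police(1500, now=1.0) == "transmit"
```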

Figure7: 2-color policing


FRS:

FRS_QOS#sh frame pvc 201

 

PVC Statistics for interface Serial1/0 (Frame Relay DCE)

 

DLCI = 201, DLCI USAGE = SWITCHED, PVC STATUS = ACTIVE, INTERFACE = Serial1/0

 

input pkts 18 output pkts 1064 in bytes 3921

out bytes 1584660 dropped pkts 0 in pkts dropped 0

out pkts dropped 0 out bytes dropped 0

in FECN pkts 0 in BECN pkts 0 out FECN pkts 0

out BECN pkts 0 in DE pkts 0 out DE pkts 86

out bcast pkts 0 out bcast bytes 0

30 second input rate 0 bits/sec, 0 packets/sec


30 second output rate 63000 bits/sec, 5 packets/sec

switched pkts 18

Detailed packet drop counters:

no out intf 0 out intf down 0 no out PVC 0

in PVC down 0 out PVC down 0 pkt too big 0

shaping Q full 0 pkt above DE 0 policing drop 0

pvc create time 01:39:33, last time pvc status changed 01:38:13


Congestion DE threshold 28

cir 64000 bc 8000 be 0 byte limit 1000 interval 125

mincir 32000 byte increment 1000 Adaptive Shaping none

pkts 1064 bytes 1584660 pkts delayed 563 bytes delayed 836566


shaping active

traffic shaping drops 0

Queueing strategy: fifo


Output queue 7/40, 0 drop, 570 dequeued

FRS_QOS#

Note that output packets are marked with the “DE” bit, but there is no dropping because the 70% threshold (28 packets out of 40) is not reached; both traffic1 and traffic2 are therefore received by R1 at the sender rate, as shown in figures 8 and 9.

Figure8: traffic1 at the receiver station

Figure9: traffic2 at the receiver station

R1:

R1#sh frame pvc 201

 

PVC Statistics for interface Serial0/0 (Frame Relay DTE)

 

DLCI = 201, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0/0.201

 

input pkts 22963 output pkts 3143 in bytes 34172308

out bytes 598859 dropped pkts 0 in pkts dropped 0

out pkts dropped 0 out bytes dropped 0

in FECN pkts 0 in BECN pkts 0 out FECN pkts 0

out BECN pkts 0 in DE pkts 2774 out DE pkts 0

out bcast pkts 122 out bcast bytes 40129

5 minute input rate 57000 bits/sec, 2 packets/sec

5 minute output rate 4000 bits/sec, 2 packets/sec

pvc create time 02:01:04, last time pvc status changed 02:01:04

cir 64000 bc 8000 be 0 byte limit 1000 interval 125

mincir 32000 byte increment 1000 Adaptive Shaping none

pkts 3144 bytes 599039 pkts delayed 24 bytes delayed 8574

shaping inactive

traffic shaping drops 0

Queueing strategy: fifo

Output queue 0/40, 0 drop, 24 dequeued

R1#

The above output shows that traffic can be marked DE along the path and still reach the destination, because there was no congestion in the FR cloud.

2-b) Test5:

  • Critical application: traffic1 – 32kbps – destination (192.168.201.10/udp 5001).
  • Junk traffic: traffic2 – 64kbps – destination (192.168.201.10/udp 5003).
  • Policing configured on R2.

FRS:

FRS_QOS#sh frame-relay pvc 201

 

PVC Statistics for interface Serial1/0 (Frame Relay DCE)

 

DLCI = 201, DLCI USAGE = SWITCHED, PVC STATUS = ACTIVE, INTERFACE = Serial1/0

 

input pkts 4 output pkts 547 in bytes 2184

out bytes 817932 dropped pkts 111 in pkts dropped 0

out pkts dropped 111 out bytes dropped 166722

late-dropped out pkts 111 late-dropped out bytes 166722

in FECN pkts 0 in BECN pkts 0 out FECN pkts 0

out BECN pkts 0 in DE pkts 0 out DE pkts 420

out bcast pkts 0 out bcast bytes 0

30 second input rate 0 bits/sec, 0 packets/sec


30 second output rate 62000 bits/sec, 5 packets/sec

switched pkts 4

Detailed packet drop counters:

no out intf 0 out intf down 0 no out PVC 0

in PVC down 0 out PVC down 0 pkt too big 0

shaping Q full 111 pkt above DE 0 policing drop 0

pvc create time 01:46:49, last time pvc status changed 01:45:28


Congestion DE threshold 28

cir 64000 bc 8000 be 0 byte limit 1000 interval 125

mincir 32000 byte increment 1000 Adaptive Shaping none

pkts 553 bytes 826944 pkts delayed 553 bytes delayed 826944


shaping active

traffic shaping drops 0

Queueing strategy: fifo


Output queue 28/40, 111 drop, 553 dequeued

FRS_QOS#

Traffic2 above the police rate of 16kbps is marked “DE” at R2. Because of traffic2’s high rate of 64kbps, the FRS output queue becomes congested and reaches the “DE” threshold, so the FRS starts dropping traffic2 packets marked with “DE”.

According to figure10 and figure11, traffic1 is still received at the sender rate, but traffic2 is not, because it is marked with “DE” and therefore dropped by the FRS.

Figure10: traffic1 at the receiver station

Figure11: policed traffic2 at the receiver station

CONCLUSION:

The Discard Eligible marking and dropping mechanism certainly addresses applications’ sensitivity to drops, but not the delay and jitter caused by queuing; for that, a separate queuing mechanism is needed for delay/jitter-sensitive applications.

IPv6 QoS


If you already grasp QoS concepts for IPv4, IPv6 QoS is a piece of cake!

As with IPv4, IPv6 uses MQC (Modular QoS CLI) to configure Diffserv (Differentiated Services) QoS.

IPv6 QoS is very similar to IPv4 QoS except for some details:

  • The first version of NBAR doesn't support IPv6.
  • cRTP (Compressed RTP) is not supported.
  • There is no way to match RTP directly.
  • CAR (Committed Access Rate) was already replaced by class-based policing in IPv4, and there is no need to keep supporting it in IPv6.
  • PQ/CQ are replaced by MQC (Modular QoS CLI).
  • IPv6 supports only named ACLs.
  • Layer 2 (802.1q) matching commands work only with CEF-switched traffic, not with process-switched or router-originated traffic.

The following is the topology used to deploy IPv6 QoS; there is no IPv4 addressing scheme. The serial link between the two routers is the bottleneck of the network, where QoS is needed.

Figure1: Topology


Classification & Marking

The first and the most crucial step in deploying QoS is classification of traffic.

In this step you need to:

  • Identify the various applications and protocols running on your network.
  • Understand each application's behaviour with respect to the available network resources.
  • Identify the mission-critical and non-critical applications.
  • Categorize the applications and protocols into different classes of service accordingly.

The classification is based on packet native classifiers like:

  • source/destination IPv6 addresses, IP protocol and source/destination ports.
  • precedence and DSCP.
  • source/destination MAC addresses.
  • TCP/IP header parameters (packet length…).
  • IPv6-specific classifiers (not currently used).
  • the IPv6 traffic class field, used the same way as the IPv4 ToS field.

IPv4 can take advantage of NBAR, which is very useful to automatically recognize applications and provide statistics about bandwidth utilization. Without NBAR, you need to determine manually which classifiers define the application you want QoS to handle. Unfortunately, NBAR doesn't support IPv6; NBAR2 does.

For IPv6 traffic, you can use other tools, such as NetFlow or any traffic analyzer, for more granular inspection, then build IPv6 ACLs matching the relevant classifiers with the relevant values.

Table1: Classification ACLs

ACL name   Action   Protocol   Source          Src port        Destination   Dst port
FTP        permit   tcp        2001:b:b:b::b   ftp (21)        any           –
           permit   tcp        2001:b:b:b::b   ftp-data (20)   any           –
UStream    permit   udp        any             –               any           1234

Table1 summarizes the applications used in the lab for demonstration purposes.

Table2: Application classification and marking

Application          Bandwidth   Direction         Traffic classifiers                          Class          Marker
unicast streaming    700 kbps    HostB -> HostA    protocol = IPv6, dest IPv6 = 2001:a:a:a::a,  MatchUStream   dscp=ef
                                                   dest port 1234
FTP download         30 kbps     HostB -> HostA    protocol = IPv6, src port 21 (control),      MatchFTP       dscp=af41
                                                   src port 20 (data)
scavenger appli.     150 kbps    HostB -> HostA    src port …                                   –              –
(video streaming)
Generally, dscp “ef” is reserved for VoIP, which requires the most stringent QoS; in this lab we use this marking just to check at the destination host (hostA) whether the classification works.
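For reference, the DSCP code points used here map to these 6-bit values (quick check in plain Python; the helper name is mine):

```python
def af(x, y):
    """Assured Forwarding code point value: AFxy = 8*x + 2*y."""
    return 8 * x + 2 * y

EF = 46                 # Expedited Forwarding, used for the streaming class

assert af(4, 1) == 34   # af41, used for the FTP class
assert EF == 46         # ef
assert 0 == 0           # "set dscp default" writes 0 (best effort)
```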

The end-to-end model used to test IPv6 QoS is depicted in Figure2.

Figure2: End-to-End QoS model


Congestion Management & avoidance

For the purpose of the lab, the unicast streaming application is given the highest priority; it is assumed to have stringent bandwidth, latency, delay and jitter requirements. LLQ (Low Latency Queuing) is the most appropriate queuing mechanism for such applications.

The FTP traffic is considered critical, with a minimum of 30kbps of bandwidth guaranteed.

Any other traffic (class-default) is considered “scavenger” and has no privilege during congestion.

Each application is allocated the bandwidth it needs to perform correctly.

Table3: Classes and bandwidth allocation

Class           Bandwidth reserved   Queue   DSCP      Priority
MatchUStream    700 kbps             LLQ     ef        High
MatchFTP        30 kbps              CBWFQ   af41      Medium
class-default   no guarantee         WFQ     default   Low

policy-map QoS_Policy
 class MatchUStream
  set dscp ef
  priority 700
 class MatchFTP
  set dscp af41
  bandwidth 30
 class class-default
  fair-queue
  set dscp default

Figure3 and 4 show a summary of general QoS mechanisms and queuing system types.

Figure3: Software and Hardware queuing systems


Figure4: QoS mechanisms


RouterB:

ipv6 access-list FTP
 permit tcp host 2001:B:B:B::B eq ftp any
 permit tcp host 2001:B:B:B::B eq ftp-data any
!
ipv6 access-list UStream
 sequence 20 permit udp any any eq 1234
!
class-map match-all MatchFTP
  match protocol ipv6
  match access-group name FTP
class-map match-all MatchUStream
  match protocol ipv6
  match access-group name UStream

Monitoring:

RouterB(config-pmap-c)#do show policy-map int s1/0
 Serial1/0
  Service-policy output: QoS_Policy

    Class-map: MatchUStream (match-all)
      23625 packets, 32602500 bytes
      30 second offered rate 538000 bps, drop rate 0 bps
      Match: protocol ipv6
      Match: access-group name UStream
      QoS Set
        dscp ef
          Packets marked 23624
      Queueing
        Strict Priority
        Output Queue: Conversation 264
        Bandwidth 700 (kbps) Burst 17500 (Bytes)
        (pkts matched/bytes matched) 1455/2007900
        (total drops/bytes drops) 1/1380

    Class-map: MatchFTP (match-all)
      5886 packets, 8192512 bytes
      30 second offered rate 135000 bps, drop rate 0 bps
      Match: protocol ipv6
      Match: access-group name FTP
      QoS Set
        dscp af41
          Packets marked 5929
      Queueing
        Output Queue: Conversation 265
        Bandwidth 30 (kbps) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 3486/4784640
        (depth/total drops/no-buffer drops) 0/0/0

    Class-map: class-default (match-any)
      105 packets, 8292 bytes
      30 second offered rate 0 bps, drop rate 0 bps
      Match: any
      Queueing
        Flow Based Fair Queueing
        Maximum Number of Hashed Queues 256
        (total queued/total drops/no-buffer drops) 0/0/0
      QoS Set
        dscp default
          Packets marked 50
RouterB(config-pmap-c)#

RouterB(config-pmap-c)#do sh int s1/0
Serial1/0 is up, line protocol is up
  Hardware is M4T
  MTU 1500 bytes, BW 1024 Kbit, DLY 20000 usec,
     reliability 255/255, txload 165/255, rxload 1/255
  Encapsulation HDLC, crc 16, loopback not set
  Keepalive set (10 sec)
  Restart-Delay is 0 secs
  Last input 00:00:07, output 00:00:00, output hang never
  Last clearing of "show interface" counters 00:08:49
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 21
  Queueing strategy: weighted fair
  Output queue: 0/1000/64/1 (size/max total/threshold/drops)
     Conversations  0/3/256 (active/max active/max total)
     Reserved Conversations 1/1 (allocated/max allocated)
     Available Bandwidth 38 kilobits/sec
  30 second input rate 4000 bits/sec, 8 packets/sec
  30 second output rate 666000 bits/sec, 59 packets/sec
     4066 packets input, 261676 bytes, 0 no buffer
     Received 61 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     32082 packets output, 44217146 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
     0 carrier transitions     DCD=up  DSR=up  DTR=up  RTS=up  CTS=up
RouterB(config-pmap-c)#

Shaping & Policing

Shaping and policing work exactly as in IPv4.

Policing:

  • Can be applied inbound or outbound.
  • Drops non-conforming traffic.
  • More efficient in terms of memory utilization.
  • Drops packets more often, hence more TCP retransmissions.

Shaping:

  • Applied only outbound.
  • Queues excess traffic.
  • Less efficient because of the additional queuing, but drops less (only when congestion occurs).
  • Causes variable delay (jitter) and increases buffer utilization, hence more delay.
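Both mechanisms meter traffic with a token bucket; they differ only in what happens when the bucket runs out of tokens. Here is a minimal single-rate sketch in Python (illustrative only, not IOS's exact algorithm; the 100 kbps rate and 2000-byte burst mirror the policing example used later in this lab):

```python
class TokenBucket:
    """Single-rate token bucket (illustrative sketch, not IOS's exact algorithm)."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # refill rate in bytes/sec
        self.burst = burst_bytes          # bucket depth in bytes
        self.tokens = float(burst_bytes)  # bucket starts full
        self.last = 0.0

    def _refill(self, now):
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def police(self, size, now):
        """Policer: transmit if enough tokens, otherwise drop the packet."""
        self._refill(now)
        if self.tokens >= size:
            self.tokens -= size
            return "send"
        return "drop"

    def shape_delay(self, size, now):
        """Shaper: never drop; return how long the packet waits for tokens."""
        self._refill(now)
        if self.tokens >= size:
            self.tokens -= size
            return 0.0
        wait = (size - self.tokens) / self.rate  # time to earn the missing bytes
        self.last = now + wait
        self.tokens = 0.0
        return wait

policer = TokenBucket(100_000, 2_000)
print(policer.police(1500, 0.0))  # send (fits the 2000-byte burst)
print(policer.police(1500, 0.0))  # drop (only 500 tokens left)

shaper = TokenBucket(100_000, 2_000)
print(shaper.shape_delay(1500, 0.0))  # 0.0 (fits the burst)
print(shaper.shape_delay(1500, 0.0))  # delayed until 1000 more bytes accrue
```

The second back-to-back packet is dropped by the policer but merely delayed by the shaper, which is exactly the behavioral difference visible in the FTP graphs below.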

Figure5 shows the token bucket mechanism used in shaping and policing.

Figure5: shaping and policing


Here is how the FTP traffic graph looks before any shaping or policing:

Figure6: FTP before shaping and policing


The following figure shows three different behaviors based on three configurations of shaping and policing:

Figure7: FTP with shaping and policing


The first part of the graph corresponds to FTP traffic with just a guaranteed bandwidth of 30 kbps configured.

policy-map QoS_Policy
  class MatchFTP
   bandwidth 30

In the second part of the graph, an upper limit of 100 kbps is set on the FTP class using policing. Note that this results in frequent TCP global synchronization, a TCP behavior triggered when congestion occurs somewhere on the path to the destination: as long as the congestion persists, drops keep forcing the source to cut its sending rate and ramp up again, hence the shape of the graph (repeated short bursts from the bottom to the maximum).

policy-map QoS_Policy
  class MatchFTP
   bandwidth 30
   police 100000

The third part of the graph represents the result of using shaping instead of policing: a better use of the bandwidth. Instead of dropping the TCP traffic and causing global synchronization, the excess packets are queued for a certain amount of time and then sent, hence the higher average bandwidth utilization.

policy-map QoS_Policy
  class MatchFTP
   bandwidth 30
   shape average 100000

RouterB#sh policy-map int s1/0

    Class-map: MatchFTP (match-all)
      32045 packets, 47038153 bytes
      30 second offered rate 99000 bps, drop rate 0 bps
      Match: protocol ipv6
      Match: access-group name FTP
      QoS Set
        dscp af41
          Packets marked 32074
      Queueing
        Output Queue: Conversation 265
        Bandwidth 30 (kbps) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 17595/26364666
        (depth/total drops/no-buffer drops) 0/0/0
      Traffic Shaping
           Target/Average   Byte   Sustain   Excess    Interval  Increment
             Rate           Limit  bits/int  bits/int  (ms)      (bytes)
           100000/100000    2000   8000      8000      80        1000
        Adapt  Queue     Packets   Bytes     Packets   Bytes     Shaping
        Active Depth                         Delayed   Delayed   Active
        -      11        17134     25746208  17096     25689056  yes
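The shaping parameters reported by IOS follow directly from the configured rate: the committed interval is Tc = Bc/CIR, and the per-interval increment is Bc expressed in bytes. A quick check of the numbers:

```python
cir = 100_000   # shape average 100000 (bits/sec)
bc = 8_000      # Sustain bits/int from the show output

tc_ms = bc / cir * 1000   # committed interval Tc in milliseconds
increment = bc // 8       # bytes released per interval

print(tc_ms, increment)   # 80.0 1000 -> matches Interval (ms) 80, Increment (bytes) 1000
```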

Conclusion

Make sure you first understand IPv4 QoS, especially the difference between shaping and policing and their impact on your own applications.

 
