IPv6 QoS
June 20, 2008
If you already grasp QoS concepts for IPv4, IPv6 QoS is a piece of cake!
As with IPv4, IPv6 uses the MQC (Modular QoS CLI) to configure DiffServ (Differentiated Services) QoS.
IPv6 QoS is very similar to IPv4 QoS except for some details:
- NBAR, in its first version, doesn’t support IPv6.
- cRTP (Compressed RTP) is not supported.
- There is no way to match RTP directly.
- CAR (Committed Access Rate) was already replaced by CB-Policing in IPv4, so there is no need to keep supporting it in IPv6.
- PQ/CQ are replaced by MQC (Modular QoS CLI).
- IPv6 supports only named ACLs.
- Layer 2 (802.1Q) commands work only with CEF-switched traffic, not with process-switched or router-originated traffic.
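To illustrate the named-ACL restriction, here is a minimal sketch of IPv6 classification under MQC; the ACL and class-map names (`WEB-V6`, `MatchWebV6`) are hypothetical examples, not part of the lab:

```
! IPv6 ACLs have no numbered form; a name is mandatory
ipv6 access-list WEB-V6
 permit tcp any any eq www
!
class-map match-all MatchWebV6
 match protocol ipv6
 match access-group name WEB-V6
```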
The following is the topology used to deploy IPv6 QoS; no IPv4 addressing scheme is used. The serial link between the two routers is the network bottleneck, where QoS is needed.
Figure1: Topology
Classification & Marking
The first and the most crucial step in deploying QoS is classification of traffic.
In this step you need to:
- Identify the various applications and protocols running on your network.
- Understand each application’s behavior with respect to the available network resources.
- Identify the mission-critical and non-critical applications.
- Categorize the applications and protocols into different classes of service accordingly.
The classification is based on packet native classifiers like:
- Source/destination IPv6 addresses, IP protocol, and source/destination ports.
- Precedence and DSCP values.
- Source/destination MAC addresses.
- TCP/IP header parameters (packet length, etc.).
- IPv6-specific classifiers (not currently used).
- The IPv6 traffic class field, used in the same way as the IPv4 ToS field.
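As a sketch of matching on pre-marked values or header parameters rather than ACLs (the class names and values here are illustrative only, not part of the lab):

```
! Match traffic already marked by an upstream device
class-map match-any MatchPremarked
 match dscp ef
 match precedence 5
!
! Match small IPv6 packets on a header parameter (packet length)
class-map match-all MatchSmallV6
 match protocol ipv6
 match packet length max 200
```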
IPv4 can take advantage of NBAR, which is very useful for automatically recognizing applications and providing statistics about bandwidth utilization. Without NBAR, you need to manually determine which classifiers define the application you want QoS to handle. Unfortunately, the first version of NBAR doesn’t support IPv6 (NBAR2 does).
For IPv6 traffic, you can use other tools such as NetFlow or any traffic-analyzer software for more granular inspection, then build IPv6 ACLs matching the relevant classifiers with the relevant values.
Table1: IPv6 ACL definitions

| ACL name | Permit/Deny | Protocol | Source IP | Source mask | Src port | Dest IP | Dest mask | Dst port |
|---|---|---|---|---|---|---|---|---|
| FTP | permit | tcp | 2001:b:b:b::b | – | ftp (21) | any | – | – |
| FTP | permit | tcp | 2001:b:b:b::b | – | ftp-data (20) | any | – | – |
| UStream | permit | udp | any | – | – | any | – | 1234 |
Table1 summarizes the applications used in the lab for demonstration purposes.
Table2: Application classification and marking

| Application | Bandwidth allocated | Flow direction | Traffic classifiers | Class | Markers |
|---|---|---|---|---|---|
| Unicast streaming | 700 kbps | From HostB to HostA | protocol = IPv6, dest IPv6 = 2001:a:a:a::a, dest port = 1234 | MatchUStream | dscp=ef |
| FTP download | 30 kbps | From HostB to HostA | protocol = IPv6, src port 21 (control), src port 20 (data) | MatchFTP | dscp=af41 |
| Scavenger video streaming | 150 kbps | From HostB to HostA | src port … | – | – |
Generally, DSCP “ef” is reserved for VoIP, which requires the most stringent QoS; in this lab we use the DSCP marking just to check at the destination host (HostA) whether the classification works.
The end-to-end model used to test IPv6 QoS is depicted in Figure2.
Figure2: End-to-End QoS model
Congestion Management & Avoidance
For the purpose of the lab, the unicast streaming application is given the highest priority and is assumed to have stringent bandwidth, delay, and jitter requirements; LLQ (Low Latency Queuing) is the most appropriate queuing mechanism for such applications.
The FTP traffic is considered critical, with a minimum of 30 kbps of bandwidth guaranteed.
Any other traffic (the default class) is considered “scavenger” and will have no privilege during congestion.
Each application is allocated the bandwidth it needs to perform correctly.
Table3: Classes and bandwidth allocation
| Class | Bandwidth reserved | Queue | DSCP | Priority |
|---|---|---|---|---|
| MatchUStream | 700 kbps | LLQ | ef | High |
| MatchFTP | 30 kbps | CBWFQ | af41 | Medium |
| class-default | no guarantee | WFQ | default (0) | Low |
```
policy-map QoS_Policy
 class MatchUStream
  set dscp ef
  priority 700
 class MatchFTP
  set dscp af41
  bandwidth 30
 class class-default
  fair-queue
  set dscp default
```
Figure3 and 4 show a summary of general QoS mechanisms and queuing system types.
Figure3: Software and Hardware queuing systems
Figure4: QoS mechanisms
RouterB:
```
ipv6 access-list FTP
 permit tcp host 2001:B:B:B::B eq ftp any
 permit tcp host 2001:B:B:B::B eq ftp-data any
!
ipv6 access-list UStream
 sequence 20 permit udp any any eq 1234
!
class-map match-all MatchFTP
 match protocol ipv6
 match access-group name FTP
class-map match-all MatchUStream
 match protocol ipv6
 match access-group name UStream
```
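The policy also has to be attached outbound to the bottleneck link; that step is not shown in the configuration above, but the `Service-policy output: QoS_Policy` line in the monitoring output implies something like:

```
interface Serial1/0
 service-policy output QoS_Policy
```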
Monitoring:
```
RouterB(config-pmap-c)#do show policy-map int s1/0
 Serial1/0

  Service-policy output: QoS_Policy

    Class-map: MatchUStream (match-all)
      23625 packets, 32602500 bytes
      30 second offered rate 538000 bps, drop rate 0 bps
      Match: protocol ipv6
      Match: access-group name UStream
      QoS Set
        dscp ef
          Packets marked 23624
      Queueing
        Strict Priority
        Output Queue: Conversation 264
        Bandwidth 700 (kbps) Burst 17500 (Bytes)
        (pkts matched/bytes matched) 1455/2007900
        (total drops/bytes drops) 1/1380

    Class-map: MatchFTP (match-all)
      5886 packets, 8192512 bytes
      30 second offered rate 135000 bps, drop rate 0 bps
      Match: protocol ipv6
      Match: access-group name FTP
      QoS Set
        dscp af41
          Packets marked 5929
      Queueing
        Output Queue: Conversation 265
        Bandwidth 30 (kbps) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 3486/4784640
        (depth/total drops/no-buffer drops) 0/0/0

    Class-map: class-default (match-any)
      105 packets, 8292 bytes
      30 second offered rate 0 bps, drop rate 0 bps
      Match: any
      Queueing
        Flow Based Fair Queueing
        Maximum Number of Hashed Queues 256
        (total queued/total drops/no-buffer drops) 0/0/0
      QoS Set
        dscp default
          Packets marked 50
RouterB(config-pmap-c)#
```
```
RouterB(config-pmap-c)#do sh int s1/0
Serial1/0 is up, line protocol is up
  Hardware is M4T
  MTU 1500 bytes, BW 1024 Kbit, DLY 20000 usec,
     reliability 255/255, txload 165/255, rxload 1/255
  Encapsulation HDLC, crc 16, loopback not set
  Keepalive set (10 sec)
  Restart-Delay is 0 secs
  Last input 00:00:07, output 00:00:00, output hang never
  Last clearing of "show interface" counters 00:08:49
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 21
  Queueing strategy: weighted fair
  Output queue: 0/1000/64/1 (size/max total/threshold/drops)
     Conversations  0/3/256 (active/max active/max total)
     Reserved Conversations 1/1 (allocated/max allocated)
     Available Bandwidth 38 kilobits/sec
  30 second input rate 4000 bits/sec, 8 packets/sec
  30 second output rate 666000 bits/sec, 59 packets/sec
     4066 packets input, 261676 bytes, 0 no buffer
     Received 61 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     32082 packets output, 44217146 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
     0 carrier transitions
     DCD=up  DSR=up  DTR=up  RTS=up  CTS=up
RouterB(config-pmap-c)#
```
Shaping & Policing
Shaping and policing work exactly as in IPv4.
Policing:
- Can be applied inbound and outbound.
- Drops non-conforming traffic.
- More efficient in terms of memory utilization.
- Drops packets more often, therefore causes more TCP retransmissions.
Shaping:
- Applied only outbound.
- Queues excess traffic.
- Less efficient because of the additional queuing, but drops less (only when congestion occurs).
- Causes variable delay (jitter) and increases buffer utilization, therefore more delay.
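As a sketch of the policing behavior described above (the policy name and the 128 kbps rate are arbitrary examples, not part of the lab):

```
policy-map PoliceExample
 class class-default
  ! transmit conforming traffic, drop the excess
  police 128000 conform-action transmit exceed-action drop
```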
Figure5 shows the mechanism of token bucket used in shaping and policing.
Figure5: shaping and policing
Here is how the FTP traffic graph looks before any shaping or policing:
Figure6: FTP before shaping and policing
The following figure shows different behaviors based on three configurations of shaping and policing:
Figure7: FTP with shaping and policing
The first part of the graph corresponds to FTP traffic with just a configured guaranteed bandwidth of 30kbps.
```
policy-map QoS_Policy
 class MatchFTP
  bandwidth 30
```
In the second part of the graph, an upper limit of 100 kbps is set on the FTP class using policing. Note that this results in frequent TCP global synchronization, a TCP behavior triggered when congestion occurs somewhere on the path to the destination: as long as the congestion persists, the source repeatedly receives signals to cut its sending rate, ramps back up from near zero, and is cut back again, hence the shape of the graph (repeated short bursts from the bottom to the maximum).
```
policy-map QoS_Policy
 class MatchFTP
  bandwidth 30
  police 100000
```
The third part of the graph shows the result of using shaping instead of policing: a better use of the available bandwidth. Instead of dropping the TCP traffic and causing global synchronization, the excess packets are queued for a certain amount of time and then sent, hence the higher average bandwidth utilization.
```
policy-map QoS_Policy
 class MatchFTP
  bandwidth 30
  shape average 100000
```
```
RouterB#sh policy-map int s1/0
…
    Class-map: MatchFTP (match-all)
      32045 packets, 47038153 bytes
      30 second offered rate 99000 bps, drop rate 0 bps
      Match: protocol ipv6
      Match: access-group name FTP
      QoS Set
        dscp af41
          Packets marked 32074
      Queueing
        Output Queue: Conversation 265
        Bandwidth 30 (kbps) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 17595/26364666
        (depth/total drops/no-buffer drops) 0/0/0
      Traffic Shaping
           Target/Average   Byte   Sustain   Excess    Interval  Increment
             Rate           Limit  bits/int  bits/int  (ms)      (bytes)
          100000/100000     2000   8000      8000      80        1000

        Adapt  Queue     Packets   Bytes     Packets   Bytes     Shaping
        Active Depth                         Delayed   Delayed   Active
        -      11        17134     25746208  17096     25689056  yes
…
```
Conclusion
Make sure you first understand IPv4 QoS, especially the difference between shaping and policing and their impact on your own applications.