Deploying F5 BIG-IP LTM VE within GNS3 (part-1)


One of the advantages of deploying VMware (or VirtualBox) machines inside GNS3 is the rich networking environment it provides. No need to hassle over interface types: vmnet or private? Shared or ad-hoc?

In GNS3 it is as simple and intuitive as choosing a node interface and connecting it to any other node interface.

In this lab, we are testing a basic F5 BIG-IP LTM VE deployment within GNS3. The only virtual machine used in this lab is the F5 BIG-IP; all other devices are Docker containers:

  • nginx Docker containers for the internal web servers.
  • An ab (Apache Benchmark) Docker container as the client used for performance testing.
  • gns3/webterm containers used as Firefox browsers for client testing and F5 web management.

 

Outline:

  1. Docker image import
  2. F5 Big-IP VE installation and activation
  3. Building the topology
  4. Setting F5 Big-IP interfaces
  5. Connectivity check between devices
  6. Load balancing configuration
  7. Generating client http queries
  8. Monitoring Load balancing

Devices used:

  • gns3/openvswitch (Docker image)
  • gns3/webterm (Docker image)
  • ajnouri/nginx (Docker image)
  • F5 BIG-IP LTM VE (VMware virtual machine)

Environment:

  • Debian host GNU/Linux 8.5 (jessie)
  • GNS3 version 1.5.2 on Linux (64-bit)

System requirements:

  • F5 Big IP VE requires 2GB of RAM (recommended >= 8GB)
  • VT-x / AMD-V support


 

1. Docker image import

Create a new Docker template in GNS3: Edit > Preferences > Docker > Docker containers, then “New”.

Choose the “New image” option and type the Docker image in the <account>/<repo> format listed in the “Devices used” section, then choose a name (without the slash “/”).

Note:

By default, GNS3 derives a name in the same format as the Docker registry (<account>/<repo>), which can cause an error in some earlier versions. In the latest GNS3 versions, the slash “/” is removed from the derived name.


Installing gns3/openvswitch:

Set the number of interfaces to eight and accept default parameters with “next” until “finish”.

– Repeat the same procedure for gns3/webterm.

Choose a name for the image (without the slash “/”).

Choose vnc as the console type to allow Firefox (GUI) browsing.

And keep the remaining default parameters.

– Repeat the same procedure for the image ajnouri/nginx.

Create a new image with the name ajnouri/nginx.

Name it as you like.

And keep the remaining default parameters.

2. F5 Big-IP VE installation and activation

 

– From the F5 site, get the file BIG-11.3.0.0-scsi.ova: https://www.f5.com/trial/big-ip-ltm-virtual-edition.php

You’ll have to request a registration key for the trial version, which you’ll receive by email.

Open the .ova file in VMware Workstation.

– To register the trial version, bridge the first interface to your Internet-connected network.

– Start the VM and log in to the console with root/default.

– Type “config” to access the text user interface.

– Choose “No” to configure a static IP address, a mask, and a default gateway in the same subnet as the bridged network, or “Yes” if you have a DHCP server and want a dynamic IP.

– Check the interface IP.

– Ping an Internet host (ex: gns3.com) to verify connectivity and name resolution.

– Browse to the BIG-IP management IP and accept the certificate.

Use admin/admin as the login credentials.

Put the key you received by email in the Base Registration Key field and press “Next”; wait a couple of seconds for activation and you are good to go.

Now you can shut down the F5 VM.

 

3. Building the topology

 

Importing the F5 VE virtual machine into GNS3

From GNS3 Preferences, import a new VMware virtual machine.

Choose the BIG-IP VM we just installed and activated.

Make sure to set a minimum of 3 adapters and to allow GNS3 to use any of the VM interfaces.

Topology

Now we can build our topology using BIG-IP VM and the Docker images installed.

Below is an example topology in which we use the F5 VM and a few containers.

Internal:

– 3 nginx containers

– 1 Openvswitch

Management:

– GUI browser webterm container

– 1 Openvswitch

– Cloud mapped to host interface tap0

External:

– Apache Benchmark (ab) container

– GUI browser webterm container

Notice that the BIG-IP VM interface e0, the one previously bridged to the host network, is now connected to a browser container for management.

I attached the host interface “tap0” to the management switch because, for some reason, ARP doesn’t work on that segment without it.

 

Address configuration:

– Assign each of the nginx containers an IP in a subnet of your choice (ex: 192.168.10.1/24 for ajnouri/nginx-1).

In the same manner:

192.168.10.2/24 for ajnouri/nginx-2

192.168.10.3/24 for ajnouri/nginx-3

 

On all three nginx containers, start the nginx and PHP servers:

service php5-fpm start

service nginx start
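Before wiring the containers into a pool, it can help to confirm that each backend actually answers over HTTP. Here is a minimal Python 3 sketch (a convenience added to this write-up, not part of the original lab); it assumes ajnouri/nginx-1 took 192.168.10.1 per the addressing plan above and that the test.php page used later in this lab is present:

import urllib.request

# Poll each nginx backend once and report its HTTP status.
# The IPs follow the addressing plan above (.1 for nginx-1 is an assumption).
BACKENDS = ["192.168.10.1", "192.168.10.2", "192.168.10.3"]

for ip in BACKENDS:
    url = "http://%s/test.php" % ip
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            print(ip, "->", resp.status)
    except OSError as exc:
        print(ip, "-> FAILED:", exc)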

 

– Assign an IP to the management browser in the same subnet as the BIG-IP management IP:

192.168.0.222/24 for gns3/webterm-2

– Assign addresses, a default gateway, and a DNS server to the ab and webterm-1 containers.

And make sure both client devices resolve the host ajnouri.local to the BIG-IP address 192.168.20.254:

echo "192.168.20.254   ajnouri.local" >> /etc/hosts

– The Openvswitch containers don’t need to be configured; each acts like a single VLAN.

– Start the topology.

 

4. Setting F5 Big-IP interfaces

 

To manage the load balancer from webterm-2, open a console to the container; this opens Firefox from inside the container.

Browse to the VM management IP https://192.168.0.151, add an exception for the certificate, and log in with the F5 BIG-IP default credentials admin/admin.

Go through the initial configuration steps

– You will have to set the hostname (ex: ajnouri.local) and change the root and admin account passwords.

You will be logged out for the password changes to take effect; log back in.

– For the purpose of this lab, no redundancy and no high availability are configured.

– Now you will have to configure internal (real servers) and external (client side) vlans and associated interfaces and self IPs.

(Self IPs are the equivalent of VLAN interface IP in Cisco switching)

Internal VLAN (connected to web servers):

External VLAN (facing clients):

5. Connectivity check between devices

 

Now make sure you have successful connectivity from each container to the corresponding Big-IP interface.

Ex: from the ab container

Ex: from the nginx-1 server container

The interface connected to your host network will get its IP parameters (address, gateway, and DNS) from your DHCP server.

 

6. Load balancing configuration

 

Back to the browser webterm-2

For BIG-IP to load balance http requests from client to the servers, we need to configure:

  • Virtual server: the single entity visible to clients.
  • Pool: associated with the virtual server; contains the list of real web servers to balance between.
  • Algorithm: the method used to load balance between members of the pool.

– Create a pool of web servers “Pool0” with “Round Robin” as the algorithm and http as the protocol used to monitor the members.

– Associate the virtual server “VServer0” with the pool “Pool0”.

Check the network map to see that everything is configured correctly and that monitoring shows everything OK (green).

From the client container webterm-1, you can start a Firefox browser (console to the container) and test the server name “ajnouri.local”.

If everything is OK, you’ll see the PHP page showing the real server IP used, the client IP, and the DNS name used by the client.

Every time you refresh the page, you’ll see a different server IP used.
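To watch the rotation without refreshing by hand, here is a rough Python 3 sketch (an illustration added here, not from the original lab). It fetches the page a few times and prints the first line of each reply, assuming the machine running it resolves ajnouri.local as configured earlier and that test.php reports the serving server’s IP on its first line:

import urllib.request

# Fetch the virtual server several times; with Round Robin, successive
# replies should come from different real servers.
for _ in range(6):
    with urllib.request.urlopen("http://ajnouri.local/test.php", timeout=3) as resp:
        body = resp.read().decode("utf-8", "replace")
    print(body.splitlines()[0] if body else "(empty reply)")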

 

7. Performance testing

 

With the Apache Benchmark container ajnouri/ab, we can generate client requests to the load balancer virtual server by its hostname (ajnouri.local).

Let’s open an aux console to the ajnouri/ab container and generate 50,000 requests with 200 concurrent connections to the URL ajnouri.local/test.php:

ab -n 50000 -c 200 ajnouri.local/test.php

 

8. Monitoring load balancing

 

Monitoring the load balancer performance shows a peak of connections corresponding to the Apache Benchmark generated requests.


In the upcoming part 2, the three web server containers are replaced with a single container in which we can spawn as many servers as we want (Docker-in-Docker). We also test a custom Python client script container that generates HTTP traffic from spoofed IP addresses, as opposed to Apache Benchmark, which generates traffic from a single source IP.


IOS server load balancing with mininet server farm


The idea is to play with the IOS load balancing mechanism using a large number of “real” servers (50 of them), and to observe the difference in behavior between load balancing algorithms.

Due to resource scarcity in the lab environment, I use mininet to emulate “real” servers.

I will stick to the general definition for load balancing:

A load balancer is a device that acts as a reverse proxy and distributes network or application traffic across a number of servers. Load balancers are used to increase capacity (concurrent users) and reliability of applications.

The publicly announced IP address of the server is called the Virtual IP (VIP). Behind the scenes, the services are provided not by a single server but by a cluster of “real” servers, with their real IPs (RIPs) hidden from the outside world.

The Load Balancer, IOS SLB in our case, distributes user connections, sent to the VIP, to the real servers according to the load balancing algorithm.

Figure1: Generic load balancing

 

Figure2: High-Level-Design Network topology

 

Load balancing algorithms:

The tested load balancing algorithms are:

  • Weighted Round Robin (with equal weights for all real servers): new connections to the virtual server are directed to the real servers equally, in a circular fashion (default weight = 8 for all servers).
  • Weighted Round Robin (with unequal weights): new connections to the virtual server are directed to the real servers proportionally to their weights.
  • Weighted Least Connections: new connections to the virtual server are directed to the real server with the fewest active connections (a toy model of both predictors follows this list).
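To make the two predictors concrete, here is a toy Python 3 model (an illustration, not IOS code). The round-robin expansion below is naive, IOS interleaves the assignments rather than emitting them in blocks, but the proportions per cycle are the same:

import itertools

servers = {"10.0.0.1": 8, "10.0.0.2": 8, "10.0.0.3": 16}  # RIP -> weight

def weighted_round_robin(servers):
    # Emit each server <weight> times per cycle, in a circular fashion.
    expanded = [ip for ip, weight in servers.items() for _ in range(weight)]
    return itertools.cycle(expanded)

def least_conns(active):
    # Pick the server with the fewest active connections.
    return min(active, key=active.get)

wrr = weighted_round_robin(servers)
print([next(wrr) for _ in range(10)])
print(least_conns({"10.0.0.1": 12, "10.0.0.2": 3, "10.0.0.3": 7}))  # 10.0.0.2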

 

Session redirection modes:

Dispatched: the NAT Virtual IP is configured on ALL real servers as a loopback or secondary address. Real servers are layer-2 adjacent to the SLB, which redirects traffic to them at the MAC layer.
Directed: the VIP can be unknown to the real servers. No FTP/FW support. Supports server NAT for ESP/GRE virtual servers. Uses NAT to translate VIP => RIP.
Server NAT: the VIP is translated to the RIP and vice-versa. Real servers are not required to be directly connected.
Client NAT: used with multiple SLBs. Replaces the client IP with one of the SLB IPs to guarantee that the returning traffic is handled by the same SLB.
Static NAT: use static NAT for traffic from real servers responding to clients; real servers (ex: on the same Ethernet segment) use their own IPs.

The lab deploys Directed session redirection with server NAT.

 

IOS SLB configuration:

The configuration of load balancing in Cisco IOS is pretty straightforward:

Server farm (required): ip slb serverfarm <serverfarm-name>
Load-balancing algorithm (optional): predictor [roundrobin | leastconns]
Real server (required): real <ip-address>
Enabling the real server for service (required): inservice
Virtual server (required): ip slb vserver <virtserver-name>
Associating the virtual server with a server farm (required): serverfarm <serverfarm-name>
Virtual server attributes (required), specifying the virtual server IP address, type of connection, port number, and optional service coupling: virtual <ip-address> {tcp | udp} <port-number> [service <service-name>]
Enabling the virtual server for service (required): inservice

 

GNS3 lab topology

The lab is running on GNS3 with mininet VM and the host generating client traffic.

Figure3: GNS3 topology

 

Building mininet VM server farm

mininet VM preparation:

  • Bridge and attach guest mininet VM interface to the SLB device.
  • Bring up the VM interface, without configuring any IP address.

Routing:

Because I am generating user traffic from the host machine, I need to configure the host tap interface and static routes pointing to the GNS3 subnet and the VIP:

sudo ip addr add 192.168.10.121/24 dev tap2
sudo ip route add 192.168.20.0/24 via 192.168.10.201
sudo ip route add 66.66.66.66/32 via 192.168.10.201

mininet python API script:

The script builds the mininet machines, sets their default gateway to the GNS3 IOS SLB device IP, and starts a UDP server on port 5555 on each host using the netcat utility:

ip route add default via 10.0.0.254
nc -lu 5555 &

Here is the python mininet API script:

https://github.com/AJNOURI/Software-Defined-Networking/blob/master/Mininet-Scripts/mininet-dc.py

#!/usr/bin/python

import re
from mininet.net import Mininet
from mininet.node import Controller
from mininet.cli import CLI
from mininet.link import Intf
from mininet.log import setLogLevel, info, error
from mininet.util import quietRun

def checkIntf( intf ):
    "Make sure intf exists and is not configured."
    if ( ' %s:' % intf ) not in quietRun( 'ip link show' ):
        error( 'Error:', intf, 'does not exist!\n' )
        exit( 1 )
    ips = re.findall( r'\d+\.\d+\.\d+\.\d+', quietRun( 'ifconfig ' + intf ) )
    if ips:
        error( 'Error:', intf, 'has an IP address and is probably in use!\n' )
        exit( 1 )

def myNetwork():

    net = Mininet( topo=None, build=False )

    info( '*** Adding controller\n' )
    net.addController(name='c0')

    info( '*** Add switches\n' )
    s1 = net.addSwitch('s1')

    max_hosts = 50
    newIntf = 'eth1'

    host_list = {}

    info( '*** Add hosts\n' )
    for i in xrange(1, max_hosts+1):
        host_list[i] = net.addHost('h' + str(i))
        info( '*** Add links between ', host_list[i], ' and s1 \r' )
        net.addLink(host_list[i], s1)

    info( '*** Checking the interface ', newIntf, '\n' )
    checkIntf( newIntf )

    switch = net.switches[ 0 ]
    info( '*** Adding', newIntf, 'to switch', switch.name, '\n' )
    brintf = Intf( newIntf, node=switch )

    info( '*** Starting network\n' )
    net.start()

    for i in xrange(1, max_hosts+1):
        info( '*** setting default gateway & udp server on ', host_list[i], '\r' )
        host_list[i].cmd('ip r a default via 10.0.0.254')
        host_list[i].cmd('nc -lu 5555 &')

    CLI(net)
    net.stop()

if __name__ == '__main__':
    setLogLevel( 'info' )
    myNetwork()

 

 

UDP traffic generation using scapy

I used scapy to emulate client connections from random IP addresses

Sticky connections:

Sticky connections are connections from the same client IP address or subnet that, for a given period of time, should be assigned to the same real server as before.

The sticky objects created to track client assignments are kept in the database for a period of time defined by the sticky timer.

If both conditions are met:

  • a connection from the same client already exists;
  • the amount of time between the end of the previous connection from the client and the start of the new connection is within the sticky timer duration;

then the SLB assigns the client connection to the same real server.

Router(config-slb-vserver)# sticky duration [group group-id]

A FIFO queue is used to emulate sticky connections; the process is triggered randomly.

If the queue is not full, the randomly generated source IP address is pushed into the queue; otherwise, an IP is pulled from the queue and used a second time as the source of the generated packet.

Figure4: Random generation of sticky connections

 

https://github.com/AJNOURI/traffic-generator/blob/master/gen_udp_sticky.py

#! /usr/bin/env python

import random
from scapy.all import *
import time
import Queue

# (2014) AJ NOURI ajn.bin@gmail.com

dsthost = '66.66.66.66'

q = Queue.Queue(maxsize=5)

for i in xrange(1000):
    rint = random.randint(1, 10)
    if rint % 5 == 0:
        print '==> Random queue processing'
        if not q.full():
            ipin = ".".join(map(str, (random.randint(0, 255) for _ in range(4))))
            q.put(ipin)
            srchost = ipin
            print ipin, ' into the queue'
        else:
            ipout = q.get()
            srchost = ipout
            print ' *** This is sticky src IP', ipout
    else:
        srchost = ".".join(map(str, (random.randint(0, 255) for _ in range(4))))
        print 'one time src IP', srchost
    #srchost = scapy.RandIP()
    p = IP(src=srchost, dst=dsthost) / UDP(dport=5555)
    print 'src= ', srchost, 'dst= ', dsthost
    send(p, iface='tap2')
    print 'sending packet\n'
    time.sleep(1)

 

Randomly, the generated source IP is used for the packet and, at the same time, pushed into the queue if it is not yet full:

one time src IP 48.235.35.122
src=  48.235.35.122 dst=  66.66.66.66
.
Sent 1 packets.
...

==> Random queue processing
40.147.224.72  into the queue
src=  40.147.224.72 dst=  66.66.66.66
.
Sent 1 packets.

Otherwise, an IP (previously generated) is pulled out of the queue and reused as the source IP.

==> Random queue processing
 *** This is sticky src IP 88.27.24.177
src=  88.27.24.177 dst=  66.66.66.66
.
Sent 1 packets.

Building Mininet server farm

ajn@ubuntu:~$ sudo python mininet-dc.py
[sudo] password for ajn:
Sorry, try again.
[sudo] password for ajn:
*** Adding controller
*** Add switches
*** Add hosts
*** Checking the interface eth1 1
*** Adding eth1 to switch s1
*** Starting network
*** Configuring hosts
h1 h2 h3 h4 h5 h6 h7 h8 h9 h10 h11 h12 h13 h14 h15 h16 h17 h18 h19 h20 h21
 h22 h23 h24 h25 h26 h27 h28 h29 h30 h31 h32 h33 h34 h35 h36 h37 h38 h39
 h40 h41 h42 h43 h44 h45 h46 h47 h48 h49 h50
*** Starting controller
*** Starting 1 switches
s1
*** Starting CLI:lt gateway & udp server on h50
mininet>

 

Weighted Round Robin (with equal weights):

 

IOS router configuration
ip slb serverfarm MININETFARM
 nat server
 real 10.0.0.1
 inservice
 real 10.0.0.2
 inservice
 real 10.0.0.3
 inservice
…
 real 10.0.0.50
 inservice
!
ip slb vserver VSRVNAME
 virtual 66.66.66.66 udp 5555
 serverfarm MININETFARM
 sticky 5
 idle 300
 inservice

 

Starting traffic generator
ajn:~/coding/python/scapy$ sudo python udpqueue.py
one time src IP 142.124.66.30
src= 142.124.66.30 dst= 66.66.66.66
.
Sent 1 packets.

sending packet
one time src IP 11.125.212.0
src= 11.125.212.0 dst= 66.66.66.66
.
Sent 1 packets.

sending packet
one time src IP 148.97.164.124
src= 148.97.164.124 dst= 66.66.66.66
.
Sent 1 packets.

sending packet
one time src IP 101.234.155.254
src= 101.234.155.254 dst= 66.66.66.66
.
Sent 1 packets.

sending packet
==&gt; Random queue processing
78.19.5.190 into the queue
src= 78.19.5.190 dst= 66.66.66.66
.
Sent 1 packets.

...

The router has already started associating incoming UDP connections with real servers according to the LB algorithm.

Router IOS SLB
SLB#sh ip slb stick 

client netmask group real conns
-----------------------------------------------------------------------
43.149.57.102 255.255.255.255 4097 10.0.0.3 1
78.159.83.228 255.255.255.255 4097 10.0.0.3 1
160.130.143.14 255.255.255.255 4097 10.0.0.3 1
188.26.251.226 255.255.255.255 4097 10.0.0.3 1
166.43.203.95 255.255.255.255 4097 10.0.0.3 1
201.49.188.108 255.255.255.255 4097 10.0.0.3 1
230.46.94.201 255.255.255.255 4097 10.0.0.4 1
122.139.198.227 255.255.255.255 4097 10.0.0.3 1
219.210.19.107 255.255.255.255 4097 10.0.0.4 1
155.53.69.23 255.255.255.255 4097 10.0.0.3 1
196.166.41.76 255.255.255.255 4097 10.0.0.4 1
…
Result: (accelerated video)

Weighted Round Robin (with unequal weights):

Let’s suppose we need to assign a weight of 16, twice the default weight, to every 5th server: 1, 5, 10, 15… (the generator sketch below emits the full configuration).
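Typing 50 “real” entries by hand is tedious, so here is a small Python 3 sketch (a hypothetical helper, not part of the original lab) that emits the serverfarm block shown below:

# Generate the MININETFARM configuration: default weight (8) everywhere,
# weight 16 on server 1 and on every 5th server after it.
lines = ["ip slb serverfarm MININETFARM", " nat server"]
for i in range(1, 51):
    lines.append(" real 10.0.0.%d" % i)
    if i == 1 or i % 5 == 0:
        lines.append("  weight 16")
    lines.append("  inservice")
print("\n".join(lines))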

 

IOS router configuration
ip slb serverfarm MININETFARM
 nat server
 real 10.0.0.1
 weight 16
 inservice
 real 10.0.0.2
 inservice
 real 10.0.0.3
 inservice
 real 10.0.0.4
 inservice
 real 10.0.0.5
 weight 16
…
Result: (accelerated video)

Least connection:

 

IOS router configuration

ip slb serverfarm MININETFARM
 nat server
 predictor leastconns
 real 10.0.0.1
 weight 16
 inservice
 real 10.0.0.2
 inservice
 real 10.0.0.3
…
Result: (accelerated video)

 

Stopping Mininet Server farm
mininet> exit
*** Stopping 1 switches
s1 ..................................................
*** Stopping 50 hosts
h1 h2 h3 h4 h5 h6 h7 h8 h9 h10 h11 h12 h13 h14 h15 h16 h17 h18 h19 h20
 h21 h22 h23 h24 h25 h26 h27 h28 h29 h30 h31 h32 h33 h34 h35 h36 h37
 h38 h39 h40 h41 h42 h43 h44 h45 h46 h47 h48 h49 h50
*** Stopping 1 controllers
c0
*** Done
ajn@ubuntu:~$

References
http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/slb/configuration/15-s/slb-15-s-book.html

IPv4 and IPv6 dual-stack PPPoE


The lab covers a scenario of adding basic IPv6 access to an existing PPPoE (PPP for IPv4) deployment.

PPPoE is established between the CPE (Customer Premises Equipment), acting as the PPPoE client, and the PPPoE server, also known as the BNG (Broadband Network Gateway).

Figure1: IPv4 and IPv6 dual-stack PPPoE

The PPPoE server plays the role of the authenticator (local AAA) as well as the authentication and address pool server (figure1). Obviously, a more centralized prefix assignment and authentication architecture (using AAA RADIUS) is more scalable for broadband access scenarios (figure2).

For more information about RADIUS attributes for IPv6 access networks, start from rfc6911 (http://www.rfc-editor.org/rfc/rfc6911.txt).

Figure2: PPPoE with RADIUS

PPPoE for IPv6 is based on the same PPP model as PPPoE for IPv4. The main difference in deployment is related to how the routed protocol is assigned to the CPEs (PPPoE clients):

  • IPv4: in routed mode, each CPE gets its WAN interface IP centrally from the PPPoE server, and it is up to the customer to deploy an rfc1918 prefix on the local LAN through DHCP.
  • IPv6: the PPPoE client gets its WAN interface IPv6 address through SLAAC, and a delegated prefix to be used for the LAN segment through DHCPv6.

Animation: PPP encapsulation model

Let’s begin with a quick reminder of a basic configuration of PPPoE for IPv4.

PPPoE for IPv4

pppoe-client WAN address assignment

The main steps of a basic PPPoE configuration are:

  • Create a BBAG (BroadBand Access Group).
  • Tie the BBAG to a virtual template interface.
  • Assign a loopback interface IP (always UP/UP) to the virtual template.
  • Create and assign the address pool (from which clients will get their IPs) to the virtual template interface.
  • Create local user credentials.
  • Set the authentication type (chap).
  • Bind the virtual template interface to a physical interface (the incoming interface for dial-in).

The virtual template interface is used as a model to generate an instance (virtual access interface) for each dial-in session.

Figure3: PPPoE server model

pppoe-server

ip local pool PPPOE_POOL 172.31.156.1 172.31.156.100
!
bba-group pppoe BBAG
virtual-template 1
!
interface Virtual-Template1
ip unnumbered Loopback0
ip mtu 1492
peer default ip address pool PPPOE_POOL
ppp authentication chap callin

!

interface FastEthernet0/0

pppoe enable group BBAG

pppoe-client

interface FastEthernet0/1
pppoe enable group global
pppoe-client dial-pool-number 1
!
interface FastEthernet1/0
ip address 192.168.0.201 255.255.255.0
!
interface Dialer1
mtu 1492
ip address negotiated

encapsulation ppp

dialer pool 1

dialer-group 1

ppp authentication chap callin

ppp chap hostname pppoe-client

ppp chap password 0 cisco

Figure4: PPPoE client model


As mentioned at the beginning, DHCPv4 is deployed on the CPE device to assign rfc1918 addresses to LAN clients; these are then translated, generally using PAT (Port Address Translation), to the IPv4 address assigned to the WAN interface.

You also have the possibility to configure static NAT or static port mapping to give public access to internal services.

Address translation

interface Dialer1
ip address negotiated
ip nat outside
!
interface FastEthernet0/0
ip address 192.168.4.1 255.255.255.224
ip nat inside
!
ip nat inside source list NAT_ACL interface Dialer1 overload
!

ip access-list standard NAT_ACL

permit any

pppoe-client LAN IPv4 address assignment

pppoe-client

ip dhcp excluded-address 192.168.4.1
!
ip dhcp pool LAN_POOL
network 192.168.4.0 255.255.255.224
domain-name cciethebeginning.wordpress.com
default-router 192.168.4.1
!
interface FastEthernet0/0
ip address 192.168.4.1 255.255.255.224

PPPoE for IPv6

pppoe-client WAN address assignment

All IPv6 prefixes are planned from the 2001:db8::/32 documentation range.

pppoe-server

ipv6 local pool PPPOE_POOL6 2001:DB8:5AB:10::/60 64
!
bba-group pppoe BBAG
virtual-template 1
!
interface Virtual-Template1
ipv6 address FE80::22 link-local
ipv6 enable
ipv6 nd ra lifetime 21600
ipv6 nd ra interval 4 3


peer default ipv6 pool PPPOE_POOL6

ppp authentication chap callin

!

interface FastEthernet0/0

pppoe enable group BBAG

IPCP (IPv4) negotiates the full IPv4 address to be assigned to the client, whereas IPV6CP negotiates only the interface identifier; the prefix information is provided through SLAAC.
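A quick way to picture that split is a Python 3 sketch with this lab’s values (the /64 comes from the BR’s PPPOE_POOL6 advertisement; the interface identifier ::10 is the one set on the client’s Dialer1):

import ipaddress

# SLAAC on the dialer: 64-bit prefix learned from the RA, combined with
# the interface identifier negotiated by IPV6CP.
ra_prefix = ipaddress.IPv6Network("2001:DB8:5AB:10::/64")
interface_id = 0x10  # low 64 bits, negotiated via IPV6CP

addr = ipaddress.IPv6Address(int(ra_prefix.network_address) | interface_id)
print(addr)  # 2001:db8:5ab:10::10, matching "sh ipv6 interface dialer 1"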

pppoe-client

interface FastEthernet0/1
pppoe enable group global
pppoe-client dial-pool-number 1
!
interface Dialer1
mtu 1492
dialer pool 1
dialer-group 1
ipv6 address FE80::10 link-local

ipv6 address autoconfig default

ipv6 enable

ppp authentication chap callin

ppp chap hostname pppoe-client

ppp chap password 0 cisco

The CPE (PPPoE client) is assigned an IPv6 address through SLAAC, along with a static default route: ipv6 address autoconfig default

pppoe-client#sh ipv6 interface dialer 1
Dialer1 is up, line protocol is up
IPv6 is enabled, link-local address is FE80::10
No Virtual link-local address(es):

Stateless address autoconfig enabled
Global unicast address(es):

2001:DB8:5AB:10::10, subnet is 2001:DB8:5AB:10::/64 [EUI/CAL/PRE]
valid lifetime 2587443 preferred lifetime 600243

Note from the traffic capture below (figure5) that both IPv6 and IPv4 use the same PPP session (same session ID = 0x0006), because the Link Control Protocol is independent of the network layer.

Figure5: Wireshark capture of the common PPP layer-2 model


pppoe-client LAN IPv6 assignment

The advantage of using DHCPv6 PD (Prefix Delegation) is that the PPPoE server automatically adds a static route toward the delegated prefix, very handy!

pppoe-server

ipv6 dhcp pool CPE_LAN_DP
prefix-delegation 2001:DB8:5AB:2000::/56 00030001CA00075C0008 lifetime infinite infinite
!
interface Virtual-Template1

ipv6 dhcp server CPE_LAN_DP

Now the PPPoE client can use the delegated prefix to assign an IPv6 address (::1) to its own LAN interface (fa0/0) and use the rest of the prefix for SLAAC advertisement.

No NAT is needed for the delegated prefix to be used publicly, so there are no translation states on the PPPoE server; the prefix is directly reachable from outside.
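For intuition, the delegated /56 contains 256 /64s for the LAN side; “ipv6 address PREFIX_FROM_ISP ::1/64” picks one of them and host ::1. A Python 3 sketch of the arithmetic with this lab’s delegated prefix (the CE ends up using subnet 0 here):

import ipaddress

# Carve the first /64 out of the delegated /56 and take host ::1 in it.
delegated = ipaddress.IPv6Network("2001:DB8:5AB:2000::/56")
lan = next(delegated.subnets(new_prefix=64))  # first of 256 /64s
gw = ipaddress.IPv6Address(int(lan.network_address) | 1)
print(lan, gw)  # 2001:db8:5ab:2000::/64 2001:db8:5ab:2000::1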

For more information about the client ID used for DHCPv6 assignment, please refer to the prior post about DHCPv6. https://cciethebeginning.wordpress.com/2012/01/18/ios-dhcpv6-deployment-schemes/

pppoe-client

pppoe-client#sh ipv6 dhcp
This device’s DHCPv6 unique identifier(DUID): 00030001CA00075C0008
pppoe-client#
interface Dialer1

ipv6 dhcp client pd PREFIX_FROM_ISP
!
interface FastEthernet0/0
ipv6 address FE80::2000:1 link-local

ipv6 address PREFIX_FROM_ISP ::1/64
ipv6 enable
pppoe-client#sh ipv6 dhcp interface
Dialer1 is in client mode
Prefix State is OPEN
Renew will be sent in 3d11h
Address State is IDLE
List of known servers:
Reachable via address: FE80::22
DUID: 00030001CA011F780008
Preference: 0
Configuration parameters:

IA PD: IA ID 0x00090001, T1 302400, T2 483840

Prefix: 2001:DB8:5AB:2000::/56

preferred lifetime INFINITY, valid lifetime INFINITY

Information refresh time: 0

Prefix name: PREFIX_FROM_ISP

Prefix Rapid-Commit: disabled

Address Rapid-Commit: disabled

client-LAN

Now the customer LAN is assigned globally routable IPv6 addresses from the CPE (PPPoE client).

client-LAN#sh ipv6 interface fa0/0
FastEthernet0/0 is up, line protocol is up
IPv6 is enabled, link-local address is FE80::2000:F
No Virtual link-local address(es):

Stateless address autoconfig enabled
Global unicast address(es):

2001:DB8:5AB:2000::2000:F, subnet is 2001:DB8:5AB:2000::/64 [EUI/CAL/PRE]
client-LAN#sh ipv6 route

S ::/0 [2/0]

via FE80::2000:1, FastEthernet0/0

C 2001:DB8:5AB:2000::/64 [0/0]

via FastEthernet0/0, directly connected

L 2001:DB8:5AB:2000::2000:F/128 [0/0]

via FastEthernet0/0, receive

L FF00::/8 [0/0]

via Null0, receive

client-LAN#

End-to-end dual-stack connectivity check

client-LAN#ping 2001:DB8:5AB:3::100
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 2001:DB8:5AB:3::100, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 20/45/88 ms
client-LAN#trace 2001:DB8:5AB:3::100
Type escape sequence to abort.
Tracing the route to 2001:DB8:5AB:3::100

1 2001:DB8:5AB:2000::1 28 msec 20 msec 12 msec

2 2001:DB8:5AB:2::FF 44 msec 20 msec 32 msec

3 2001:DB8:5AB:3::100 48 msec 20 msec 24 msec

client-LAN#

client-LAN#ping 192.168.3.100
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.3.100, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 52/63/96 ms
client-LAN#trace 192.168.3.100
Type escape sequence to abort.
Tracing the route to 192.168.3.100

1 192.168.4.1 32 msec 44 msec 20 msec

2 192.168.2.1 56 msec 68 msec 80 msec

3 192.168.3.100 72 msec 56 msec 116 msec

client-LAN#

I assigned PREFIX_FROM_ISP as a locally significant name for the delegated prefix; there is no need to match the name on the DHCPv6 server side.

Finally, the offline lab with all the commands needed for more detailed inspection:

 

References

http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/bbdsl/configuration/15-mt/bba-15-mt-book/bba-ppoe-client.html

http://www.cisco.com/en/US/docs/ios-xml/ios/bbdsl/configuration/15-mt/ip6-adsl_external_docbase_0900e4b182dbdf4f_4container_external_docbase_0900e4b182dc25f3.html

http://www.broadband-forum.org/technical/download/TR-187.pdf

https://tools.ietf.org/html/rfc5072

http://www.bortzmeyer.org/6911.html (french)

http://packetsize.net/cisco-pppoe-ipv4-ipv6-mppe.htm

     

Embedded Packet Capture, let’s go fishing for some packets!


EPC (Embedded Packet Capture) is another useful troubleshooting tool, used to occasionally capture traffic to be analyzed locally or exported to a remote device. “Occasionally”, in contrast with RITE (Router IP Traffic Export) or SPAN on switches, which are meant to send a permanent flow of copied traffic to a traffic analyzer or IDS (Intrusion Detection System).

The configuration workflow is straightforward, but I would like to make a conceptual graphical analogy to illustrate it.

Let’s imagine traffic flowing through a router interface like the following:


1- Capture point:


Specify the protocol to capture, the interface, and the direction; this is where you indicate which IP protocol you need to capture.

monitor capture point ip cef CAPTURE_POINT fastEthernet 0/0 both
monitor capture point ipv6 cef CAPTURE_POINT fastEthernet 0/0 both

2- Packet buffer:


Memory area where the frames are stored once captured. 

monitor capture buffer CAPTURE_BUFFER

 


3- ACL:


If needed, you can filter a specific type of traffic; this is available only for IPv4.

(config)#access-list 100 permit icmp host 192.168.0.1 host 172.16.1.1
#monitor capture buffer CAPTURE_BUFFER filter access-list 100

 

Except for the optional IPv4 ACL, which is configured in global configuration mode, everything else is configured in privileged EXEC mode.

4- Associate capture point with capture buffer

monitor capture point associate CAPTURE_POINT CAPTURE_BUFFER

You can associate multiple capture points (on the same or multiple interfaces) to the same buffer.


5- Start and stop capture process

monitor capture point start CAPTURE_POINT
monitor capture point stop CAPTURE_POINT

If you are familiar with wireshark, it will be easier to remember the steps needed to capture traffic.

Wireshark analogy


Deployment 1

Two capture points are created to capture IPv4 and IPv6 traffic into separate capture buffers. 

monitor capture point ipv6 cef CAPTURE_POINT6 fa0/0 both
monitor capture buffer CAPTURE_BUFFER6
monitor capture point associate CAPTURE_POINT6 CAPTURE_BUFFER6

!

monitor capture point ip cef CAPTURE_POINT4 fa0/0 both

monitor capture buffer CAPTURE_BUFFER4

monitor capture point associate CAPTURE_POINT4 CAPTURE_BUFFER4

Following is the result on the router

Deployment 2

Two capture points are created to capture IPv4 and IPv6 traffic into a single capture buffer.

monitor capture point ipv6 cef CAPTURE_POINT6 fa0/0 both
monitor capture point ip cef CAPTURE_POINT4 fa0/0 both
!
monitor capture buffer CAPTURE_BUFFER46

!

monitor capture point associate CAPTURE_POINT6 CAPTURE_BUFFER46

monitor capture point associate CAPTURE_POINT4 CAPTURE_BUFFER46

 

Following is the result on the router

Exporting

!Example of export to ftp
R1#monitor capture buffer CAPTURE_BUFFER46 export ftp://login:password@192.168.0.32/Volume_1/ecp.pcap
Writing Volume_1/ecp.pcap

R1#

!Example of export to tftp

R1# monitor capture buffer CAPTURE_BUFFER46 export tftp://192.168.0.145/ecp.pcap

!

R1#

And the file opened in wireshark:


That’s all folks!

Let’s 6rd!


The 6rd mechanism belongs to the same family as automatic 6to4, in which IPv6 traffic is encapsulated inside IPv4.

The key difference is that with 6rd, service providers use their own 6rd prefix and control the transition of the IPv4-only access-aggregation part of their networks to native IPv6. At the same time, SPs transparently provide an IPv6 availability service to their customers.

6rd is generally referred to as a stateless transition mechanism.

Stateless: an algorithm is used to automatically map between addresses; the scope of the mechanism is limited to a local domain in which the mapping device (6rd BR) and the devices that need mapping (6rd CEs) share common configuration elements.

Stateful: on a gateway device, we need to specify an address or a range of addresses (not used elsewhere) that will represent another range of addresses. For example, IPv4 NAT on Cisco (NAT44): ip nat source … The router relies on the configured statement to decide which address (all bits) to translate to which address (all bits), independently of the devices whose addresses need to be translated (inside local/outside global). For redundancy, additional configuration is needed to synchronize connection state information between devices, for example SNAT (Stateful NAT failover).

Customer CE routers generate their own IPv6 prefix from the 6rd prefix delegated by the BRs (Border Relays).

Both CEs and BRs encapsulate IPv6 traffic into IPv4, automatically reconstructing the IPv4 header addresses from the IPv6 addresses.


  • Lab topology


For end-to-end testing I am using Ubuntu Server for the client host behind the CE and for the Internet host.

Here is a brief and, I hope, concise explanation of the main 6rd operations:


6rd configuration

6rd address planning depends on each SP; the IPv4 bits must be unique to each CE. To show the flexibility of the configuration, I fixed the first 16 bits (10.1) as the prefix and the last octet (.1) as the suffix, and attributed the third octet to the CEs; the sketch after the parameter list below shows the resulting mapping.

6rd domain configured parameters:

  • Tunnel source interface: fa0/0
  • 6rd prefix: 2001:DEAD::/32
  • IPv4 common prefix length: 16
  • IPv4 embedded bits: 8
  • IPv4 suffix length: 8
  • BR tunnel source interface IP: 10.1.4.1
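The mapping can be sketched in a few lines of Python 3 (an illustration of the algorithm, not vendor code): strip the configured IPv4 prefix (16 bits) and suffix (8 bits) from an endpoint’s WAN address and append the remaining 8 bits to the 6rd prefix, which yields the /40 general prefix reported later by “sh tunnel 6rd”:

import ipaddress

SIXRD_PREFIX = ipaddress.IPv6Network("2001:DEAD::/32")
V4_PREFIX_LEN, V4_SUFFIX_LEN = 16, 8

def sixrd_delegated(v4addr):
    # Keep only the IPv4 bits not covered by the common prefix and the
    # suffix, and append them to the 6rd prefix.
    v4 = int(ipaddress.IPv4Address(v4addr))
    keep = 32 - V4_PREFIX_LEN - V4_SUFFIX_LEN         # 8 embedded bits
    bits = (v4 >> V4_SUFFIX_LEN) & ((1 << keep) - 1)  # here: the 3rd octet
    plen = SIXRD_PREFIX.prefixlen + keep              # /32 + 8 = /40
    base = int(SIXRD_PREFIX.network_address) | (bits << (128 - plen))
    return ipaddress.IPv6Network((base, plen))

print(sixrd_delegated("10.1.1.1"))  # 2001:dead:100::/40 (CE1)
print(sixrd_delegated("10.1.4.1"))  # 2001:dead:400::/40 (BR1)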

BR1

ipv6 general-prefix 6RD-PREFIX 6rd Tunnel0
!
interface Tunnel0
ipv6 address 6RD-PREFIX 2001:DEAD::/128 anycast
ipv6 enable

tunnel source FastEthernet0/0
tunnel mode ipv6ip 6rd
tunnel 6rd ipv4 prefix-len 16 suffix-len 8


tunnel 6rd prefix 2001:DEAD::/32

interface FastEthernet0/0

ip address 10.1.4.1 255.255.0.0

We need a couple of static routes to make 6rd work in lab conditions; generally, the BR announces the clients' assigned prefixes to the Internet.

  • Default ipv4 static route to outside
  • Static route to SP 6rd prefix pointing to the tunnel
  • Default ipv6 static route to outside
ip route 0.0.0.0 0.0.0.0 192.168.20.100
ipv6 route 2001:DEAD::/32 Tunnel0
ipv6 route ::/0 2001:DB9:5AB::100

CE1

The same 6rd parameters are configured on CE:

  • IPv4 affixes
  • 6rd domain global prefix
  • BR IPv4 address (remote tunnel end point)
interface Tunnel0
ipv6 enable
tunnel source 10.1.1.1
tunnel mode ipv6ip 6rd

tunnel 6rd ipv4 prefix-len 16 suffix-len 8

tunnel 6rd prefix 2001:DEAD::/32

tunnel 6rd br 10.1.4.1
!

interface FastEthernet0/0

ip address 10.1.1.1 255.255.0.0

ipv6 enable

!

interface FastEthernet0/1

ip address 192.168.10.100 255.255.255.0


ipv6 address 6RD-PREFIX ::/64 eui-64 ! * <<<

* Note: the CE WAN interface fa0/0 is only IPv6-enabled so that it gets a link-local address.

The fa0/0 IPv4 address is generally assigned by IPv4 DHCP. If the ISP assigns private addresses, CGN NAT44 is needed at the BR to translate them into global IPv4 addresses.

The 6rd prefix is delegated not to the CE WAN interface fa0/0 but to the CE inside LAN interface fa0/1.

This way the customer LAN benefits directly from the globally routable IPv6 prefix without breaking IPv6 address continuity, and the same prefix can be assigned to the client IPv6 network using SLAAC (stateless address autoconfiguration).

A recursive (output interface + next-hop) IPv6 default route points to the BR tunnel interface.

ipv6 route ::/0 Tunnel0 2001:DEAD:400::1

Debugging 6rd tunnel

CE1

CE1#debug tunnel
Tunnel Interface debugging is on
CE1#
Tunnel0: IPv6/IP adjacency fixup, 10.1.1.1->10.1.4.1, tos set to 0x0
Tunnel0: IPv6/IP (PS) to decaps 10.1.4.1->10.1.1.1 (tbl=0, “default”, len=124, ttl=254)
Tunnel0: decapsulated IPv6/IP packet (len 124)

BR1

BR1#debug tunnel
Tunnel Interface debugging is on
BR1#
Tunnel0: IPv6/IP to classify 10.1.1.1->10.1.4.1 (tbl=0,”default” len=124 ttl=254 tos=0x0) ok, oce_rc=0x0
Tunnel0: IPv6/IP adjacency fixup, 10.1.4.1->10.1.1.1, tos set to 0x0
BR1#

As shown by the debug, the end-to-end IPv6 traffic is encapsulated into IPv4 packets between CE and BR.

$iperf -u -t -i1 -V -c 2001:db9:5ab::10 -b 10K
WARNING: delay too large, reducing from 1.2 to 1.0 seconds.
————————————————————
Client connecting to 2001:db9:5ab::100, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 112 KByte (default)
————————————————————
[ 3] local 2001:dead:100:0:a00:27ff:fe0f:20e9 port 39710 connected with 2001:db9:5ab::100 port 5001

[ ID] Interval Transfer Bandwidth

[ 3] 0.0- 1.0 sec 1.44 KBytes 11.8 Kbits/sec

[ 3] 1.0- 2.0 sec 1.44 KBytes 11.8 Kbits/sec

[ 3] 2.0- 3.0 sec 4.00 GBytes 34.4 Gbits/sec

[ 3] 3.0- 4.0 sec 1.44 KBytes 11.8 Kbits/sec

[ 3] 4.0- 5.0 sec 1.44 KBytes 11.8 Kbits/sec

[ 3] 5.0- 6.0 sec 4.00 GBytes 34.4 Gbits/sec

[ 3] 6.0- 7.0 sec 1.44 KBytes 11.8 Kbits/sec

[ 3] 7.0- 8.0 sec 4.00 GBytes 34.4 Gbits/sec

[ 3] 8.0- 9.0 sec 1.44 KBytes 11.8 Kbits/sec

[ 3] 9.0-10.0 sec 4.00 GBytes 34.4 Gbits/sec

[ 3] 0.0-11.0 sec 16.0 GBytes 12.5 Gbits/sec

[ 3] Sent 11 datagrams

read failed: Connection refused

[ 3] WARNING: did not receive ack of last datagram after 4 tries.

Following is a Wireshark traffic capture of the previous iperf testing.

Verification commands

BR1:

BR1#sh tunnel 6rd tunnel 0
Interface Tunnel0:
Tunnel Source: 10.1.4.1
6RD: Operational, V6 Prefix: 2001:DEAD::/32
V4 Prefix, Length: 16, Value: 10.1.0.0
V4 Suffix, Length: 8, Value: 0.0.0.1
General Prefix: 2001:DEAD:400::/40
BR1#
BR1#sh tunnel 6rd destination 2001:dead:100:: tunnel0
Interface: Tunnel0
6RD Prefix: 2001:DEAD:100::
Destination: 10.1.1.1
BR1#
BR1#sh tunnel 6rd prefix 10.1.1.1 tunnel 0
Interface: Tunnel0
Destination: 10.1.1.1
6RD Prefix: 2001:DEAD:100::
BR1#

CE1:

CE1#sh tunnel 6rd tunnel 0
Interface Tunnel0:
Tunnel Source: 10.1.1.1
6RD: Operational, V6 Prefix: 2001:DEAD::/32
V4 Prefix, Length: 16, Value: 10.1.0.0
V4 Suffix, Length: 8, Value: 0.0.0.1
Border Relay address: 10.1.4.1
General Prefix: 2001:DEAD:100::/40

CE1#

CE1#sh tunnel 6rd destination 2001:dead:100:: tunnel0
Interface: Tunnel0
6RD Prefix: 2001:DEAD:100::
Destination: 10.1.1.1
CE1#
CE1#sh tunnel 6rd prefix 10.1.4.1 tunnel 0
Interface: Tunnel0
Destination: 10.1.4.1
6RD Prefix: 2001:DEAD:400::
CE1#

We can use the “mtr” command to check the performance of the end-to-end (Linux-to-Linux) communication.

router@router1:~$ mtr 2001:db9:5ab::100
HOST: router1 Loss% Snt Last Avg Best Wrst StDev
1.|– 2001:dead:100:0:c801:3dff 0.0% 30 27.7 25.2 9.7 34.4 5.9
2.|– 2001:db9:5ab::1 0.0% 30 181.3 126.2 99.1 181.3 19.3
3.|– 2001:db9:5ab::100 0.0% 30 67.3 82.8 67.3 121.6 14.1
router@router1:~$

Customer internal network


6rd and MTU

The default tunnel MTU on IOS is 1480 bytes, so the maximum IPv4 packet size encapsulating IPv6 is 1500 bytes.

userver1 end-to-end MTU

router@router1:~$ tracepath6 2001:db9:5ab::100
1?: [LOCALHOST] 0.051ms pmtu 1500
1: 2001:dead:100:0:c801:3dff:fe5c:6 27.130ms
1: 2001:dead:100:0:c801:3dff:fe5c:6 57.536ms
2: 2001:dead:100:0:c801:3dff:fe5c:6 30.005ms pmtu 1480
2: 2001:db9:5ab::1 135.158ms
3: 2001:db9:5ab::100 79.603ms reached

Resume: pmtu 1480 hops 3 back 253

router@router1:~$

Here is an animation explaining 6rd and fragmentation:

MTU recommendations

  • With a redundant BR, there is no guarantee that traffic will be handled by the same BR, so fragmented packets can be lost between BRs: BR anycast IPv4 + IPv4 fragmentation is not recommended.
  • Configure the same IPv4 MTU everywhere within the IPv4 segment and set DF=1 to disable fragmentation.
    • Make sure the IPv4 MTU is coordinated with the IPv6 MTU (IPv6 MTU <= IPv4 MTU - 20 bytes); a tiny sketch of this arithmetic follows the list.
  • Enable PMTUD to choose the smallest MTU on the path of the CE-to-BR communication.
  • DO NOT filter the ICMP messages “Packet Too Big” and “Destination Unreachable” at routers and end-hosts; they provide information about transport issues, and worse than a traffic black hole is a silent traffic black hole.
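The arithmetic behind the coordination rule is just the 20-byte IPv4 header overhead; here is a trivial Python 3 check (added for illustration):

# An encapsulated IPv6 packet grows by the 20-byte IPv4 header, so the
# IPv6 MTU must sit at least 20 bytes below the IPv4 MTU.
def sixrd_mtu_ok(ipv4_mtu, ipv6_mtu):
    return ipv6_mtu + 20 <= ipv4_mtu

print(sixrd_mtu_ok(1500, 1480))  # True: the IOS defaults in this lab
print(sixrd_mtu_ok(1500, 1500))  # False: the IPv6 MTU must drop to 1480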

Offline Lab

Finally, the offline lab with comprehensive command output during the lab:

WCCPv2 and Squid-cache v3.1, a nice couple.


The WCCP protocol can be much more interesting than the two commands needed for the CCIE exam. In this lab we will deploy a basic end-to-end solution using IOS 15.2S and the well-known open-source Squid v3.1 as the content engine.

WCCP version2 is deployed in the lab.

1-Topology


WCCP enables the router to transparently intercept client traffic destined for the Internet and redirect it to a local content engine; client browsers do not point to the content engine as a proxy.

The router and the content engine communicate through unidirectional point-to-point tunnels (either layer 2 or GRE).

2-WCCPv2 Interception

wccpv2top2

The tunnel interfaces are created automatically in order to process outgoing GRE-encapsulated traffic for WCCP.

Short definitions of some related concepts:

Forward proxy: filters access to the Internet and reduces bandwidth consumed by static Internet resources like regular updates, big file downloads…
Reverse proxy: allows external users (ex: on the Internet) to access internal servers. Generally supports security features as well as caching and load balancing.
WCCP bypass packets: when the content engine cannot manage the redirected packets appropriately, it returns the packets unchanged to the originating router. These packets are called bypass packets.
Closed service (default = open): WCCP discards packets for a service that has no WCCP client (external device) registered to receive the redirected traffic.

Router configuration

The router configuration is straightforward:

ip cef
ip wccp web-cache password 0 cisco
!
interface FastEthernet0/0
ip wccp web-cache redirect in

We are not using “ip wccp web-cache redirect out”, which is used on interfaces facing outside users trying to connect to inside servers (reverse proxy).

Fa0/0 is the interface facing internal clients trying to connect to the Internet.

Of course, you can add other functionalities like more services or filtering packets to be redirected.

Router verification commands

sh ip int fa0/0
sh ip int brief
sh tunnel in Tunnel0
sh tunnel in Tunnel1
sh ip wccp summary
sh ip wccp global counters
sh ip wccp
sh ip wccp web-cache counters
sh tunnel groups wccp
sh adjacency tunnel 0 detail
sh ip wccp web-cache detail

Here is the outcome


Squid config

The configuration is slightly different depending on which Squid and IOS version/platform you are using, so make sure to refer to the appropriate configuration guides.

Enable WCCPv2 on Squid to work with your router:

wccp2_router 192.168.1.121
wccp2_forwarding_method gre
wccp2_return_method gre

wccp2_service standard 0 password=cisco

http_port 3128 intercept

wccp2_router 192.168.1.121: designates the router intercepting the traffic
wccp2_forwarding_method gre: router-to-Squid encapsulation
wccp2_return_method gre: Squid-to-router encapsulation
wccp2_service standard 0 password=cisco: the standard service 0 defines HTTP traffic interception, with password protection between Squid and the router
http_port 3128 intercept: configures Squid 3.1 for transparent interception

To illustrate the concept, Squid is configured with a permissive strategy (the last rule permits everything). As with Cisco ACLs, the first matched rule is applied; with a restrictive strategy, make sure to put the “allow” permission rules before the final “deny all”.

The initial Squid configuration file looks very intimidating, so create a version free of comments and empty lines using:

grep -ve ^$ -ve ^# /etc/squid3/squid.conf

Restart Squid after each modification of /etc/squid3/squid.conf
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src 192.168.2.0/24
acl alldst dst 0.0.0.0/32
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow localnet
http_access allow alldst
http_access allow all
#http_access deny all

http_port 3128 intercept

visible_hostname squid31.cciethebeginning.wordpress.com

wccp2_router 192.168.1.121

wccp2_forwarding_method gre

wccp2_return_method gre

wccp2_service standard 0 password=cisco

hierarchy_stoplist cgi-bin ?

coredump_dir /var/spool/squid3

refresh_pattern ^ftp: 1440 20% 10080

refresh_pattern ^gopher: 1440 0% 1440

refresh_pattern -i (/cgi-bin/|\?) 0 0% 0

refresh_pattern . 0 20% 4320

Linux verification


Observing IOS-Squid communication through Wireshark

The following Wireshark snapshots illustrate the two communication tunnels established between the router and Squid, as well as client-to-Internet traffic redirected from the router to Squid.

3-GRE tunnels


4-Redirected traffic


This should give you a starting point from which you can dig deeper into Squid and IOS cooperation.

Reference links

http://www.cisco.com/en/US/docs/ios-xml/ios/ipapp/configuration/15-mt/iap-wccp-v2-ipv6.html#GUID-608CB58E-EDD4-4073-A903-784CFB9AADCA

http://www.squid-cache.org/

http://wiki.squid-cache.org/Features/Wccp2

http://www.squid-cache.org/Versions/v3/3.1/cfgman/

Order of operations : NAT + Routing + ACL


This is the 1st post in the series “router order of operations”. The purpose is to provide a comprehensive but clear overview of how operations are performed in the router, and of the implications for which IP addresses to consider, particularly when filtering with ACLs.

Part1: NAT + Routing

“Routing” & “NAT” represent keystone to understand more complex situations:

Figure1: order of NAT+Routing


Rules:

– Traffic entering the inside NAT interface is routed 1st, then NATted.

– Traffic entering the outside NAT interface is NATted 1st, then routed.

IMPORTANT ===> For outside NAT: make sure to have a route for the “outside local” address pointing to the outside NAT interface, or add the keyword “add-route” at the end of the “ip nat outside source static” command. Otherwise, because of the “alias” feature inherent to NAT, the outside interface will respond on behalf of the outside local address (if the prefix belongs to the outside interface segment), or the traffic will not be routed (if the prefix doesn’t belong to an attached subnet). (1)

Part2: NAT + Routing+ ACL

Figure2: order of NAT+Routing+ACL

Rules:

– Traffic entering the inside NAT interface is always routed 1st, then NATted.

– Traffic entering the outside NAT interface is always NATted 1st, then routed.

– Inbound ACLs are processed before routing & NAT; they alleviate processing overhead by filtering unnecessary traffic early.

– Outbound ACLs are processed after routing & NAT.

Next follows the practice lab in which the previously stated rules are demonstrated:

Figure3: Lab topology

Note:

vhost1 and vhost2 routers are simulated inside one single router using VRF-Lite (Figure4); for more information about this technique, refer to the post “VRF-Lite as an alternative to VPC”.

Figure4: end-host deployment


Let’s suppose that the policy is to block ICMP traffic between the inside host 10.0.0.17 and the outside host 192.168.20.146. We will see that the IP address involved in the ACL changes according to the type of translation, the direction of the traffic, and the NAT interface on which the ACL is applied.

Each time, only a single ACL is applied to a single interface, and a single ICMP packet is generated from inside to outside.

Here is the battery of tests to be done; observe the debug results and refer to the associated rules and figures.

Tests:

Inside source NAT:
  • Inside NAT interface, inbound ACL: filter src = inside local
  • Inside NAT interface, outbound ACL: filter dst = inside local
  • Outside NAT interface, inbound ACL: filter dst = inside global
  • Outside NAT interface, outbound ACL: filter src = inside global

Outside source NAT:
  • Inside NAT interface, inbound ACL: filter dst = outside local
  • Inside NAT interface, outbound ACL: filter src = outside local
  • Outside NAT interface, inbound ACL: filter src = outside global
  • Outside NAT interface, outbound ACL: filter dst = outside global

A) – inside source NAT

NAT operation:

(inside local = 10.0.0.17) is seen from outside as (inside global = 192.168.20.131)

NAT(config)#ip nat inside source static 10.0.0.17 192.168.20.131

NAT#sh ip nat translations

Pro Inside global Inside local Outside local Outside global

— 192.168.20.131 10.0.0.17 — —

NAT#

For each case, ICMP traffic is generated as follows:

Vhost#ping vrf vhost1 192.168.20.146 repeat 1

A1-ACL applied on outside nat interface

A1-a) inbound direction, filter prefix dst=inside global

ip access-list ext outsideblock-in

10 deny ip any host 192.168.20.131

20 permit ip any any

interface FastEthernet0/1

ip access-group outsideblock-in in

NAT(config-if)#

*Mar 1 23:26:57.562: IP: tableid=0, s=10.0.0.17 (FastEthernet0/0), d=192.168.20.146 (FastEthernet0/1), routed via FIB

*Mar 1 23:26:57.566: NAT: s=10.0.0.17->192.168.20.131, d=192.168.20.146 [139]

*Mar 1 23:26:57.570: IP: s=192.168.20.131 (FastEthernet0/0), d=192.168.20.146 (FastEthernet0/1), g=192.168.20.130, len 100, forward

*Mar 1 23:26:57.706: IP: s=192.168.20.146 (FastEthernet0/1), d=192.168.20.131, len 100, access denied

Note the order of operations: routing -> NAT for the ICMP echo, and the returning traffic is blocked before entering the router.

*** The last operation at the outbound interface is forwarding the traffic to the next hop.

A1-b) outbound direction, filter prefix src=inside global

ip access-list ext outsideblock-out

10 deny ip host 192.168.20.131 any

20 permit ip any any

interface FastEthernet0/1

ip access-group outsideblock-out out

NAT(config-if)#

*Mar 1 23:34:36.162: IP: tableid=0, s=10.0.0.17 (FastEthernet0/0), d=192.168.20.146 (FastEthernet0/1), routed via FIB

*Mar 1 23:34:36.166: NAT: s=10.0.0.17->192.168.20.131, d=192.168.20.146 [140]

*Mar 1 23:34:36.170: IP: s=192.168.20.131 (FastEthernet0/0), d=192.168.20.146 (FastEthernet0/1), len 100, access denied

NAT(config-if)#

Note the order of operations: routing -> NAT, and then the ACL blocks the echo outbound at the outside NAT interface.

A2-acl applied on inside nat interface

A2-a) inbound direction filter prefix src=inside local

ip access-list ext insideblock-in

10 deny ip host 10.0.0.17 any

20 permit ip any any

interface FastEthernet0/0

ip access-group insideblock-in in

Vhost#p vrf vhost1 192.168.20.146 repeat 1

Type escape sequence to abort.

Sending 1, 100-byte ICMP Echos to 192.168.20.146, timeout is 2 seconds:

U

Success rate is 0 percent (0/1)

Vhost#

NAT#

*Mar 1 22:53:08.410: IP: s=10.0.0.17 (FastEthernet0/0), d=192.168.20.146, len 100, access denied

NAT#

The debug confirms that the inbound ACL at the inside NAT interface is processed 1st, before any other operation, and filters the inside local address as the source of the traffic.

A2-b) outbound direction filter prefix dst=inside local

ip access-list ext insideblock-out

10 deny ip any host 10.0.0.17

20 permit ip any any

interface FastEthernet0/0

ip access-group insideblock-out out

Vhost#

Vhost#p vrf vhost1 192.168.20.146 repeat 1

Type escape sequence to abort.

Sending 1, 100-byte ICMP Echos to 192.168.20.146, timeout is 2 seconds:

.

Success rate is 0 percent (0/1)

Vhost#

NAT(config-if)#

*Mar 1 23:14:36.762: IP: tableid=0, s=10.0.0.17 (FastEthernet0/0), d=192.168.20.146 (FastEthernet0/1), routed via FIB

*Mar 1 23:14:36.766: NAT: s=10.0.0.17->192.168.20.131, d=192.168.20.146 [137]

*Mar 1 23:14:36.770: IP: s=192.168.20.131 (FastEthernet0/0), d=192.168.20.146 (FastEthernet0/1), g=192.168.20.130, len 100, forward

*Mar 1 23:14:36.918: NAT*: s=192.168.20.146, d=192.168.20.131->10.0.0.17 [137]

*Mar 1 23:14:36.922: IP: tableid=0, s=192.168.20.146 (FastEthernet0/1), d=10.0.0.17 (FastEthernet0/0), routed via FIB

*Mar 1 23:14:36.926: IP: s=192.168.20.146 (FastEthernet0/1), d=10.0.0.17 (FastEthernet0/0), len 100, access denied

Note the order of operations: routing => NAT for the ICMP echo, but NAT => routing for the ICMP reply, which is then caught by the outbound ACL at the inside NAT interface.

B) – outside source NAT

NAT operation:

(inside local = 10.0.0.17) is seen from outside as (inside global = 192.168.20.131)

(outside global = 192.168.20.146) is seen from inside as (outside local = 10.0.0.35)

As stated in (1), make sure to have a route for the outside local address pointing to the outside interface, or add the keyword “add-route” at the end of the “ip nat outside source static” command; otherwise, because of the “alias” feature inherent to NAT, the outside interface will respond on behalf of 10.0.0.35 (10.0.0.35 belongs to the outside interface segment).

ip nat outside source static 192.168.20.146 10.0.0.35 add-route

or

ip nat outside source static 192.168.20.146 10.0.0.35

ip route 10.0.0.35 255.255.255.255 fa0/1

NAT(config)#do sh ip nat tra

Pro Inside global Inside local Outside local Outside global

— — — 10.0.0.35 192.168.20.146

— 192.168.20.131 10.0.0.17 — —

NAT(config)#

For each case, ICMP traffic is generated from vhost1 (10.0.0.17) toward vhost2 (192.168.20.146) as follows:

Vhost#ping vrf vhost1 10.0.0.35 repeat 1

Here are normal operations without filtering:

NAT(config)#

*Mar 2 02:07:20.597: IP: tableid=0, s=10.0.0.17 (FastEthernet0/0), d=10.0.0.35 (FastEthernet0/1), routed via RIB

*Mar 2 02:07:20.605: NAT: s=10.0.0.17->192.168.20.131, d=10.0.0.35 [204]

*Mar 2 02:07:20.605: NAT: s=192.168.20.131, d=10.0.0.35->192.168.20.146 [204]

*Mar 2 02:07:20.609: IP: s=192.168.20.131 (FastEthernet0/0), d=192.168.20.146 (FastEthernet0/1), g=192.168.20.146, len 100, forward

*Mar 2 02:07:20.721: NAT*: s=192.168.20.146->10.0.0.35, d=192.168.20.131 [204]

*Mar 2 02:07:20.725: NAT*: s=10.0.0.35, d=192.168.20.131->10.0.0.17 [204]

*Mar 2 02:07:20.733: IP: tableid=0, s=10.0.0.35 (FastEthernet0/1), d=10.0.0.17 (FastEthernet0/0), routed via FIB

*Mar 2 02:07:20.737: IP: s=10.0.0.35 (FastEthernet0/1), d=10.0.0.17 (FastEthernet0/0), g=10.0.0.34, len 100, forward

NAT(config)#

Note the order of operations: routing=>NAT then NAT=>Routing for the returning traffic

B1-acl applied on outside nat interface

B1-a) inbound filter prefix src=outside global

ip access-list ext outsideblock-in

10 deny ip host 192.168.20.146 any

20 permit ip any any

interface FastEthernet0/1

ip access-group outsideblock-in in

NAT(config-if)#

*Mar 2 02:16:45.621: IP: tableid=0, s=10.0.0.17 (FastEthernet0/0), d=10.0.0.35 (FastEthernet0/1), routed via RIB

*Mar 2 02:16:45.625: NAT: s=10.0.0.17->192.168.20.131, d=10.0.0.35 [207]

*Mar 2 02:16:45.629: NAT: s=192.168.20.131, d=10.0.0.35->192.168.20.146 [207]

*Mar 2 02:16:45.633: IP: s=192.168.20.131 (FastEthernet0/0), d=192.168.20.146 (FastEthernet0/1), g=192.168.20.146, len 100, forward

*Mar 2 02:16:45.745: IP: s=192.168.20.146 (FastEthernet0/1), d=192.168.20.131, len 100, access denied

B1-b) outbound filter prefix dst=outside global

ip access-list ext outsideblock-out

10 deny ip any host 192.168.20.146

20 permit ip any any

interface FastEthernet0/1

ip access-group outsideblock-out out

NAT(config-if)#

*Mar 2 02:19:31.969: IP: tableid=0, s=10.0.0.17 (FastEthernet0/0), d=10.0.0.35 (FastEthernet0/1), routed via RIB

*Mar 2 02:19:31.973: NAT: s=10.0.0.17->192.168.20.131, d=10.0.0.35 [208]

*Mar 2 02:19:31.977: NAT: s=192.168.20.131, d=10.0.0.35->192.168.20.146 [208]

*Mar 2 02:19:31.981: IP: s=192.168.20.131 (FastEthernet0/0), d=192.168.20.146 (FastEthernet0/1), len 100, access denied

B2- acl applied on inside nat interface

B2-a) inbound filter prefix dst=outside local

ip access-list ext insideblock-in

10 deny ip any host 10.0.0.35

20 permit ip any any

interface FastEthernet0/0

ip access-group insideblock-in in

NAT(config-if)#

*Mar 2 02:10:45.613: IP: s=10.0.0.17 (FastEthernet0/0), d=10.0.0.35, len 100, access denied

B2-b) outbound filter prefix src=outside local

ip access-list ext insideblock-out

10 deny ip host 10.0.0.35 any

20 permit ip any any

interface FastEthernet0/0

ip access-group insideblock-out out

NAT(config-if)#

*Mar 2 02:12:11.393: IP: tableid=0, s=10.0.0.17 (FastEthernet0/0), d=10.0.0.35 (FastEthernet0/1), routed via RIB

*Mar 2 02:12:11.397: NAT: s=10.0.0.17->192.168.20.131, d=10.0.0.35 [206]

*Mar 2 02:12:11.401: NAT: s=192.168.20.131, d=10.0.0.35->192.168.20.146 [206]

*Mar 2 02:12:11.405: IP: s=192.168.20.131 (FastEthernet0/0), d=192.168.20.146 (FastEthernet0/1), g=192.168.20.146, len 100, forward

*Mar 2 02:12:11.517: NAT*: s=192.168.20.146->10.0.0.35, d=192.168.20.131 [206]

*Mar 2 02:12:11.517: NAT*: s=10.0.0.35, d=192.168.20.131->10.0.0.17 [206]

*Mar 2 02:12:11.525: IP: tableid=0, s=10.0.0.35 (FastEthernet0/1), d=10.0.0.17 (FastEthernet0/0), routed via FIB

*Mar 2 02:12:11.529: IP: s=10.0.0.35 (FastEthernet0/1), d=10.0.0.17 (FastEthernet0/0), len 100, access denied

Conclusion

– Write down your expectations in terms of address translation, routing, and filtering.

– Choose the IP addresses to filter, the ACL direction, and the interface to which the ACL is applied with the order of operations in mind.
