Deploying F5 BIG-IP LTM VE within GNS3 (part-1)

One of the advantages of deploying VMware (or VirtualBox) machines inside GNS3 is the rich networking environment it provides. No need to hassle with interface types: vmnet or private? Shared or ad-hoc?

In GNS3 it is as simple and intuitive as picking a node interface and connecting it to any other node interface.

In this lab, we are testing a basic F5 BIG-IP LTM VE deployment within GNS3. The only virtual machine used in this lab is the F5 BIG-IP; all other devices are Docker containers:

  • Nginx Docker containers for the internal web servers.
  • An ab (Apache Benchmark) Docker container as the client used for performance testing.
  • gns3/webterm containers used as Firefox browsers for client testing and F5 web management.



  1. Docker image import
  2. F5 Big-IP VE installation and activation
  3. Building the topology
  4. Setting F5 Big-IP interfaces
  5. Connectivity check between devices
  6. Load balancing configuration
  7. Generating client http queries
  8. Monitoring Load balancing

Devices used:


  • Debian host GNU/Linux 8.5 (jessie)
  • GNS3 version 1.5.2 on Linux (64-bit)

System requirements:

  • F5 Big IP VE requires 2GB of RAM (recommended >= 8GB)
  • VT-x / AMD-V support



1. Docker image import

Create a new Docker template in GNS3: Edit > Preferences > Docker > Docker containers, then “New”.

Choose the “New image” option and type the Docker image name in the <account>/<repo> format (e.g., ajnouri/nginx), then choose a template name (without the slash “/”).


By default, GNS3 derives a name in the same format as the Docker registry (<account>/<repo>), which causes an error in some earlier versions. In the latest GNS3 versions, the slash “/” is removed from the derived name.



Set the number of interfaces to eight and accept default parameters with “next” until “finish”.

– Repeat the same procedure for gns3/webterm


Choose a name for the image (without slash “/”)


Choose vnc as the console type to allow Firefox (GUI) browsing


And keep the remaining default parameters.

– Repeat the same procedure for the image ajnouri/nginx.

Create a new image with name ajnouri/nginx


Name it as you like


And keep the remaining default parameters.
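If you prefer, you can also pre-pull the images on the GNS3 host from the command line (assuming Docker is installed there), so the template creation does not have to download them; the image names below are the ones used in this lab:

docker pull ajnouri/nginx
docker pull gns3/webterm
docker pull ajnouri/ab
docker pull gns3/openvswitch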

2. F5 Big-IP VE installation and activation


– From the F5 site, download the BIG-IP VE trial .ova file.


You’ll have to request a registration key for the trial version, which you’ll receive by email.


Open the .ova file in VMware Workstation.

– To register the trial version, bridge the first interface to your network connected to the Internet.


– Start the VM and log in to the console with root/default


– Type “config” to access the text user interface.

– Choose “No” to configure a static IP address, mask and default gateway in the same subnet as the bridged network, or “Yes” if you have a DHCP server and want to get a dynamic IP.

– Check the interface IP.


– Ping an Internet host to verify connectivity and name resolution.

– Browse to the BIG-IP management IP and accept the certificate.


Use admin/admin for login credentials


Enter the key you received by email in the Base Registration Key field and press “Next”; wait a couple of seconds for activation and you are good to go.

Now you can shut down the F5 VM.


3. Building the topology


Importing F5 VE Virtual machine to GNS3

From GNS3 “Preferences”, import a new VMware virtual machine.


Choose the BIG-IP VM we have just installed and activated.


Make sure to set a minimum of 3 adapters and to allow GNS3 to use any of the VM interfaces.



Now we can build our topology using BIG-IP VM and the Docker images installed.

Below is an example topology in which we import the F5 VM and add some containers:


– 3 nginx containers

– 1 Openvswitch


– GUI browser webterm container

– 1 Openvswitch

– Cloud mapped to host interface tap0


– Apache Benchmark (ab) container

– GUI browser webterm container


Notice that the BIG-IP VM interface e0, the one previously bridged to the host network, is now connected to a browser container for management.

I attached the host interface “tap0” to the management switch because, for some reason, ARP doesn’t work on that segment without it.


Address configuration:

– Assign each of the nginx containers an IP address in a subnet of your choice.


Do the same for ajnouri/nginx-2 and ajnouri/nginx-3.
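For reference, the container addressing can be set from each container's console (or through the node's network configuration in GNS3); a minimal sketch with hypothetical addresses, to adapt to your own subnet:

# on ajnouri/nginx-1 (example addresses only)
ip addr add 10.0.1.11/24 dev eth0
ip route add default via 10.0.1.1   # BIG-IP internal self IP (hypothetical)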


On all three nginx containers, start nginx and php servers:

service php5-fpm start

service nginx start


– Assign the management browser container (gns3/webterm-2) an IP in the same subnet as the BIG-IP management IP.

– Assign IP addresses, a default gateway and a DNS server to the ab and webterm-1 client containers.



And make sure both client devices resolve the host ajnouri.local to the BIG-IP virtual server address:

echo "   ajnouri.local" >> /etc/hosts

– The Openvswitch containers don’t need to be configured; each acts like a single VLAN.

– Start the topology


4. Setting F5 Big-IP interfaces


To manage the load balancer from webterm-2, open a console to the container; this starts Firefox inside the container.


Browse to the VM management IP, add an exception for the certificate and log in with the F5 BIG-IP default credentials admin/admin.


Go through the initial configuration steps

– You will have to set the hostname (e.g., ajnouri.local) and change the root and admin account passwords.


You will be logged out for the password changes to take effect; log back in.

– For the purpose of this lab, no redundancy and no high availability are configured.

– Now you will have to configure the internal (real servers) and external (client side) VLANs with their associated interfaces and self IPs.

(Self IPs are the equivalent of VLAN interface IP in Cisco switching)

Internal VLAN (connected to web servers):


External VLAN (facing clients):
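If you prefer the BIG-IP CLI, the same VLANs and self IPs can be created with tmsh; a sketch with hypothetical interface numbers and addresses:

# interface numbers and addresses are examples only
tmsh create net vlan internal interfaces add { 1.1 { untagged } }
tmsh create net vlan external interfaces add { 1.2 { untagged } }
tmsh create net self self_internal address 10.0.1.1/24 vlan internal allow-service default
tmsh create net self self_external address 10.0.2.1/24 vlan external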


5. Connectivity check between devices


Now make sure you have successful connectivity from each container to the corresponding Big-IP interface.

Ex: from ab container


Ex: from nginx-1 server container



The interface connected to your host network will get its IP parameters (address, gateway and DNS) from your DHCP server.
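A quick check from any of the containers looks like this (the self IP below is the hypothetical one used in the sketches above):

# e.g. from the ab or webterm-1 client container
ping -c 3 10.0.2.1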


6. Load balancing configuration


Back to the browser on webterm-2.

For BIG-IP to load balance http requests from client to the servers, we need to configure:

  • Virtual server: the single entity (virtual server) visible to clients.
  • Pool: associated with the virtual server; contains the list of real web servers to balance between.
  • Algorithm: used to load balance between the members of the pool.

– Create a pool of web servers “Pool0” with “Round Robin” as the algorithm and http as the monitor for the members.


– Associate the virtual server “VServer0” with the pool “Pool0”.
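The same pool and virtual server can also be created from tmsh; a sketch, with hypothetical member and virtual server addresses:

# member and virtual server addresses are examples only
tmsh create ltm pool Pool0 monitor http load-balancing-mode round-robin members add { 10.0.1.11:80 10.0.1.12:80 10.0.1.13:80 }
tmsh create ltm virtual VServer0 destination 10.0.2.100:80 ip-protocol tcp profiles add { http } pool Pool0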


Check the network map to see that everything is configured correctly and that monitoring shows everything OK (green).


From the client container webterm-1, you can start a Firefox browser (console to the container) and test the server name “ajnouri.local”.


If everything is OK, you’ll see the PHP page showing the real server IP used, the client IP and the DNS name used by the client.

Every time you refresh the page, you’ll see a different server IP used.


7. Performance testing


With the Apache Benchmark container ajnouri/ab, we can generate client requests to the load balancer virtual server by its hostname (ajnouri.local).

Let’s open an aux console to the ajnouri/ab container and generate 50,000 requests with 200 concurrent connections to the URL ajnouri.local/test.php:

ab -n 50000 -c 200 ajnouri.local/test.php



8. Monitoring load balancing


Monitoring the load balancer performance shows a peak of connections corresponding to the Apache Benchmark generated requests.


In the upcoming part-2, the 3 web server containers are replaced with a single container in which we can spawn as many servers as we want (Docker-in-Docker). We will also test a custom Python client container that generates HTTP traffic from spoofed IP addresses, as opposed to Apache Benchmark, which generates traffic from a single source IP.

Deploying Cisco traffic generator in GNS3

Goal: Deploy TRex, a realistic Cisco traffic generator, to test devices in GNS3.

TRex traffic generator is a tool designed to benchmark platforms using realistic traffic.
One of the tools through which TRex can be learned and tested is a virtual machine instance, fully simulating TRex without the need for any additional hardware.

The TRex virtual machine is based on Oracle’s VirtualBox.
It is designed to enable TRex newbies to explore this tool without any special resources.

Download the virtual appliance ova file:

Open the image in VMware (I am using VMware Workstation).

From GNS3 import the VMWare device:

Edit the VM template and make sure to select “Allow GNS3 to use any configured VMware adapter”


Insert a device to test, the DUT (Device Under Test), in our case a Cisco IOU router, and build the following topology, in which TRex plays the role of both the client and the server for the generated traffic.



Because TRex doesn’t implement ARP, we have to manually indicate the MAC addresses of the directly connected router interfaces.
You can either set TRex to match the DUT MACs, or set the DUT to match the default MACs configured on TRex. We opt for the first solution:

Note the router interface MAC addresses:


Login to TRex through the console:

  • Username: trex
  • Password: trex

and edit the TRex configuration file


and change the DUT MACs


Make sure the list of interface IDs matches the ones reported by:

cd v1.62

sudo ./ --status


We also need to configure the router under test with static ARP entries for the MAC addresses used by TRex for the traffic.

On the IOU router:

IOU1(config-if)#int e0/0
IOU1(config-if)#ip address
IOU1(config-if)#du fu
IOU1(config-if)#no sh
IOU1(config-if)#int e0/1
IOU1(config-if)#ip address
IOU1(config-if)#du fu
IOU1(config-if)#no sh

IOU1(config)#arp  0800.2723.21dc ARPA
IOU1(config)#arp  0800.2723.21dd ARPA
IOU1(config)#do sh arp
Protocol  Address          Age (min)  Hardware Addr   Type   Interface
Internet            –   0800.2723.21dc  ARPA
Internet            –   0800.2723.21dd  ARPA

The exact IP addresses configured on the two router interfaces don’t really matter to TRex, because we rely on routes to forward the generated traffic out the appropriate interfaces to reach the TRex ports.

On the router set routes to the emulated client and servers:

ip route
ip route

For this lab we will generate IMIX traffic (64-byte UDP packet profile) from emulated clients and servers, using virtual IP ranges of 16.0.0.[1-255] and 48.0.[0.1-255.255].

Back to TRex:



So let’s configure our router to route traffic destined to the previous ranges out the appropriate interfaces.

IOU router:

IOU1(config)#ip route
IOU1(config)#ip route

Start the traffic generation on TRex:

sudo ./t-rex-64 -f cap2/imix_64.yaml  -d 60 -m 40000  -c 1


You can observe the generated traffic passing through the router with Wireshark.


For more info, please refer to the TRex documentation.


GNS3 + Docker: Internet modem container

Goal: deploy an Internet modem for GNS3 topologies using a Docker container. The container uses iptables to perform NAT (masquerading) and dnsmasq as a DHCP server for the LAN interfaces.

Docker images used: ajnouri/internet, gns3/openvswitch, gns3/endhost and (optionally) gns3/webterm.

GNS3 host preparation: this is performed on the GNS3 Linux host.

From the GNS3 host console, create a tap interface (tap0) and put it, along with the physical interface (eth0), in a bridge (e.g., ovsbr0):

ip tuntap add dev tap0 mode tap user <username>

sudo ovs-vsctl add-br ovsbr0

sudo ovs-vsctl add-port ovsbr0 tap0

sudo ovs-vsctl add-port ovsbr0 eth0

You can use either a Linux bridge (brctl command) or an OpenVswitch bridge (ovs-vsctl command).
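With a Linux bridge, the equivalent preparation would be something like:

sudo brctl addbr br0          # create the bridge
sudo brctl addif br0 eth0     # add the physical interface
sudo brctl addif br0 tap0     # add the tap interface
sudo ip link set br0 up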

sudo ovs-vsctl show


Bridge "ovsbr0"
    Port "tap0"
        Interface "tap0"
    Port "ovsbr0"
        Interface "ovsbr0"
            type: internal
    Port "eth0"
        Interface "eth0"
ovs_version: "2.3.0"

Remove the IP address from eth0 (or release its DHCP parameters), then reconfigure the IP address and default gateway (or request DHCP) on the OVS bridge ovsbr0.
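A minimal sketch of that step, assuming DHCP on the physical network and the interface names used above:

sudo dhclient -r eth0         # release any lease held on the physical interface
sudo ip addr flush dev eth0   # make sure eth0 has no IP left
sudo dhclient ovsbr0          # get address, gateway and DNS on the bridge instead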

Import containers

1- Create a new Docker template in GNS3: Edit > Preferences > Docker > Docker containers, then “New”.

Choose the “New image” option and enter the name ajnouri/internet.


Accept all default parameters.

2- Create another Docker template in GNS3: Edit > Preferences > Docker > Docker containers, then “New”.

Choose the “New image” option and enter the name gns3/openvswitch.


Set the number of interfaces to eight and accept default parameters with “next” until “finish”.

3- Same for the end host container. From GNS3, create a new Docker template: Edit > Preferences > Docker > Docker containers, then “New”.

Choose the “New image” option and enter the name gns3/endhost.


Next, choose a template name for the container; in this case I renamed it “dvpc”.

Accept default parameters with “next” until “finish”.

GNS3 Topology

Insert a cloud into the topology and map it to tap0.


Build the below topology


Configure the containers’ network interfaces:

Internet container ajnouri/Internet-1


End host container dvpc-1


The WAN interface of the Internet container should have been assigned an IP address and gateway from your physical network (connected to the Internet).

Start the script in the /data directory.

You will be asked to set the LAN and WAN interfaces as well as the IP range for DHCP clients connected to the LAN interface; the script will then start dnsmasq and set up iptables for NAT (masquerade).
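Under the hood, the script does roughly the following (the interface names and DHCP range are hypothetical; the real script prompts you for them):

# enable routing and NAT (masquerade) out of the WAN interface eth0
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# serve DHCP to clients on the LAN interface eth1
dnsmasq --interface=eth1 --dhcp-range=192.168.100.50,192.168.100.150,12h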

ajnouri/internet-1 console


ajnouri/dvpc-1 console


The other DHCP parameters assigned to the client are taken from the Internet device’s WAN interface DHCP parameters.

Connectivity check


Let’s have fun! Now that we have Internet connectivity, install a text-based browser package on the end host container.


Start elinks and browse Internet


For a more comfortable browsing experience, you can use the image gns3/webterm.

Create a new Docker template


Choose vnc as the console type to allow GUI browsing of Firefox


And keep the remaining default parameters.

Insert the image and connect it to the topology as follows:


Set the container interface as a DHCP client.


Start the stopped containers and open a console (VNC) to the webterm container.

(gns3/openvswitch doesn’t need any configuration)


You should get this







DockerVPC: Using containers in GNS3 as Linux Virtual hosts instead of VPCS

More up-to-date content about GNS3 and natively integrated Docker is available.


I would like to share with you DockerVPC, a bash script that helps run containers for use within GNS3 as rich virtual end-hosts instead of VPCS.

I am using it to avoid dealing directly with docker commands and container id’s each time I would like to rapidly deploy some disposable end-host containers inside GNS3.

For now it runs only on Linux platforms, and has been tested on Ubuntu, RedHat and OpenSUSE.

Using DockerVPC doesn’t require knowledge of Docker containers; still, I encourage you to take a look at this short introduction.

By the way, VIRL in its recent updates introduced LXC containers to simulate an Ubuntu server (multiprocess environment) as well as a single-process container for iperf.

It is possible to run Docker containers on Windows or Mac OS X using the lightweight boot2docker virtual machine or the newer Docker tool Kitematic. The issue is that there is no tool like pipework for Windows or Mac to set up additional interfaces. I use this as a temporary solution: Docker is on its way to being integrated into GNS3, but until then you can already take maximum profit of containers inside GNS3. (See Issues and limitations below.)

The linux image used by DockerVPC is pre-built with the following apps:

  • SSH server.
  • Apache + PHP
  • Ostinato / D-ITG / Iperf.
  • BIRD Internet routing daemon.
  • Linphone / sipp / pjsua. (VoIP host-to-host through GNS3 works perfectly)
  • IPv6 THC tools.
  • VLC (VideoLAN).
  • Qupzilla browser + java & html5 plugins / links.
  • vSFTPd server + ftp client.
  • And many other tools: inetutils-traceroute, iputils-tracepath, mtr..

Which makes it almost a full-fledged Linux host.


By default, containers are connected to the host through the docker0 bridge; this tool lets you connect running containers to GNS3 through additional bridge interfaces, so you can bind them to cloud elements in your GNS3 topology. In other words, containers run independently of GNS3. More on that in the simple lab.

Additionally, this script allows you to separately manage additional container images, like a Cacti server or a 16-port (host bridges) OpenVSwitch.

For now, all you have to do is install the required applications and clone the repository.

Installing requirements

You will need: git, docker, pipework and lxterminal.


1. git

sudo apt-get install git

2. Docker, easy to install:
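One common way to install it (check the Docker documentation for your distribution) is the convenience script:

curl -fsSL https://get.docker.com/ | sh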

docker -v
Docker version 1.8.1, build d12ea79

3. pipework, a simple yet powerful bash script for advanced Docker networking:

sudo bash -c "curl > /usr/local/bin/pipework"
sudo chmod a+x /usr/local/bin/pipework


lxterminal is not required anymore; the script detects the terminal in use and uses it to open interactive terminal access to the containers.

To use Docker as a non-root user:

sudo usermod -aG docker {user}

Clone the DockerVPC repository:

git clone

cd DockerVPC

Here are some examples (on my GNS3 community blog) of how to use DockerVPC container with GNS3.

Once the installation is done and the images pulled, creating virtual end-hosts is a matter of seconds.

DockerVPC labs

Issues and limitations:

  • Originally, Docker containers are not meant to run GUI applications; this is a workaround from the Docker community (mounting the Docker host X11 and sound devices), so we must expect some issues with it.
  • By default, Docker networking uses a single interface bridged to docker0, so using additional container interfaces adds complexity to the networking configuration.
  • DockerVPC relies on pipework, an external script for advanced networking. Though this is an advantage compared to the limited (for now) integrated networking functionality, it brings new challenges.
  • Bridge interfaces created with pipework do not persist after stopping the container or rebooting the Docker host, so make sure to reconfigure your container networking parameters after you restart a stopped container.

This brings us to the conclusion that using Docker containers this way is NOT MEANT FOR PRODUCTION!

The purpose of DockerVPC is to hopefully give GNS3 users more flexibility with end-host simulation.

Hope you will find it useful!


Further readings:

Routing between Docker containers using GNS3.

The idea is to route (IPv4 and IPv6) between Docker containers using GNS3 and to use them as end-hosts instead of virtual machines.

Containers use only the resources necessary for the application they run. They use an image of the host file system and can share the same environment (binaries and libraries).

On the other hand, virtual machines require an entire OS, with reserved RAM and disk space.

Virtual machines vs Docker containers


If you are not familiar with Docker, I urge you to take a look at the excellent short introduction below and some additional explanation from the Docker site:



As of now, Docker has limited networking functionality. This is where pipework comes to the rescue: it allows more advanced networking settings, like adding new interfaces, assigning IPs from different subnets, setting gateways and much more.

To be able to route between the containers using your own GNS3 topology (the sky is the limit!), pipework lets you create a new interface inside a running container, connect it to a host bridge interface, give it an IP/mask in any subnet you want, and set a default gateway pointing to a device in GNS3. Consequently, all egress traffic from the container is routed to your GNS3 topology.


GNS3 connection to a Docker container


How pipework connects and exposes the container network

Lab requirements:


sudo bash -c "curl\
 > /usr/local/bin/pipework"

For each container, we will build a Docker image, run a container with an interactive terminal, and set its networking parameters (IP and default gateway).

To demonstrate docker flexibility, we will use 4 docker containers with 4 different subnets:



This is how containers are built for this lab:




Here is the general workflow for each container.

1- Build the image from a Dockerfile.

An image is readonly.

sudo docker build -t <image-tag> .

Or (Docker 1.5+) specify the Dockerfile location: sudo docker build -t <image-tag> -f <Dockerfile-location> .

2- Run the built image:

Spawn and run a writable container with interactive console.

The parameters of this command may differ slightly for each GUI containers.

sudo docker run -t -i <image id from `sudo docker images`> /bin/bash

3- Set container networking:

Create a host bridge interface, link it to a new interface inside the container, and assign that interface an IP and a new default gateway.

sudo pipework <bridge> -i <intf> <container id from `sudo docker ps`> <ip>/<mask>@<gateway-ip>
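For example, to add eth1 to a running container named web1, attach it to host bridge br6 and point its default route at a GNS3 device (names and addresses here are hypothetical):

sudo pipework br6 -i eth1 web1 10.6.6.10/24@10.6.6.1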


To avoid manipulating image id’s and container id’s for each of the images and the containers, I use a bash script to build and run all containers automatically:


IMGLIST="$(sudo docker images | grep mybimage | awk '{ print $1; }')"
[[ $IMGLIST =~ "mybimage" ]] && sudo docker build -t mybimage -f phusion-dockerbase .
[[ $IMGLIST =~ "myapache" ]] && sudo docker build -t myapache -f apache-docker .
[[ $IMGLIST =~ "myfirefox" ]] && sudo docker build -t myfirefox -f firefox-docker .

BASE_I1="$(sudo docker images | grep mybimage | awk '{ print $3; }')"
lxterminal -e "sudo docker run -t -i --name baseimage1 $BASE_I1 /bin/bash"
sleep 2
BASE_C1="$(sudo docker ps | grep baseimage1 | awk '{ print $1; }')"
sudo pipework br4 -i eth1 $BASE_C1 

BASE_I2="$(sudo docker images | grep mybimage | awk '{ print $3; }')"
lxterminal -e "sudo docker run -t -i --name baseimage2 $BASE_I2 /bin/bash"
sleep 2
BASE_C2="$(sudo docker ps | grep baseimage2 | awk '{ print $1; }')"
sudo pipework br5 -i eth1 $BASE_C2 

APACHE_I1="$(sudo docker images | grep myapache | awk '{ print $3; }')"
lxterminal -t "Base apache" -e "sudo docker run -t -i --name apache1 $APACHE_I1 /bin/bash"
sleep 2
APACHE_C1="$(sudo docker ps | grep apache1 | awk '{ print $1; }')"
sudo pipework br6 -i eth1 $APACHE_C1 

lxterminal -t "Firefox" -e "sudo docker run -ti --name firefox1 --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix myfirefox"
sleep 2
FIREFOX_C1="$(sudo docker ps | grep firefox1 | awk '{ print $1; }')"
sudo pipework br7 -i eth1 $FIREFOX_C1


And we end up with the following containers:

Containers, images and dependencies.



All you have to do is bind a separate cloud to each bridge interface (br4, br5, br6 and br7) created by pipework, then connect them to the appropriate segments in your topology.


Lab topology

Note that the GNS3 topology is already configured for IPv6, so as soon as you start the routers, the Docker containers will be assigned IPv6 addresses from the routers through SLAAC (Stateless Address Autoconfiguration), which makes them reachable over IPv6.


Here is a video on how to launch the lab:


Cleaning up

To clean your host of all containers and images, use the following bash script, which relies on the Docker commands below (combined one-liners follow the list):

Stop running containers:

  • sudo docker stop <container id’s from `sudo docker ps`>

Remove the stopped container:

  • sudo docker rm <container id’s from `sudo docker ps -a`>

Remove the image:

  • sudo docker rmi <image id’s from `sudo docker images`>
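If you prefer doing it by hand, the same cleanup can be done with shell substitution (careful: this removes ALL containers and images on the host, not just the lab ones):

sudo docker stop $(sudo docker ps -q)      # stop every running container
sudo docker rm $(sudo docker ps -a -q)     # remove every container
sudo docker rmi $(sudo docker images -q)   # remove every image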
sudo ./
Stopping all running containers...
Removing all stopped containers...
Erasing all images...
Make sure you are generating image from a Dockerfile
or have pushed your images to DockerHub.
*** Do you want to continue? No

I answered “No” because I still need those images to spawn containers; you can answer “Yes” if you don’t need the images anymore or if you need to change them.




pipework for advanced Docker networking:

Running firefox inside Docker container:


3D model shipping container:

EIGRP SIA (Stuck-In-Active) through animations.

EIGRP SIA (Stuck-In-Active) process through animations:

“Active” = Actively looking for a route to a network (Successor)

Without SIA

Browse in separate page

With SIA

Browse in separate page

IOS server load balancing with mininet server farm

The idea is to play with the IOS load balancing mechanism using a large number of “real” servers (50 servers) and observe the differences in behavior between load balancing algorithms.

Due to resource scarcity in the lab environment, I use mininet to emulate “real” servers.

I will stick to the general definition for load balancing:

A load balancer is a device that acts as a reverse proxy and distributes network or application traffic across a number of servers. Load balancers are used to increase capacity (concurrent users) and reliability of applications.

The publicly announced IP address of the server is called the Virtual IP (VIP). Behind the scenes, the services are provided not by a single server but by a cluster of “real” servers, with their real IPs (RIPs) hidden from the outside world.

The Load Balancer, IOS SLB in our case, distributes user connections, sent to the VIP, to the real servers according to the load balancing algorithm.

Figure1: Generic load balancing


Figure2: High-Level-Design Network topology


Load balancing algorithms:

The tested load balancing algorithms are:

  • Weighted Round Robin (with equal weights for all real servers): new connections to the virtual server are directed to the real servers equally, in a circular fashion (default weight = 8 for all servers).
  • Weighted Round Robin (with unequal weights): new connections to the virtual server are directed to the real servers proportionally to their weights.
  • Weighted Least Connections: new connections to the virtual server are directed to the real server with the fewest active connections.


Session redirection modes:

  • Dispatched: the virtual IP is configured on ALL real servers as a loopback or secondary address. Real servers are layer-2 adjacent to the SLB. The SLB redirects traffic to the real servers at the MAC layer.
  • Directed: the VIP can be unknown to the real servers. No FTP/FW support. Supports server NAT for ESP/GRE virtual servers. Uses NAT to translate VIP => RIP.
  • Server NAT: the VIP is translated to the RIP and vice-versa. Real servers are not required to be directly connected.
  • Client NAT: used with multiple SLBs. Replaces the client IP with one of the SLB IPs to guarantee that returning traffic is handled by the same SLB.
  • Static NAT: use static NAT for traffic from real servers responding to clients. Real servers (e.g., on the same Ethernet segment) use their own IP.

The lab deploys Directed session redirection with server NAT.


IOS SLB configuration:

The configuration of load balancing in Cisco IOS is pretty straightforward:

  • Server farm (required): ip slb serverfarm <serverfarm-name>
  • Load-balancing algorithm (optional): predictor [roundrobin | leastconns]
  • Real server (required): real <ip-address>
  • Enabling the real server for service (required): inservice
  • Virtual server (required): ip slb vserver <virtserver-name>
  • Associating the virtual server with a server farm (required): serverfarm <serverfarm-name>
  • Virtual server attributes (required), specifying the virtual server IP address, type of connection, port number and optional service coupling: virtual <ip-address> {tcp | udp} <port-number> [service <service-name>]
  • Enabling the virtual server for service (required): inservice


GNS3 lab topology

The lab runs on GNS3 with a mininet VM and the host machine generating client traffic.

Figure3: GNS3 topology


Building mininet VM server farm

mininet VM preparation:

  • Bridge and attach guest mininet VM interface to the SLB device.
  • Bring up the VM interface, without configuring any IP address.


Because I am generating user traffic from the host machine, I need to configure static routing pointing to GNS3 subnets and the VIP:

sudo ip a a dev tap2
sudo ip a a via
sudo ip a a via

mininet python API script:

The script builds the mininet hosts, sets their default gateway to the GNS3 IOS SLB device IP, and starts a UDP server on port 5555 using the netcat utility:

ip route add default via
nc -lu 5555 &

Here is the python mininet API script:


import re

from mininet.net import Mininet
from mininet.node import Controller
from mininet.cli import CLI
from mininet.link import Intf
from mininet.log import setLogLevel, info, error
from mininet.util import quietRun

def checkIntf( intf ):
    "Make sure intf exists and is not configured."
    if ( ' %s:' % intf ) not in quietRun( 'ip link show' ):
        error( 'Error:', intf, 'does not exist!\n' )
        exit( 1 )
    ips = re.findall( r'\d+\.\d+\.\d+\.\d+', quietRun( 'ifconfig ' + intf ) )
    if ips:
        error( 'Error:', intf, 'has an IP address and is probably in use!\n' )
        exit( 1 )

def myNetwork():

    net = Mininet( topo=None, build=False )

    info( '*** Adding controller\n' )
    net.addController( 'c0' )

    info( '*** Add switches\n' )
    s1 = net.addSwitch( 's1' )

    max_hosts = 50
    newIntf = 'eth1'

    host_list = {}

    info( '*** Add hosts\n' )
    for i in xrange( 1, max_hosts + 1 ):
        host_list[i] = net.addHost( 'h' + str( i ) )
        info( '*** Add links between ', host_list[i], ' and s1 \r' )
        net.addLink( host_list[i], s1 )

    info( '*** Checking the interface ', newIntf, '\n' )
    checkIntf( newIntf )

    # attach the host interface (bridged to the SLB device) to the mininet switch
    switch = net.switches[ 0 ]
    info( '*** Adding', newIntf, 'to switch', switch.name, '\n' )
    brintf = Intf( newIntf, node=switch )

    info( '*** Starting network\n' )
    net.start()

    for i in xrange( 1, max_hosts + 1 ):
        info( '*** setting default gateway & udp server on ', host_list[i], '\r' )
        host_list[i].cmd( 'ip r a default via' )
        host_list[i].cmd( 'nc -lu 5555 &' )

    CLI( net )
    net.stop()

if __name__ == '__main__':
    setLogLevel( 'info' )
    myNetwork()

UDP traffic generation using scapy

I used scapy to emulate client connections from random IP addresses

Sticky connections:

Sticky connections are connections from the same client IP address or subnet that, for a given period of time, should be assigned to the same real server as before.

The sticky objects created to track client assignments are kept in the database for a period of time defined by the sticky timer.

If both conditions are met:

  • A connection from the same client already exists.
  • The amount of time between the end of the previous connection from the client and the start of the new connection is within the timer duration.

then the SLB assigns the new client connection to the same real server.

Router(config-slb-vserver)# sticky duration [group group-id]

A FIFO queue is used to emulate sticky connections. The process is triggered randomly.

If the queue is not full, the randomly generated source IP address is pushed to the queue; otherwise, an IP is pulled from the queue and used a second time as the source of the generated packet.

Figure4: Random generation of sticky connections

#! /usr/bin/env python

import random
from scapy.all import *
import time
import Queue

# (2014) AJ NOURI

dsthost = ''

q = Queue.Queue(maxsize=5)

for i in xrange(1000):
    rint = random.randint(1,10)
    if rint % 5 == 0:
        print '==> Random queue processing'
        if not q.full():
            # generate a source IP, use it and keep it in the queue for later reuse
            ipin = ".".join(map(str, (random.randint(0, 255) for _ in range(4))))
            q.put(ipin)
            srchost = ipin
            print ipin,' into the queue'
        else:
            # reuse a previously generated IP: emulates a sticky connection
            ipout = q.get()
            srchost = ipout
            print ' *** This is sticky src IP',ipout
    else:
        # one-time random source IP
        srchost = ".".join(map(str, (random.randint(0, 255) for _ in range(4))))
        print 'one time src IP', srchost
        #srchost = scapy.RandIP()

    p = IP(src=srchost,dst=dsthost) / UDP(dport=5555)
    print 'src= ',srchost, 'dst= ',dsthost
    send(p, iface='tap2')
    print 'sending packet\n'


Randomly, the generated source IP is used for the packet and at the same time pushed to the queue if the queue is not yet full:

one time src IP
src= dst=
Sent 1 packets. 

one time src IP
src= dst=
Sent 1 packets.

==> Random queue processing  into the queue
src= dst=
Sent 1 packets.

Otherwise, an IP (previously generated) is pulled from the queue and reused as the source IP.

==> Random queue processing
 *** This is sticky src IP
src= dst=
Sent 1 packets.

Building Mininet server farm

ajn@ubuntu:~$ sudo python
[sudo] password for ajn:
Sorry, try again.
[sudo] password for ajn:
*** Adding controller
*** Add switches
*** Add hosts
*** Checking the interface eth1 1
*** Adding eth1 to switch s1
*** Starting network
*** Configuring hosts
h1 h2 h3 h4 h5 h6 h7 h8 h9 h10 h11 h12 h13 h14 h15 h16 h17 h18 h19 h20 h21
 h22 h23 h24 h25 h26 h27 h28 h29 h30 h31 h32 h33 h34 h35 h36 h37 h38 h39
 h40 h41 h42 h43 h44 h45 h46 h47 h48 h49 h50
*** Starting controller
*** Starting 1 switches
*** Starting CLI:lt gateway & udp server on h50


Weighted Round Robin (with equal weights):


IOS router configuration
ip slb serverfarm MININETFARM
 nat server
ip slb vserver VSRVNAME
 virtual udp 5555
 serverfarm MININETFARM
 sticky 5
 idle 300


Starting traffic generator
ajn:~/coding/python/scapy$ sudo python
one time src IP
src= dst=
Sent 1 packets.

sending packet
one time src IP
src= dst=
Sent 1 packets.

sending packet
one time src IP
src= dst=
Sent 1 packets.

sending packet
one time src IP
src= dst=
Sent 1 packets.

sending packet
==> Random queue processing into the queue
src= dst=
Sent 1 packets.


The router has already started associating incoming UDP connections to real servers according to the LB algorithm.

Router IOS SLB
SLB#sh ip slb stick 

client netmask group real conns
----------------------------------------------------------------------- 4097 1 4097 1 4097 1 4097 1 4097 1 4097 1 4097 1 4097 1 4097 1 4097 1 4097 1
Result: (accelerated video)

Weighted Round Robin (with unequal weights):

Let’s suppose we need to assign a weight of 16, twice the default weight, to every 5th server: 1, 5, 10, 15…


IOS router configuration
ip slb serverfarm MININETFARM
 nat server
 weight 16
 weight 16
Result: (accelerated video)

Least connections:


IOS router configuration

ip slb serverfarm MININETFARM
 nat server
 predictor leastconns
 weight 16
Result: (accelerated video)


Stopping Mininet Server farm
mininet> exit
*** Stopping 1 switches
s1 ..................................................
*** Stopping 50 hosts
h1 h2 h3 h4 h5 h6 h7 h8 h9 h10 h11 h12 h13 h14 h15 h16 h17 h18 h19 h20
 h21 h22 h23 h24 h25 h26 h27 h28 h29 h30 h31 h32 h33 h34 h35 h36 h37
 h38 h39 h40 h41 h42 h43 h44 h45 h46 h47 h48 h49 h50
*** Stopping 1 controllers
*** Done

