Deploying F5 BIG-IP LTM VE within GNS3 (part-1)


One of the advantages of deploying VMware (or VirtualBox) machines inside GNS3 is the rich networking environment available. No need to hassle with interface types: vmnet or private? Shared or ad-hoc?

In GNS3 it is as simple and intuitive as choosing a node interface and connecting it to any other node's interface.

In this lab, we are testing a basic F5 BIG-IP LTM VE deployment within GNS3. The only virtual machine used in the lab is the F5 BIG-IP; all other devices are Docker containers:

  • Nginx Docker containers for the internal web servers.
  • An ab (Apache Benchmark) Docker container as the client used for performance testing.
  • gns3/webterm containers used as Firefox browsers for client testing and F5 web management.

 

Outline:

  1. Docker image import
  2. F5 Big-IP VE installation and activation
  3. Building the topology
  4. Setting F5 Big-IP interfaces
  5. Connectivity check between devices
  6. Load balancing configuration
  7. Generating client http queries
  8. Monitoring Load balancing

Devices used:

  • gns3/openvswitch (Docker image)
  • gns3/webterm (Docker image)
  • ajnouri/nginx (Docker image)
  • ajnouri/ab (Docker image)
  • F5 BIG-IP LTM VE (VMware VM)

Environment:

  • Debian host GNU/Linux 8.5 (jessie)
  • GNS3 version 1.5.2 on Linux (64-bit)

System requirements:

  • F5 BIG-IP VE requires 2 GB of RAM (>= 8 GB on the host is recommended)
  • VT-x / AMD-V support


 

1. Docker image import

Create a new Docker template in GNS3: Edit > Preferences > Docker > Docker containers, then “New”.

Choose the “New image” option and type the Docker image name in the format given in the “Devices used” section (<account>/<repo>), then choose a name (without the slash “/”).

Note:

By default, GNS3 derives a name in the same format as the Docker registry (<account>/<repo>), which can cause an error in some earlier versions. In the latest GNS3 versions, the slash “/” is removed from the derived name.


Installing gns3/openvswitch:

Set the number of interfaces to eight and accept the default parameters with “Next” until “Finish”.

– Repeat the same procedure for gns3/webterm


Choose a name for the image (without the slash “/”)


Choose vnc as the console type to allow Firefox (GUI) browsing


And keep the remaining default parameters.

– Repeat the same procedure for the image ajnouri/nginx.

Create a new image with name ajnouri/nginx


Name it as you like


And keep the remaining default parameters.

2. F5 Big-IP VE installation and activation

 

– Download the file BIG-11.3.0.0-scsi.ova from the F5 site: https://www.f5.com/trial/big-ip-ltm-virtual-edition.php


You’ll have to request a registration key for the trial version that you’ll receive by email.


Open the .ova file in VMware Workstation.

– To register the trial version, bridge the first interface to your network connected to the Internet.


– Start the VM and log in to the console with root/default


– Type “config” to access the text user interface.

– Choose “No” to configure a static IP address, a mask and a default gateway in the same subnet as the bridged network, or “Yes” if you have a DHCP server and want to get a dynamic IP.

– Check the interface IP.


– Ping an Internet host (ex: gns3.com) to verify connectivity and name resolution.

– Browse to the BIG-IP management IP and accept the certificate.


Use admin/admin for login credentials


Enter the key you received by email in the Base Registration Key field and press “Next”; wait a couple of seconds for activation and you are good to go.

Now you can shut down the F5 VM.

 

3. Building the topology

 

Importing the F5 VE virtual machine into GNS3

From GNS3 “Preferences”, import a new VMware virtual machine.


Choose the BIG-IP VM we have just installed and activated.


Make sure to set a minimum of 3 adapters and to allow GNS3 to use any of the VM interfaces.


Topology

Now we can build our topology using BIG-IP VM and the Docker images installed.

Below is an example topology into which we import the F5 VM and add some containers.

Internal:

– 3 nginx containers

– 1 Openvswitch

Management:

– GUI browser webterm container

– 1 Openvswitch

– Cloud mapped to host interface tap0

External:

– Apache Benchmark (ab) container

– GUI browser webterm container


Notice that BIG-IP VM interface e0, the one previously bridged to the host network, is now connected to a browser container for management.

I attached the host interface “tap0” to the management switch because, for some reason, ARP doesn’t work on that segment without it.
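If tap0 does not already exist on the Debian host, it can be created with iproute2 before mapping it to the cloud node (a minimal sketch; the interface name matches the cloud mapping above):

# create a persistent tap interface owned by your user and bring it up
sudo ip tuntap add dev tap0 mode tap user $USER
sudo ip link set tap0 up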

 

Address configuration:

– Assign each nginx container an IP in a subnet of your choice (ex: 192.168.10.0/24):

192.168.10.1/24 for ajnouri/nginx-1

In the same manner:

192.168.10.2/24 for ajnouri/nginx-2

192.168.10.3/24 for ajnouri/nginx-3
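GNS3 lets you set a container's addressing persistently by editing its network configuration (right-click the node > Edit config), which exposes an interfaces-style file. A minimal sketch for ajnouri/nginx-1, assuming eth0 is the interface facing the internal switch:

auto eth0
iface eth0 inet static
    address 192.168.10.1
    netmask 255.255.255.0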

 

On all three nginx containers, start the nginx and php servers:

service php5-fpm start

service nginx start

 

– Assign an IP to the management browser container in the same subnet as the BIG-IP management IP:

192.168.0.222/24 for gns3/webterm-2

– Assign addresses, default gateway and DNS server to the ab and webterm-1 containers.

selection_356

And make sure both client devices resolve the ajnouri.local host to the BIG-IP address 192.168.20.254:

echo "192.168.20.254   ajnouri.local" >> /etc/hosts

– The Openvswitch containers don’t need to be configured; each acts like a single VLAN.

– Start the topology

 

4. Setting F5 Big-IP interfaces

 

To manage the load balancer from webterm-2, open a console to the container; this opens a Firefox running inside the container.


Browse to the VM management IP https://192.168.0.151, add an exception for the certificate, and log in with the F5 BIG-IP default credentials admin/admin.


Go through the initial configuration steps

– You will have to set the hostname (ex: ajnouri.local) and change the root and admin account passwords.


You will be logged out for the password changes to take effect; log back in.

– For the purpose of this lab, no redundancy and no high availability.

– Now you will have to configure the internal (real servers) and external (client side) VLANs with their associated interfaces and self IPs.

(Self IPs are the equivalent of VLAN interface IPs in Cisco switching.)

Internal VLAN (connected to web servers):


External VLAN (facing clients):

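For reference, a rough tmsh equivalent of these GUI steps (a sketch: the interface numbers 1.1/1.2 and the self IP addresses below are assumptions based on this lab's addressing; adjust to yours):

create net vlan internal interfaces add { 1.1 { untagged } }
create net vlan external interfaces add { 1.2 { untagged } }
create net self self-internal address 192.168.10.254/24 vlan internal allow-service default
create net self self-external address 192.168.20.1/24 vlan external allow-service default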

5. Connectivity check between devices

 

Now make sure you have successful connectivity from each container to the corresponding Big-IP interface.

Ex: from the ab container toward the BIG-IP external interface.

Ex: from the nginx-1 server container toward the internal interface.
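A minimal check from the container consoles, assuming the self IPs sketched in the previous section:

# from the ab or webterm-1 container, toward the external self IP (assumed 192.168.20.1)
ping -c 3 192.168.20.1
# from a nginx container, toward the internal self IP (assumed 192.168.10.254)
ping -c 3 192.168.10.254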

The interface connected to your host network will get its IP parameters (address, gateway and DNS) from your DHCP server.

 

6. Load balancing configuration

 

Back to the browser on webterm-2.

For BIG-IP to load balance http requests from client to the servers, we need to configure:

  • Virtual server: the single entity visible to clients
  • Pool: associated with the virtual server; contains the list of real web servers to balance between
  • Algorithm used to load balance between the members of the pool

– Create a pool of web servers, “Pool0”, with “Round Robin” as the algorithm and an http monitor for the members.


– Associate the virtual server “VServer0” with the pool “Pool0”.

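For reference, a rough tmsh equivalent of these two GUI steps (a sketch; member IPs per the addressing section, virtual address per the ajnouri.local hosts entry above):

create ltm pool Pool0 load-balancing-mode round-robin monitor http members add { 192.168.10.1:80 192.168.10.2:80 192.168.10.3:80 }
create ltm virtual VServer0 destination 192.168.20.254:80 ip-protocol tcp profiles add { http } pool Pool0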

Check the network map to see that everything is configured correctly and that monitoring shows everything OK (green).

From the client container webterm-1, you can start a Firefox browser (console to the container) and test the server name “ajnouri.local”.

If everything is OK, you’ll see the php page showing the real server IP used, the client IP and the DNS name used by the client.

Every time you refresh the page, you’ll see a different server IP used.
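The rotation can also be checked from a shell on the client (assuming curl is available in the container and that test.php echoes the real server IP):

# issue a handful of requests and watch the reported server address change
for i in $(seq 1 6); do curl -s http://ajnouri.local/test.php; done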

 

7. Performance testing

 

With the Apache Benchmark container ajnouri/ab, we can generate client requests to the load balancer virtual server by its hostname (ajnouri.local).

Let’s open an aux console to the ajnouri/ab container and generate 50,000 requests with 200 concurrent connections to the URL ajnouri.local/test.php:

ab -n 50000 -c 200 ajnouri.local/test.php


 

8. Monitoring load balancing

 

Monitoring the load balancer performance shows a peak of connections corresponding to the Apache Benchmark generated requests.



In the upcoming part 2, the 3 web server containers are replaced with a single container in which we can spawn as many servers as we want (Docker-in-Docker), and we test a custom Python client script container that generates HTTP traffic from spoofed IP addresses, as opposed to the Apache Benchmark container, which generates traffic from a single source IP.


IOS server load balancing with mininet server farm


The idea is to play with the IOS load balancing mechanism using a large number of “real” servers (50), and to observe the difference in behavior between load balancing algorithms.

Due to resource scarcity in the lab environment, I use mininet to emulate “real” servers.

I will stick to the general definition for load balancing:

A load balancer is a device that acts as a reverse proxy and distributes network or application traffic across a number of servers. Load balancers are used to increase capacity (concurrent users) and reliability of applications.

The publicly announced IP address of the server is called the Virtual IP (VIP). Behind the scenes, the services are provided not by a single server but by a cluster of “real” servers, whose real IPs (RIPs) are hidden from the outside world.

The load balancer, IOS SLB in our case, distributes user connections sent to the VIP to the real servers according to the load balancing algorithm.

Figure 1: Generic load balancing

 

Figure 2: High-level design network topology

 

Load balancing algorithms:

The tested load balancing algorithms are:

  • Weighted Round Robin (with equal weights for all real servers): new connections to the virtual server are directed to the real servers equally, in a circular fashion (default weight = 8 for all servers).
  • Weighted Round Robin (with unequal weights): new connections to the virtual server are directed to the real servers proportionally to their weights.
  • Weighted Least Connections: new connections to the virtual server are directed to the real server with the fewest active connections.

 

Session redirection modes:

  • Dispatched: the virtual IP is configured on ALL real servers as a loopback or secondary address. Real servers are layer-2 adjacent to the SLB, which redirects traffic to them at the MAC layer.
  • Directed: the VIP can be unknown to the real servers. No FTP/firewall support. Supports server NAT for ESP/GRE virtual servers. Uses NAT to translate the VIP to the RIP.
  • Server NAT: the VIP is translated to the RIP and vice versa. Real servers are not required to be directly connected.
  • Client NAT: used with multiple SLBs. Replaces the client IP with one of the SLB’s IPs to guarantee that returning traffic is handled by the same SLB.
  • Static NAT: static NAT is used for traffic from real servers responding to clients. Real servers (ex: on the same Ethernet) use their own IPs.

The lab deploys Directed session redirection with server NAT.

 

IOS SLB configuration:

The configuration of load balancing in Cisco IOS is pretty straightforward:

  • Server farm (required): ip slb serverfarm <serverfarm-name>
  • Load-balancing algorithm (optional): predictor [roundrobin | leastconns]
  • Real server (required): real <ip-address>
  • Enabling the real server for service (required): inservice
  • Virtual server (required): ip slb vserver <virtserver-name>
  • Associating the virtual server with a server farm (required): serverfarm <serverfarm-name>
  • Virtual server attributes (required), specifying the virtual server IP address, type of connection, port number, and optional service coupling: virtual <ip-address> {tcp | udp} <port-number> [service <service-name>]
  • Enabling the virtual server for service (required): inservice
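Putting the commands together, a minimal skeleton (names and addresses are placeholders, not this lab's final config, which appears further below):

ip slb serverfarm FARM1
 predictor roundrobin
 real 10.0.0.1
  inservice
!
ip slb vserver VS1
 serverfarm FARM1
 virtual 66.66.66.66 udp 5555
 inservice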

 

GNS3 lab topology

The lab runs on GNS3, with a mininet VM and the host generating client traffic.

Figure 3: GNS3 topology

 

Building mininet VM server farm

mininet VM preparation:

  • Bridge and attach the guest mininet VM interface to the SLB device.
  • Bring up the VM interface without configuring any IP address on it (see the sketch below).
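Inside the mininet VM, that amounts to something like this (assuming the farm-facing interface is eth1, as in the script below):

# bring the interface up, leaving it unaddressed
sudo ip link set eth1 up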

Routing:

Because I am generating user traffic from the host machine, I need to address the tap interface and configure static routes pointing to the GNS3 subnet and the VIP:

sudo ip addr add 192.168.10.121/24 dev tap2
sudo ip route add 192.168.20.0/24 via 192.168.10.201
sudo ip route add 66.66.66.66/32 via 192.168.10.201

mininet python API script:

The script builds the mininet hosts, sets their default gateway to the GNS3 IOS SLB device IP, and starts a UDP server on port 5555 on each host using the netcat utility:

ip route add default via 10.0.0.254
nc -lu 5555 &

Here is the python mininet API script:

https://github.com/AJNOURI/Software-Defined-Networking/blob/master/Mininet-Scripts/mininet-dc.py

#!/usr/bin/python

import re
from mininet.net import Mininet
from mininet.node import Controller
from mininet.cli import CLI
from mininet.link import Intf
from mininet.log import setLogLevel, info, error
from mininet.util import quietRun

def checkIntf( intf ):
    "Make sure intf exists and is not configured."
    if ( ' %s:' % intf ) not in quietRun( 'ip link show' ):
        error( 'Error:', intf, 'does not exist!\n' )
        exit( 1 )
    ips = re.findall( r'\d+\.\d+\.\d+\.\d+', quietRun( 'ifconfig ' + intf ) )
    if ips:
        error( 'Error:', intf, 'has an IP address and is probably in use!\n' )
        exit( 1 )

def myNetwork():

    net = Mininet( topo=None, build=False )

    info( '*** Adding controller\n' )
    net.addController( name='c0' )

    info( '*** Add switches\n' )
    s1 = net.addSwitch( 's1' )

    max_hosts = 50
    newIntf = 'eth1'

    host_list = {}

    info( '*** Add hosts\n' )
    for i in xrange( 1, max_hosts + 1 ):
        host_list[i] = net.addHost( 'h' + str( i ) )
        info( '*** Add links between ', host_list[i], ' and s1 \r' )
        net.addLink( host_list[i], s1 )

    info( '*** Checking the interface ', newIntf, '\n' )
    checkIntf( newIntf )

    switch = net.switches[ 0 ]
    info( '*** Adding', newIntf, 'to switch', switch.name, '\n' )
    brintf = Intf( newIntf, node=switch )

    info( '*** Starting network\n' )
    net.start()

    for i in xrange( 1, max_hosts + 1 ):
        info( '*** setting default gateway & udp server on ', host_list[i], '\r' )
        host_list[i].cmd( 'ip r a default via 10.0.0.254' )
        host_list[i].cmd( 'nc -lu 5555 &' )

    CLI( net )
    net.stop()

if __name__ == '__main__':
    setLogLevel( 'info' )
    myNetwork()

 

 

UDP traffic generation using scapy

I used scapy to emulate client connections from random IP addresses

Sticky connections:

Sticky connections are connections from the same client IP address or subnet that, for a given period of time, should be assigned to the same real server as before.

The sticky objects created to track client assignments are kept in the database for a period of time defined by the sticky timer.

If both conditions are met:

  • a connection for the same client already exists, and
  • the amount of time between the end of the previous connection from the client and the start of the new connection is within the sticky timer duration,

then the SLB assigns the client connection to the same real server.

Router(config-slb-vserver)# sticky duration [group group-id]

A FIFO queue is used to emulate sticky connections. The process is triggered randomly.

If the queue is not full, the randomly generated source IP address is pushed onto the queue; otherwise, an IP is pulled from the queue and used a second time as the source of the generated packet.

Figure 4: Random generation of sticky connections

 

https://github.com/AJNOURI/traffic-generator/blob/master/gen_udp_sticky.py

#! /usr/bin/env python

import random
from scapy.all import *
import time
import Queue

# (2014) AJ NOURI ajn.bin@gmail.com

dsthost = '66.66.66.66'

q = Queue.Queue(maxsize=5)

for i in xrange(1000):
    rint = random.randint(1, 10)
    if rint % 5 == 0:
        print '==> Random queue processing'
        if not q.full():
            ipin = ".".join(map(str, (random.randint(0, 255) for _ in range(4))))
            q.put(ipin)
            srchost = ipin
            print ipin, ' into the queue'
        else:
            ipout = q.get()
            srchost = ipout
            print ' *** This is sticky src IP', ipout
    else:
        srchost = ".".join(map(str, (random.randint(0, 255) for _ in range(4))))
        print 'one time src IP', srchost
    #srchost = scapy.RandIP()
    p = IP(src=srchost, dst=dsthost) / UDP(dport=5555)
    print 'src= ', srchost, 'dst= ', dsthost
    send(p, iface='tap2')
    print 'sending packet\n'
    time.sleep(1)

 

If the queue is not yet full, the randomly generated source IP is used for the packet and at the same time pushed onto the queue:

one time src IP 48.235.35.122
src=  48.235.35.122 dst=  66.66.66.66
.
Sent 1 packets. 

...

==> Random queue processing
40.147.224.72  into the queue
src=  40.147.224.72 dst=  66.66.66.66
.
Sent 1 packets.

Otherwise, a previously generated IP is pulled out from the queue and reused as the source IP:

==> Random queue processing
 *** This is sticky src IP 88.27.24.177
src=  88.27.24.177 dst=  66.66.66.66
.
Sent 1 packets.

Building Mininet server farm

ajn@ubuntu:~$ sudo python mininet-dc.py
[sudo] password for ajn:
Sorry, try again.
[sudo] password for ajn:
*** Adding controller
*** Add switches
*** Add hosts
*** Checking the interface eth1 1
*** Adding eth1 to switch s1
*** Starting network
*** Configuring hosts
h1 h2 h3 h4 h5 h6 h7 h8 h9 h10 h11 h12 h13 h14 h15 h16 h17 h18 h19 h20 h21
 h22 h23 h24 h25 h26 h27 h28 h29 h30 h31 h32 h33 h34 h35 h36 h37 h38 h39
 h40 h41 h42 h43 h44 h45 h46 h47 h48 h49 h50
*** Starting controller
*** Starting 1 switches
s1
*** Starting CLI:lt gateway & udp server on h50
mininet>

 

Weighted Round Robin (with equal weights):

 

IOS router configuration
ip slb serverfarm MININETFARM
 nat server
 real 10.0.0.1
 inservice
 real 10.0.0.2
 inservice
 real 10.0.0.3
 inservice
…
 real 10.0.0.50
 inservice
!
ip slb vserver VSRVNAME
 virtual 66.66.66.66 udp 5555
 serverfarm MININETFARM
 sticky 5
 idle 300
 inservice

 

Starting traffic generator
ajn:~/coding/python/scapy$ sudo python udpqueue.py
one time src IP 142.124.66.30
src= 142.124.66.30 dst= 66.66.66.66
.
Sent 1 packets.

sending packet
one time src IP 11.125.212.0
src= 11.125.212.0 dst= 66.66.66.66
.
Sent 1 packets.

sending packet
one time src IP 148.97.164.124
src= 148.97.164.124 dst= 66.66.66.66
.
Sent 1 packets.

sending packet
one time src IP 101.234.155.254
src= 101.234.155.254 dst= 66.66.66.66
.
Sent 1 packets.

sending packet
==> Random queue processing
78.19.5.190 into the queue
src= 78.19.5.190 dst= 66.66.66.66
.
Sent 1 packets.

...

The router has already started associating incoming UDP connections to real servers according to the LB algorithm.

Router IOS SLB
SLB#sh ip slb stick 

client netmask group real conns
-----------------------------------------------------------------------
43.149.57.102 255.255.255.255 4097 10.0.0.3 1
78.159.83.228 255.255.255.255 4097 10.0.0.3 1
160.130.143.14 255.255.255.255 4097 10.0.0.3 1
188.26.251.226 255.255.255.255 4097 10.0.0.3 1
166.43.203.95 255.255.255.255 4097 10.0.0.3 1
201.49.188.108 255.255.255.255 4097 10.0.0.3 1
230.46.94.201 255.255.255.255 4097 10.0.0.4 1
122.139.198.227 255.255.255.255 4097 10.0.0.3 1
219.210.19.107 255.255.255.255 4097 10.0.0.4 1
155.53.69.23 255.255.255.255 4097 10.0.0.3 1
196.166.41.76 255.255.255.255 4097 10.0.0.4 1
…
Result: (accelerated video)

Weighted Round Robin (with unequal weights):

Let’s suppose we need to assign a weight of 16, twice the default weight, to every fifth server: 1, 5, 10, 15, …

 

IOS router configuration
ip slb serverfarm MININETFARM
 nat server
 real 10.0.0.1
 weight 16
 inservice
 real 10.0.0.2
 inservice
 real 10.0.0.3
 inservice
 real 10.0.0.4
 inservice
 real 10.0.0.5
 weight 16
…
Result: (accelerated video)

Least connections:

 

IOS router configuration

ip slb serverfarm MININETFARM
 nat server
 predictor leastconns
 real 10.0.0.1
 weight 16
 inservice
 real 10.0.0.2
 inservice
 real 10.0.0.3
…
Result: (accelerated video)

 

Stopping Mininet Server farm
mininet> exit
*** Stopping 1 switches
s1 ..................................................
*** Stopping 50 hosts
h1 h2 h3 h4 h5 h6 h7 h8 h9 h10 h11 h12 h13 h14 h15 h16 h17 h18 h19 h20
 h21 h22 h23 h24 h25 h26 h27 h28 h29 h30 h31 h32 h33 h34 h35 h36 h37
 h38 h39 h40 h41 h42 h43 h44 h45 h46 h47 h48 h49 h50
*** Stopping 1 controllers
c0
*** Done
ajn@ubuntu:~$

References
http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/slb/configuration/15-s/slb-15-s-book.html
