BGP best path selection

The complexity and the efficiency of BGP reside in the concept of route “attributes” and the way the protocol juggles them to determine the best path.

This is a quick guide (a refresh of an old article), still very relevant for those dealing with BGP design.
I hope the following Cisco BGP best path selection diagram will be of help:



Download the pdf version here

ELK series: Monitoring MySQL database with ELK stack

In an effort to diversify the blog content, I am introducing a new series about technologies other than Cisco that make the life of a network engineer easier.
These technologies include, but are not limited to, Juniper, log analysis with the ELK stack, Docker swarm, Kubernetes, Rancher, DevOps, public clouds (AWS, GCP…), Linux, Python programming, etc.


In this post we will see how to deploy the ELK stack using docker-compose to analyse a MySQL container database running under Rancher.

If you find it overwhelming to combine all these technologies in a single lab, don’t be afraid and try them individually in small steps.



In this lab we will be using many technologies, each of which could fill a whole tutorial. Nevertheless, basic prior knowledge of the topics below will help you take advantage of this post:

    • Basic knowledge of Docker: running containers, docker-compose, swarm clustering and service deployment.
    • Basic knowledge of MySQL and MySQL queries.
    • Basic knowledge of the ELK stack (Elasticsearch, Logstash and Kibana).


Note: This lab uses temporary cloud resources, so all mentioned public IPs are not available anymore.



  • Rancher
    • Starting Rancher
    • Adding cluster nodes
  • MySQL db
    • Starting mysql service
    • Loading a fake mysql database
  • ELK stack
    • Running ELK stack containers
    • Configuring the pipeline on logstash
    • Analysing data with Kibana


Let’s start by defining what the ELK stack is:

  • Elasticsearch: a search and analytics engine.
  • Logstash: a data processing pipeline (input -> filter -> output) that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite search and analytics engine.
  • Kibana: lets you visualize Elasticsearch data.
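
If you already have a stack running, a quick way to see these pieces working together is to query Elasticsearch's REST API directly (a minimal sketch, assuming the default 9200 port mapping used later in this post):

# Cluster info: confirms Elasticsearch is up and which version it runs
curl -s http://localhost:9200/
# List the indices created so far (Logstash will add one per configured index)
curl -s 'http://localhost:9200/_cat/indices?v'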


Starting Rancher

Rancher is a Docker cluster orchestrator. With Rancher it is easier to manage cluster nodes, service deployment, upgrades and scheduling.

All you have to do is start the Rancher server container. That’s it!

docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:stable

As soon as you are logged in, make sure to set a login and a password:


Adding cluster nodes

Rancher supports many cloud provider drivers. In our case, we will choose “Custom” deployment because I already have two Scaleway machines.

So all we have to do to add a new node is add any labels we may need (for future scheduling) and copy the generated command, which is nothing but the Rancher agent container that needs to be run on the node:


SSH to each node using your private key and paste the command to make the node join the cluster:

root@scw-e9ba84:~# sudo docker run -e CATTLE_HOST_LABELS='provider=scaleway&type=test' --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.9
Unable to find image 'rancher/agent:v1.2.9' locally
v1.2.9: Pulling from rancher/agent
b3e1c725a85f: Pull complete 
6a710864a9fc: Pull complete 
d0ac3b234321: Pull complete 
87f567b5cf58: Pull complete 
063e24b217c4: Pull complete 
d0a3f58caef0: Pull complete 
16914729cfd3: Pull complete 
dc5c21984c5b: Pull complete 
d7e8f9784b20: Pull complete 
Digest: sha256:c21255ac4d94ffbc7b523f870f2aea5189b68fa3d642800adb4774aab4748e66
Status: Downloaded newer image for rancher/agent:v1.2.9

INFO: Running Agent Registration Process, CATTLE_URL=
INFO: Attempting to connect to:
INFO: is accessible
INFO: Inspecting host capabilities
INFO: Boot2Docker: false
INFO: Host writable: true
INFO: Token: xxxxxxxx
INFO: Running registration
INFO: Printing Environment
INFO: ENV: CATTLE_HOME=/var/lib/cattle
INFO: ENV: CATTLE_HOST_LABELS=provider=scaleway&type=test
INFO: ENV: RANCHER_AGENT_IMAGE=rancher/agent:v1.2.9
INFO: Launched Rancher Agent: fa0ca3d03e7946b57a0c5b1b7f9042553e2d7d5ef47aba369822fae5eb28d751

Now you can see both nodes in Rancher:


Starting mysql service

Now that the nodes have joined the cluster, create a stack and configure the mysql service.






Wait a couple of seconds until the service is UP.


The service will be deployed on one of the nodes:


Check the mysql service availability (<mysql-host-ip> stands for the mysql service address):

nc -zv <mysql-host-ip> 3306
Connection to <mysql-host-ip> 3306 port [tcp/mysql] succeeded!

Loading a fake mysql database

Using the following script, fill the mysql server with the fake "employees" database (again, <mysql-host-ip> stands for the mysql service address):

mysql -h <mysql-host-ip> -u root -p"passwd" -Bse "create database employees;"
mysql -h <mysql-host-ip> -u root -p"passwd" -Bse "show databases;"
mysql -h <mysql-host-ip> -u root -p"passwd" "employees" < "employees.sql"
$ ./
ERROR 1007 (HY000) at line 1: Can't create database 'employees'; database exists
storage engine: InnoDB
LOADING departments
LOADING employees
LOADING dept_emp
LOADING dept_manager
LOADING titles
LOADING salaries
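
To make sure the import went well, you can count the rows of one of the loaded tables (a quick sanity check; <mysql-host-ip> is the same placeholder as above):

mysql -h <mysql-host-ip> -u root -p"passwd" -Bse "select count(*) from employees.employees;"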

Running ELK stack containers

In a new directory, create "data", "config-dir" and "plugin" sub-directories, then create the following docker-compose.yml file:

$ mkdir data && mkdir config-dir && mkdir plugin
$ cat docker-compose.yml 
version: '3'
services:
  elasticsearch:
    image: elasticsearch
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - ./data:/usr/share/elasticsearch/data
  mylogstash:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./config-dir:/mylogstash/config-dir:rw
      - ./data:/mylogstash/data:rw
      - ./plugin:/mylogstash/plugin:rw
    ports:
      - "5000:5000"
    tty: true
    command: ["bash"]
    links:
      - elasticsearch
  kibana:
    image: kibana
    ports:
      - "5601:5601"
    links:
      - elasticsearch

Run docker-compose:

$ docker-compose up -d
Creating network "elkmysql_default" with the default driver
Creating elkmysql_elasticsearch_1 ... done
Creating elkmysql_mylogstash_1 ... done
Creating elkmysql_kibana_1 ... done

Check the running containers:

$ docker-compose ps
          Name                Command       State    Ports
elkmysql_elasticsearch_1   / elas ...   Up     >9200/tcp, >9300/tcp
elkmysql_kibana_1          / kibana     Up     >5601/tcp
elkmysql_mylogstash_1      / bash       Up     >5000/tcp
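
Before going further, it is worth checking that Elasticsearch answers on its published port (a minimal check from the docker-compose host):

# Should return a small JSON document with the node name and version
curl -s http://localhost:9200/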

Configuring the pipeline on Logstash

Connect to logstash container:

$ docker exec -ti elkmysql_mylogstash_1 bash

Download the plugin file “mysql-connector-java-5.1.23-bin.jar” into the “plugin” directory:

root@00e9048cbdec:/# cd mylogstash/plugin/
root@00e9048cbdec:/mylogstash/plugin# wget
--2018-04-28 14:49:40--
Resolving (
Connecting to (||:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 806066 (787K) [application/zip]
Saving to: ‘’

mysql-connector-java-5.1.23-bin.j 100%[============================================================>] 787.17K 452KB/s in 1.7s

2018-04-28 14:49:42 (452 KB/s) - ‘’ saved [806066/806066]

root@00e9048cbdec:/mylogstash/plugin# ls
root@00e9048cbdec:/mylogstash/plugin# unzip 
 inflating: mysql-connector-java-5.1.23-bin.jar 
root@00e9048cbdec:/mylogstash/plugin# rm *.zip

Create a Logstash pipeline configuration file (ex: mysql.conf) inside “config-dir”; <mysql-host-ip> again stands for the mysql service address:

root@00e9048cbdec:/# cat mylogstash/config-dir/mysql.conf
input {
  jdbc {
    jdbc_driver_library => "/mylogstash/plugin/mysql-connector-java-5.1.23-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://<mysql-host-ip>:3306/employees"
    jdbc_user => "root"
    jdbc_password => "passwd"
    statement => "SELECT * FROM employees"
    schedule => "* * * * *"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
  }
}
filter {}
output {
  elasticsearch {
    hosts => [ "elasticsearch:9200" ]
    "document_type" => "data"
    "index" => "fakemysql"
    http_compression => true
  }
}

For the sake of simplicity, no filter is configured, but in production, to optimize the data analysis and queries, you’ll need to structure the data using filters.

Start Logstash with the configuration file “mysql.conf”:

root@00e9048cbdec:/# logstash -f /mylogstash/config-dir/mysql.conf --config.reload.automatic


After a couple of seconds, you’ll start to see Logstash fetching the database data in chunks of 50,000 (the configured jdbc_page_size), as shown by the LIMIT/OFFSET queries:

 INFO logstash.inputs.jdbc - (0.006000s) SELECT version()
14:53:00.982 [Ruby-0-Thread-18: /usr/share/logstash/vendor/bundle/jruby/1.9/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:283] INFO logstash.inputs.jdbc - (0.178000s) SELECT count(*) AS `count` FROM (SELECT * FROM employees) AS `t1` LIMIT 1
14:53:01.124 [Ruby-0-Thread-18: /usr/share/logstash/vendor/bundle/jruby/1.9/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:283] INFO logstash.inputs.jdbc - (0.140000s) SELECT * FROM (SELECT * FROM employees) AS `t1` LIMIT 50000 OFFSET 0
14:53:31.619 [Ruby-0-Thread-18: /usr/share/logstash/vendor/bundle/jruby/1.9/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:283] INFO logstash.inputs.jdbc - (0.165000s) SELECT * FROM (SELECT * FROM employees) AS `t1` LIMIT 50000 OFFSET 50000
14:53:57.572 [Ruby-0-Thread-18: /usr/share/logstash/vendor/bundle/jruby/1.9/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:283] INFO logstash.inputs.jdbc - (0.215000s) SELECT * FROM (SELECT * FROM employees) AS `t1` LIMIT 50000 OFFSET 100000
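
You can follow the indexing progress from the host by counting documents in the "fakemysql" index (assuming the 9200 port mapping from the docker-compose file):

curl -s 'http://localhost:9200/fakemysql/_count?pretty'
# the "count" value grows at each scheduled run until the whole table is indexed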

Analysing data with Kibana

Browse the Kibana web interface on port 5601 (the port mapped in the docker-compose file).


You’re prompted to indicate the index pattern of the data you’re looking for.

Remember the Logstash configuration: “index” => “fakemysql”. Enter the entire word or use a wildcard.

Kibana will start showing every field in the index:


Elasticsearch starts receiving the indexed documents, and Kibana shows them in the Discover section.


And after a couple of minutes:


You can already play with the data in the Visualize section:

For example, visualize gender percentages among employees:


After reading the entire database, we only need to fetch new information. In the Logstash jdbc plugin, replace the MySQL query

statement => "SELECT * FROM employees"

with

statement => "SELECT * FROM employees where Date > :sql_last_value order by Date"


More ELK stack labs to come …

Stay tuned!


Testing Docker networking with GNS3, Part 1: MacVLAN


MacVLAN allows you to connect containers in separate Docker networks to your VLAN infrastructure, so they act as if they were directly connected to your network.

From the main interface, the MacVLAN driver creates subinterfaces to handle 802.1q tags for each VLAN, and assigns them separate IP and MAC addresses.

Because the main interface (with its own MAC) has to accept traffic toward subinterfaces (with their own MACs), the MacVLAN network driver requires the Docker host interface to be in promiscuous mode.

Knowing that most cloud providers (AWS, Azure…) do not allow promiscuous mode, you’ll most likely be deploying MacVLAN on your own premises.

MacVLAN network characteristics:

  • Creates subinterfaces to process VLAN tags.
  • Assigns a different IP and MAC address to each subinterface.
  • Requires the main Docker host interface to function in promiscuous mode, to accept traffic not destined to the main interface MAC.
  • The admin needs to carefully assign ranges of IPs to VLANs on the different Docker nodes, in harmony with any DHCP range used by the existing VLANs.
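
To demystify what the driver does with VLAN tags, here is roughly the subinterface part done by hand with iproute2 (a sketch only; the macvlan driver creates and manages these subinterfaces for you, and ens33 is the parent interface used later in this lab):

# 802.1q subinterface of ens33 for VLAN 10, as the macvlan driver would create it
sudo ip link add link ens33 name ens33.10 type vlan id 10
sudo ip link set ens33.10 up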

Conceptual diagram: Docker node connected to the topology


Conceptual diagram: MACVLAN configured


Conceptual diagram: Logical functioning of MACVLAN


Purpose of this lab:

  • To test and get hands-on practice with Docker MACVLAN.
  • It is easy to deploy complex topologies in GNS3 using a myriad of virtual appliances. Building a topology is as easy as dragging devices and drawing the connections between them.


It is better to have some basic practical knowledge of docker containers.

1- GNS3 topology:


Devices used:

  • Two VMWare Virtual machines for docker nodes, imported into GNS3.
  • Two OpenvSwitch containers gns3/openvswitch. Import & Insert
  • Ansible container ajnouri/ansible used as an SSH client to manage Docker nodes. In another post I’ll be showing how to use it to provision package installation to any device (ex: Docker installation to VMWare nodes).
  • Cisco IOSv 15.6(2)T: router-on-a-stick used to route traffic from each VLAN to the outside world (PAT) and deploy communication policy between VLANs. Import & Insert


Actually, importing a container into GNS3 is very easy and intuitive; here is a video from David Bombal explaining the process.

  • Create two VMWare Ubuntu xenial LTS servers to be used as Docker nodes, with 1Gig RAM and 2 interfaces.

  • Install Docker, minimum version 1.12 (latest recommended).

Here is the script if you want to automate the deployment of Docker, for example from an ansible container, like shown in this GNS3 container series (managing GNS3 appliances with Ansible).

 ### Install GNS3
 sudo add-apt-repository ppa:gns3/ppa
 sudo apt-get update
 sudo apt-get install -y gns3-gui
 sudo apt-get install -y gns3-server

 ### Install Docker
 # Add official Docker repository GPG signature
 curl -fsSL | sudo apt-key add -
 # Add apt repository sources
 sudo add-apt-repository "deb [arch=amd64] $(lsb_release -cs) stable"
 sudo apt-get update

 # Install Docker CE
 sudo apt-get install -y docker-ce

 # Docker without sudo
 sudo groupadd docker
 sudo gpasswd -a $USER docker
 $ docker version
Client:
 Version:      17.03.1-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:14:09 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.1-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   c6d412e
 Built:        Mon Mar 27 17:14:09 2017
 OS/Arch:      linux/amd64
 Experimental: false

Docker node interfaces:

Main interface: e0/0 (Ubuntu: ens33), a trunk interface used to connect containers to your network VLANs

Management interface: e1/0 (ubuntu: ens38) connected to the common VLAN114

Interface configuration (/etc/network/interfaces):
# The primary network interface
auto ens33
iface ens33 inet manual
auto ens38
iface ens38 inet static
netmask 24
up echo "nameserver" > /etc/resolv.conf

# autoconfigured IPv6 interfaces
iface ens33 inet6 auto
iface ens38 inet6 auto

Promiscuous mode:

Without promiscuous mode, containers will not be able to communicate with hosts outside of the Docker node, because the main interface (connected to the VLAN network) will not accept traffic destined to other MAC addresses (those of the MacVLAN subinterfaces).

Promiscuous mode is configured in two steps:

Configuring Promiscuous mode on VMWare guest:

Add the below command to /etc/rc.local

ifconfig ens33 up promisc

Check for the letter “P” (promiscuous) in the interface flags:

netstat -i
Kernel Interface table
Iface      MTU Met   RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
docker0    1500 0         0      0      0 0             0      0      0      0 BMU
ens33      1500 0        25      0      0 0            32      0      0      0 BMPRU
ens38      1500 0        55      0      0 0            60      0      0      0 BMRU
ens33.10   1500 0         0      0      0 0             8      0      0      0 BMRU
ens33.20   1500 0         0      0      0 0             8      0      0      0 BMRU
ens33.30   1500 0         0      0      0 0             8      0      0      0 BMRU
lo        65536 0       160      0      0 0           160      0      0      0 LRU
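
An equivalent check with iproute2; the PROMISC flag shows up in the interface flag list:

ip link show ens33
# look for PROMISC inside the <...> flags, e.g. <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>
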
Authorizations for Promiscuous mode on VMWare host:

By default, VMWare interfaces will not function in promiscuous mode because a regular user will not have write access to /dev/vmnet* files.

So, create a special group, include the user running VMWare in the group, and give the group read/write access to the /dev/vmnet* files:

sudo groupadd promiscuous
sudo usermod -a -G promiscuous $USER
chgrp promiscuous /dev/vmnet*
chmod g+rw /dev/vmnet*

Or simply give read/write access to everyone:

chmod a+rw /dev/vmnet*

For a permanent change, put it in the /etc/init.d/vmware file as follows:

vmwareStartVmnet() {
 vmwareLoadModule $vnet
 "$BINDIR"/vmware-networks --start >> $VNETLIB_LOG 2>&1
 chmod a+rw /dev/vmnet*
}

GNS3 VLAN topology:

For each Docker node, connect the first interface to OpenVswitch1 trunk interface and the second interface to a VLAN interface 114.
VLAN 114 is a common VLAN used to reach and manage all other devices.

GNS3 integrates Docker, so you can use containers as simple endhost devices (independently of docker network drivers):

  • gns3/openvswitch container: Simple L2 switch
  • gns3/webterm container: GUI Firefox browser (no need for entire VM for that)
  • ajnouri/ansible container: the management endhost used to access Docker nodes through SSH. In a subsequent lab, I’ll be showing how to manage GNS3 devices from this Ansible container.

Docker MACVLAN networking allows you to connect your containers to existing network VLANs seamlessly, as if they were directly connected to your VLAN infrastructure.

The network deploys three isolated VLANs (IDs 10, 20 and 30) plus VLAN 114, which is able to communicate with all three VLANs through a router-on-a-stick (Cisco IOSv 15.6T).

MacVLAN generates subinterfaces (parent.vlan-id, e.g. ens33.10) to process (tag/untag) traffic.

The parent (main) interface will act as a trunk interface carrying all vlans from “children” interfaces, so the network switch interface linked to it should be a trunk port.

OpenVswitch1 ports:

First, let’s clean the configuration and then reintroduce trunk and vlan ports:

for br in `ovs-vsctl list-br`; do ovs-vsctl del-br ${br}; done
ovs-vsctl add-br br0

#Trunk ports:
ovs-vsctl add-port br0 eth1
ovs-vsctl add-port br0 eth2
ovs-vsctl add-port br0 eth6

#vlan ports:
ovs-vsctl add-port br0 eth3 tag=114  # eth3 assumed: a port can only be added once, and eth2 is already a trunk port above
ovs-vsctl add-port br0 eth4 tag=114
ovs-vsctl add-port br0 eth7 tag=114
ovs-vsctl show
Bridge "br0"
Port "eth1"
Interface "eth1"

In OVS, untagged ports act as trunks.
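
To double-check a port's VLAN assignment, you can also query the OVS database directly (eth4 is one of the VLAN 114 ports added above):

ovs-vsctl list port eth4 | grep -w tag
# tag : 114 means an access port on VLAN 114; an empty tag ([]) means the port trunks all VLANs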

OpenVswitch2 ports:

Openvswitch 2 connects the two management endhosts, Ansible container and Firefox browser container.

for br in `ovs-vsctl list-br`; do ovs-vsctl del-br ${br}; done
ovs-vsctl add-br br0

#Trunk ports:
ovs-vsctl add-port br0 eth7

#vlan ports:
ovs-vsctl add-port br0 eth0 tag=114
ovs-vsctl add-port br0 eth1 tag=114

For more information on how to configure advanced switching features with ovs, please refer to my gns3 blog post on gns3 community.

Cisco router-on-a-stick configuration:

This router is used to allow inter-VLAN communication between VLAN 114 and all other VLANs, deny communication between VLANs 10, 20 and 30, and connect the entire topology to the Internet using PAT (Port Address Translation ~ Linux MASQUERADING).

ROAS#sh run
Building configuration...

Current configuration : 6093 bytes
version 15.6
service timestamps debug datetime msec
service timestamps log datetime msec
no service password-encryption
hostname ROAS
logging buffered 1000000
no aaa new-model
ethernet lmi ce
mmi polling-interval 60
no mmi auto-configure
no mmi pvc
mmi snmp-timeout 180
no ip domain lookup
ip cef
no ipv6 cef
multilink bundle-name authenticated
crypto pki trustpoint TP-self-signed-4294967295
enrollment selfsigned
subject-name cn=IOS-Self-Signed-Certificate-4294967295
revocation-check none
rsakeypair TP-self-signed-4294967295
crypto pki certificate chain TP-self-signed-4294967295
certificate self-signed 01
3082022B 30820194 A0030201 02020101 300D0609 2A864886 F70D0101 05050030
31312F30 2D060355 04031326 494F532D 53656C66 2D536967 6E65642D 43657274
69666963 6174652D 34323934 39363732 3935301E 170D3137 30363138 31313034
31345A17 0D323030 31303130 30303030 305A3031 312F302D 06035504 03132649
4F532D53 656C662D 5369676E 65642D43 65727469 66696361 74652D34 32393439
36373239 3530819F 300D0609 2A864886 F70D0101 01050003 818D0030 81890281
8100B306 1D16E9A7 67E556AD A2A5DEF2 4914C183 5C6B5C7B 9A37CE29 A53F61BB
6FED6E2C 3E4E8E67 355560A7 818590CC 4410B87B 72126999 465A45D4 4627F5DC
185E545B 492840DA A8DB88B3 AC8DBE34 D3109B8D AD4A5522 6C7325E6 405DE12B
91B30192 64AC93BB 618FADB8 2F6F94E0 779B80FF 5002DEA0 1AD6F6D0 5C289790
95590203 010001A3 53305130 0F060355 1D130101 FF040530 030101FF 301F0603
551D2304 18301680 14BF7E97 AE5F2D93 86F08CF4 ED9C8FF0 E92C5D8E D3301D06
03551D0E 04160414 BF7E97AE 5F2D9386 F08CF4ED 9C8FF0E9 2C5D8ED3 300D0609
2A864886 F70D0101 05050003 818100A3 76F489B3 BF33FA87 8E4DD1B5 85913A54
428FB7F2 1D1FDF3E 6D18E3B3 CE0F9400 C574B89C A2D7E89E 7F13AA3F BB4F9B19
10490BF7 4F7C0B3C 70516F75 5C26078F 6A4A14A3 370B63EC 76376758 1B614B98
B4A4FF1D 1B4F7C88 60BFAF98 AF822BB5 DCF6FA16 A31DAD0D 89F53E60 24305110
64839C15 1865D92A D8153B73 8FB8C1
interface GigabitEthernet0/0
no ip address
duplex full
speed auto
media-type rj45
interface GigabitEthernet0/0.10
encapsulation dot1Q 10
ip address
ip access-group 101 in
ip access-group 101 out
ip nat inside
ip virtual-reassembly in
interface GigabitEthernet0/0.20
encapsulation dot1Q 20
ip address
ip access-group 101 in
ip access-group 101 out
ip nat inside
ip virtual-reassembly in
interface GigabitEthernet0/0.30
encapsulation dot1Q 30
ip address
ip access-group 101 in
ip access-group 101 out
ip nat inside
ip virtual-reassembly in
interface GigabitEthernet0/0.114
encapsulation dot1Q 114
ip address
ip nat inside
ip virtual-reassembly in
interface GigabitEthernet0/1
ip address
duplex full
speed auto
media-type rj45
interface GigabitEthernet0/2
no ip address
duplex auto
speed auto
media-type rj45
interface GigabitEthernet0/3
ip address dhcp
ip nat outside
ip virtual-reassembly in
duplex full
speed auto
media-type rj45
ip forward-protocol nd
ip http server
ip http authentication local
ip http secure-server
ip nat inside source list 100 interface GigabitEthernet0/3 overload
ip ssh rsa keypair-name
ip ssh version 2
access-list 100 permit ip any
access-list 100 permit ip any
access-list 100 permit ip any
access-list 100 permit ip any
access-list 101 deny ip
access-list 101 deny ip
access-list 101 deny ip
access-list 101 deny ip
access-list 101 deny ip
access-list 101 permit ip any
access-list 101 permit ip any
access-list 101 permit ip any any
banner exec ^C
* IOSv is strictly limited to use for evaluation, demonstration and IOS *
* education. IOSv is provided as-is and is not supported by Cisco's *
* Technical Advisory Center. Any use or disclosure, in whole or in part, *
* of the IOSv Software or Documentation to any third party for any *
* purposes is expressly prohibited except as otherwise authorized by *
* Cisco in writing. *
banner incoming ^C
* IOSv is strictly limited to use for evaluation, demonstration and IOS *
* education. IOSv is provided as-is and is not supported by Cisco's *
* Technical Advisory Center. Any use or disclosure, in whole or in part, *
* of the IOSv Software or Documentation to any third party for any *
* purposes is expressly prohibited except as otherwise authorized by *
* Cisco in writing. *
banner login ^C
* IOSv is strictly limited to use for evaluation, demonstration and IOS *
* education. IOSv is provided as-is and is not supported by Cisco's *
* Technical Advisory Center. Any use or disclosure, in whole or in part, *
* of the IOSv Software or Documentation to any third party for any *
* purposes is expressly prohibited except as otherwise authorized by *
* Cisco in writing. *
line con 0
line aux 0
line vty 0 4
privilege level 15
transport input telnet ssh
no scheduler allocate

Now you can scale your infrastructure by adding any number of new Docker nodes.

2- Configuring MACVLAN network on docker node1

1) Create MacVLAN networks

Create MacVLAN networks with the following parameters:

  • type=MacVLAN
  • subnet & ip range from which the container will get their IP parameters
  • Gateway of the VLAN in question
  • parent interface
  • MacVLAN network name
docker network create -d macvlan \
--subnet \
--ip-range= \
--gateway= \
-o parent=ens33.10 macvlan10

docker network create -d macvlan \
--subnet \
--ip-range= \
--gateway= \
-o parent=ens33.20 macvlan20

docker network create -d macvlan \
--subnet \
--ip-range= \
--gateway= \
-o parent=ens33.30 macvlan30

List the created subinterfaces with “ip a”.
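
For example, the VLAN 10 subinterface should show up with its 802.1q details (ens33 being the parent interface used in the commands above):

ip -d link show ens33.10
# the -d (details) flag prints a line like: vlan protocol 802.1Q id 10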

List the Docker networks with “docker network ls” and make sure the three macvlans are created:

docker network  ls
NETWORK ID          NAME                DRIVER              SCOPE
916165cd344c        bridge              bridge              local
686ebb8c5399        host                host                local
b3c9487a6cd0        macvlan10           macvlan             local
e1818c46a437        macvlan20           macvlan             local
52ce778548c3        macvlan30           macvlan             local
d97f45467edd        none                null                local

As an example, let’s inspect the Docker network macvlan10:

docker network inspect macvlan10
[
    {
        "Name": "macvlan10",
        "Id": "b3c9487a6cd09054f06e22cf04181473819236d06245710f3763489a326770d2",
        "Created": "2017-06-20T14:36:02.581834167+02:00",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "",
                    "IPRange": "",
                    "Gateway": ""
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {
            "parent": "ens33.10"
        },
        "Labels": {}
    }
]

Notice that no containers are attached to the network: "Containers": {}.

Let’s remedy that by running simple Apache containers from the custom image ajnouri/apache_ssl_container (you can use any other appropriate image with “bash/sh” running on the console) and connect them respectively to macvlan10, macvlan20 and macvlan30.

2) start and connect containers to MacVLAN networks

docker run --net=macvlan10 -dt --name c11 --restart=unless-stopped ajnouri/apache_ssl_container
docker run --net=macvlan20 -dt --name c12 --restart=unless-stopped ajnouri/apache_ssl_container
docker run --net=macvlan30 -dt --name c13 --restart=unless-stopped ajnouri/apache_ssl_container

The first time, Docker will download the image; afterwards, any container from that image is created instantly.

“docker run” command options:

  • --net=macvlan10 : the macvlan network name
  • -dt : run a console in the background
  • --name c11 : container name
  • --restart=unless-stopped : if the Docker host restarts, containers are started and connected to their networks again, unless they were intentionally stopped
  • ajnouri/apache_ssl_container : a custom image with Apache SSL installed & a small php script to display the session IP addresses
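
A handy one-liner to retrieve the address a container obtained on its macvlan network, using docker inspect's Go template syntax:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' c11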

List the running containers with “docker ps”:

$ docker ps
CONTAINER ID        IMAGE                          COMMAND                  CREATED             STATUS              PORTS               NAMES
1a2104d84519        ajnouri/apache_ssl_container   "/bin/sh -c 'servi..."   6 days ago          Up 14 minutes                           c13
691b468918ee        ajnouri/apache_ssl_container   "/bin/sh -c 'servi..."   6 days ago          Up 14 minutes                           c12
1e2bb1933d10        ajnouri/apache_ssl_container   "/bin/sh -c 'servi..."   6 days ago          Up 14 minutes                           c11

And inspect the containers attached to a macvlan network with “docker network inspect macvlan10”:

$ docker network inspect macvlan10
[
    {
        "Name": "macvlan10",
        "Id": "b3c9487a6cd09054f06e22cf04181473819236d06245710f3763489a326770d2",
        "Created": "2017-06-20T14:36:02.581834167+02:00",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "",
                    "IPRange": "",
                    "Gateway": ""
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "1e2bb1933d10f94f2aeb3e83deb4f141d393fc6cfbf09e415ebd1239e421b50f": {
                "Name": "c11",
                "EndpointID": "e2bc0ea1d3cf9806cf880e6cdb34e4914d84a75e1eabc90a8429f7f7668f82b7",
                "MacAddress": "02:42:0a:00:00:01",
                "IPv4Address": "",
                "IPv6Address": ""
            }
        },
        "Options": {
            "parent": "ens33.10"
        },
        "Labels": {}
    }
]
Container c11 is connected to macvlan10 and dynamically got an IP from that range.

3) check connectivity

Now let’s do some connectivity checks from container c11 (macvlan10) and see if it can reach its gateway (the router-on-a-stick) outside the Docker host.

ajn@ubuntu:~$ docker exec c11 ping -t3
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=255 time=1.07 ms
64 bytes from icmp_seq=2 ttl=255 time=1.50 ms
64 bytes from icmp_seq=3 ttl=255 time=1.27 ms
64 bytes from icmp_seq=4 ttl=255 time=1.40 ms
64 bytes from icmp_seq=5 ttl=255 time=1.29 ms

ajn@ubuntu:~$ docker exec c12 ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=255 time=2.22 ms
64 bytes from icmp_seq=2 ttl=255 time=1.34 ms
64 bytes from icmp_seq=3 ttl=255 time=1.23 ms
64 bytes from icmp_seq=4 ttl=255 time=1.41 ms

ajn@ubuntu:~$ docker exec c13 ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=255 time=1.76 ms
64 bytes from icmp_seq=2 ttl=255 time=1.36 ms
64 bytes from icmp_seq=3 ttl=255 time=1.42 ms
64 bytes from icmp_seq=4 ttl=255 time=1.51 ms
64 bytes from icmp_seq=5 ttl=255 time=1.39 ms


And they can even reach the Internet, thanks to the router-on-a-stick:

docker exec c11 ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=51 time=2.54 ms
64 bytes from icmp_seq=2 ttl=51 time=2.82 ms
64 bytes from icmp_seq=3 ttl=51 time=2.62 ms
64 bytes from icmp_seq=4 ttl=51 time=2.82 ms

And that’s not all: a container can even reach containers in other VLANs, if you allow it of course. The Cisco router is used for that purpose; you can play with access control lists to implement the policy you want.

3- Configuring MACVLAN network on docker node2

The same steps are applied to Docker node 2. Interfaces are connected in the same way as node 1: one interface to the common VLAN 114 and another trunk interface. Containers c21, c22 and c23 are created and connected respectively to MacVLAN networks macvlan10, macvlan20 and macvlan30 (the same as node 1).

Node 2 uses different IP ranges than those used for each node 1 VLAN:

VLAN subnets Node1 Node2
macvlan10 (
macvlan20 (
macvlan30 (

1) Create MacVLAN networks

docker network create -d macvlan \
--subnet \
--ip-range= \
--gateway= \
-o parent=ens33.10 macvlan10

docker network create -d macvlan \
--subnet \
--ip-range= \
--gateway= \
-o parent=ens33.20 macvlan20

docker network create -d macvlan \
--subnet \
--ip-range= \
--gateway= \
-o parent=ens33.30 macvlan30

2) start and connect containers to MacVLAN networks

docker run --net=macvlan10 -dt --name c21 --restart=unless-stopped ajnouri/apache_ssl_container
docker run --net=macvlan20 -dt --name c22 --restart=unless-stopped ajnouri/apache_ssl_container
docker run --net=macvlan30 -dt --name c23 --restart=unless-stopped ajnouri/apache_ssl_container

3) check connectivity

$ docker exec c22 ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=255 time=1.06 ms
64 bytes from icmp_seq=2 ttl=255 time=1.51 ms
64 bytes from icmp_seq=3 ttl=255 time=1.46 ms
$ docker exec c23 ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=255 time=2.11 ms
64 bytes from icmp_seq=2 ttl=255 time=1.42 ms
64 bytes from icmp_seq=3 ttl=255 time=1.56 ms

And according to the policy deployed on the router, inter-VLAN communication is not allowed:

$ docker exec c23 ping
PING ( 56(84) bytes of data.
From icmp_seq=1 Packet filtered
From icmp_seq=2 Packet filtered
From icmp_seq=3 Packet filtered

Now let’s check communication between c11 (node1: macvlan10) and c21 (node2: macvlan10):

docker exec c21 ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=1.45 ms
64 bytes from icmp_seq=2 ttl=64 time=0.879 ms
64 bytes from icmp_seq=3 ttl=64 time=0.941 ms


Containers connected to the same network VLAN (from different docker nodes) talk to each other.

And to finish, from the GUI browser container gns3/webterm, let’s access all containers on both nodes… Yes, in GNS3 you can run Firefox in a container; no need for an entire VM for that :P-)



Without promiscuous mode, you’ll notice that container traffic reaches the outside network, but not the other way around, as shown in the Wireshark captures below:

  • on the router side
  • behind MACVLAN



Deploying F5 BIG-IP LTM VE within GNS3 (part-1)

One of the advantages of deploying VMware (or VirtualBox) machines inside GNS3 is the rich networking environment available. No need to hassle with interface types: vmnet or private? Shared or ad-hoc?

In GNS3 it is as simple and intuitive as choosing a node interface and connecting it to any other node interface.

In this lab, we are testing a basic F5 BIG-IP LTM VE deployment within GNS3. The only virtual machine used in this lab is F5 BIG-IP; all other devices are Docker containers:

  • Nginx Docker containers for internal web servers.
  • ab (Apache Benchmark) Docker container for the client used for performance testing.
  • gns3/webterm containers used as Firefox browser for client testing and F5 web management.



  1. Docker image import
  2. F5 Big-IP VE installation and activation
  3. Building the topology
  4. Setting F5 Big-IP interfaces
  5. Connectivity check between devices
  6. Load balancing configuration
  7. Generating client http queries
  8. Monitoring Load balancing

Devices used:


  • Debian host GNU/Linux 8.5 (jessie)
  • GNS3 version 1.5.2 on Linux (64-bit)

System requirements:

  • F5 Big IP VE requires 2GB of RAM (recommended >= 8GB)
  • VT-x / AMD-V support



1.Docker image import

Create a new docker template in GNS3. Create new docker template: Edit > Preferences > Docker > Docker containers  and then “New”.

Choose the “New image” option and type the Docker image name in the format <account>/<repo> as provided in the “devices used” section, then choose a name (without the slash “/”).


By default GNS3 derives a name in the same format as the Docker registry (<account>/<repo>), which can cause an error in some earlier versions. In the latest GNS3 versions, the slash “/” is removed from the derived name.



Set the number of interfaces to eight and accept default parameters with “next” until “finish”.

– Repeat the same procedure for gns3/webterm


Choose a name for the image (without slash “/”)


Choose vnc as the console type to allow Firefox (GUI) browsing


And keep the remaining default parameters.

– Repeat the same procedure for the image ajnouri/nginx.

Create a new image with name ajnouri/nginx


Name it as you like


And keep the remaining default parameters.

2. F5 Big-IP VE installation and activation


– From the F5 site, import the BIG-IP VE ova file.


You’ll have to request a registration key for the trial version that you’ll receive by email.


Open the ova file in VMWare workstation.

– To register the trial version, bridge the first interface to your network connected to Internet.


– Start the VM and log in to the console with root/default


– Type “config” to access the text user interface.

– Choose “No” to configure a static IP address, a mask and a default gateway in the same subnet as the bridged network, or “Yes” if you have a DHCP server and want to get a dynamic IP.

– Check the interface IP.


– Ping an Internet host to verify connectivity and name resolution.

– Browse Big-IP interface IP and accept the certificate.


Use admin/admin for login credentials


Put the key you received by email in the Base Registration Key field and push “Next”, wait a couple of seconds for activation and you are good to go.

Now you can shut down the F5 VM.


3. Building the topology


Importing F5 VE Virtual machine to GNS3

From GNS3 “Preferences”, import a new VMWare virtual machine.


Choose the BIG-IP VM we have just installed and activated.


Make sure to set a minimum of 3 adapters and allow GNS3 to use any of the VM interfaces.



Now we can build our topology using BIG-IP VM and the Docker images installed.

Below is an example of a topology in which we import the F5 VM and some containers:


– 3 nginx containers

– 1 Openvswitch


– GUI browser webterm container

– 1 Openvswitch

– Cloud mapped to host interface tap0


– Apache Benchmark (ab) container

– GUI browser webterm container


Notice that the BIG-IP VM interface e0, the one previously bridged to the host network, is now connected to a browser container for management.

I attached the host interface “tap0” to the management switch because, for some reason, without it, ARP doesn’t work on that segment.


Address configuration:

– Assign each of the nginx containers an IP in a subnet of your choice (ex:


In the same manner: for ajnouri/nginx-2 for ajnouri/nginx-3


On all three nginx containers, start nginx and php servers:

service php5-fpm start

service nginx start
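
Optionally, verify each web server from any device in the same subnet before involving the load balancer (a quick check; <nginx-ip> stands for the address you assigned to the container, and test.php is the same page queried later through the virtual server):

curl -s http://<nginx-ip>/test.php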


– Assign an IP to the management browser gns3/webterm-2 in the same subnet as the BIG-IP management IP.

– Assign addresses, default gateway and DNS server to the ab and webterm-1 containers.



And make sure both client devices resolve the ajnouri.local host to the BIG-IP address:

echo "   ajnouri.local" >> /etc/hosts

– The Openvswitch containers don’t need to be configured; each acts like a single VLAN.

– Start the topology


4. Setting F5 Big-IP interfaces


To manage the load balancer from webterm-2, open the console to the container; this will open a Firefox from the container.


Browse the VM management IP, accept the exception for the certificate, and log in with the F5 BIG-IP default credentials admin/admin.


Go through the initial configuration steps

– You will have to set the hostname (ex: ajnouri.local) and change the root and admin account passwords.


You will be logged out for the password changes to take effect; log back in.

– For the purpose of this lab, no redundancy and no high availability are configured.

– Now you will have to configure internal (real servers) and external (client side) vlans and associated interfaces and self IPs.

(Self IPs are the equivalent of VLAN interface IP in Cisco switching)

Internal VLAN (connected to web servers):


External VLAN (facing clients):


5. Connectivity check between devices


Now make sure you have successful connectivity from each container to the corresponding Big-IP interface.

Ex: from ab container


Ex: from nginx-1 server container



The interface connected to your host network will get its IP parameters (address, gateway and DNS) from your DHCP server.


6. Load balancing configuration


Back to the browser webterm-2

For BIG-IP to load balance http requests from client to the servers, we need to configure:

  • Virtual Server: the single entity (virtual server) visible to the client
  • Pool: associated with the virtual server; contains the list of real web servers to balance between
  • Algorithm used to load balance between members of the pool

– Create a pool of web servers “Pool0” with “RoundRobin” as the algorithm and http as the health-monitor protocol for the members.


– Associate the virtual server “VServer0” with the pool “Pool0”.


Check the network map to see if everything is configured correctly and monitoring shows everything OK (green)


From the client container webterm-1, you can start a Firefox browser (console to the container) and test the server name “ajnouri.local”.


If everything is OK, you’ll see the php page showing the real server IP used, the client IP and the DNS name used by the client.

Every time you refresh the page, you’ll see a different server IP used.


7. Performance testing


With the Apache Benchmark container ajnouri/ab, we can generate client requests to the load balancer virtual server by its hostname (ajnouri.local).

Let’s open an aux console to the container ajnouri/ab and generate 50,000 requests, 200 concurrent at a time, to the URL ajnouri.local/test.php:

ab -n 50000 -c 200 ajnouri.local/test.php
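
For reference, the ab options used here:

# -n 50000 : total number of requests to perform
# -c 200   : number of requests to run concurrently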



8. Monitoring load balancing


Monitoring the load balancer performance shows a peak of connections corresponding to the Apache Benchmark generated requests.


In the upcoming part-2, the 3 web server containers are replaced with a single container in which we can spawn as many servers as we want (Docker-in-Docker), and we also test a custom python client script container that generates http traffic from spoofed IP addresses, as opposed to a container (Apache Benchmark) that generates traffic from a single source IP.

Deploying Cisco traffic generator in GNS3

Goal: Deploy TRex, a realistic Cisco traffic generator, to test devices in GNS3.

TRex traffic generator is a tool designed to benchmark platforms using realistic traffic.
One of the tools through which TRex can be learned and tested is a virtual machine instance, fully simulating TRex without the need for any additional hardware.

The TRex Virtual Machine is based on Oracle’s Virtual Box freeware.
It is designed to enable TRex newbies to explore this tool without any special resources.

Download the virtual appliance ova file:

Open the image in VMWare (I am using VMWare workstation)

From GNS3 import the VMWare device:

Edit the VM template and make sure to select “Allow GNS3 to use any configured VMware adapter”


Insert a device to test, the DUT (Device Under Test), in our case a Cisco IOU router, and build the following topology, in which TRex plays the role of both the client and the server for the generated traffic.



Because TRex doesn’t implement ARP, we have to manually indicate the router MAC addresses of the directly connected interfaces.
You can set TRex to match the DUT MACs, or the DUT to match the default MACs configured on TRex. We opt for the first solution.

Note the router interface MAC addresses:


Login to TRex through the console:

  • Username: trex
  • Password: trex

and edit the TRex configuration file


and change the DUT MACs


Make sure the list of interface IDs matches the ones defined by:

cd v1.62

sudo ./ --status


We also need to set our router under test with the MAC addresses used by TRex for the traffic.

On the IOU router:

IOU1(config-if)#int e0/0
IOU1(config-if)#ip address
IOU1(config-if)#du fu
IOU1(config-if)#no sh
IOU1(config-if)#int e0/1
IOU1(config-if)#ip address
IOU1(config-if)#du fu
IOU1(config-if)#no sh

IOU1(config)#arp  0800.2723.21dc ARPA
IOU1(config)#arp  0800.2723.21dd ARPA
IOU1(config)#do sh arp
Protocol  Address          Age (min)  Hardware Addr   Type   Interface
Internet            –   0800.2723.21dc  ARPA
Internet            –   0800.2723.21dd  ARPA

The e0/0 and e0/1 IP addresses are configured with the addresses above. In fact, the exact values don’t matter for TRex, because we have routes to forward traffic out the appropriate interfaces to reach the TRex interfaces.

On the router set routes to the emulated client and servers:

ip route
ip route

For this lab we will generate IMIX traffic (a 64-byte UDP packet profile) from emulated clients and servers using the virtual IP ranges 16.0.0.[1-255] and 48.0.[0.1-255.255].

Back to TRex:



So let’s configure our router to route traffic destined to the previous ranges out the appropriate interfaces.

IOU router:

IOU1(config)#ip route
IOU1(config)#ip route

Start the emulation on TRex:

sudo ./t-rex-64 -f cap2/imix_64.yaml -d 60 -m 40000 -c 1
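
The options used above, for reference:

# -f cap2/imix_64.yaml : traffic profile (YAML) to generate
# -d 60                : test duration in seconds
# -m 40000             : rate multiplier applied to the profile
# -c 1                 : number of cores to use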


You can observe the generated traffic passing through the router with Wireshark


For more information, please refer to the official TRex documentation.


GNS3 + Docker: Internet modem container

Goal: Deploy an Internet modem for a GNS3 topology using a Docker container. The container uses iptables to perform NAT (masquerading) and dnsmasq as a DHCP server for the LAN interfaces.

Used Docker images:

GNS3 host preparation: this is performed on the GNS3 Linux host.

From GNS3 host console, create a tap interface (tap0) and put it along with the physical interface (eth0) in a bridge (ex: ovsbr0):

sudo ip tuntap add dev tap0 mode tap user <username>

sudo ovs-vsctl add-br ovsbr0

sudo ovs-vsctl add-port ovsbr0 tap0
sudo ovs-vsctl add-port ovsbr0 eth0

You can use either linux bridge (brctl command) or OpenVswitch bridge (ovs-vsctl command)

sudo ovs-vsctl show


Bridge "ovsbr0"
    Port "tap0"
        Interface "tap0"
    Port "ovsbr0"
        Interface "ovsbr0"
            type: internal
    Port "eth0"
        Interface "eth0"
ovs_version: "2.3.0"

Remove the IP address from eth0 (or release its DHCP parameters), then reconfigure the IP address and default gateway (or request DHCP) on the OVS bridge ovsbr0.
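
On a host that gets its address over DHCP, that step could look like the following (a sketch; adapt to your distribution and addressing scheme):

# remove the address from the physical interface, now enslaved to the bridge
sudo ip addr flush dev eth0
# request DHCP parameters on the OVS bridge instead
sudo dhclient ovsbr0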

Import containers

1- Create a new docker template in GNS3. Create new docker template: Edit > Preferences > Docker > Docker containers and then “New”.

Choose the “New image” option and the name ajnouri/internet.


Accept all default parameters.

2- Create a new docker template in GNS3. Create new docker template: Edit > Preferences > Docker > Docker containers and then “New”.

Choose the “New image” option and the name gns3/openvswitch.


Set the number of interfaces to eight and accept default parameters with “next” until “finish”.

3- Same for end host container. From GNS3, create new docker template Edit > Preferences > Docker > Docker containers and then “New”.

Choose the “New image” option and the name gns3/endhost.


Next you can choose a template name for the container; in this case I renamed it “dvpc”.

Accept default parameters with “next” until “finish”.

GNS3 Topology

Insert a cloud into the topology and map it to tap0.


Build the below topology


Configure the containers’ network interfaces:

Internet container ajnouri/Internet-1


End host container dvpc-1


The WAN interface of the Internet container should have been assigned an IP and gateway from your physical network (connected to internet).

Start the script from /data directory

You will be asked to set the LAN and WAN interfaces as well as the IP range for DHCP clients connected to the LAN interface; then the script will start dnsmasq and set up iptables for NAT (masquerade).

ajnouri/internet-1 console


ajnouri/dvpc-1 console


The other DHCP parameters assigned to the client are taken from the Internet device’s WAN interface DHCP parameters.

Connectivity check


Let’s have fun! Now that we have Internet connectivity, install a text-based browser package on the end host container:


Start elinks and browse the Internet.


For a more comfortable browsing experience, you can use the image gns3/webterm.

Create a new Docker template


Choose vnc as the console type to allow GUI browsing with Firefox.


And keep the remaining default parameters.

Insert the image and connect it to the topology as follows:


Set the container interface as a DHCP client.


Start the stopped containers and open a console (VNC) to the webterm container.

(gns3/openvswitch doesn’t need any configuration)


You should get this







DockerVPC: Using containers in GNS3 as Linux Virtual hosts instead of VPCS

More updated content about GNS3 and natively integrated Docker.


I would like to share with you DockerVPC, a bash script that helps run containers for use within GNS3 as rich virtual end-hosts instead of VPCS.

I am using it to avoid dealing directly with docker commands and container IDs each time I would like to rapidly deploy some disposable end-host containers inside GNS3.

For now it runs only on Linux platforms, and it has been tested on Ubuntu, RedHat and OpenSUSE.

Using DockerVPC doesn’t require knowledge of Docker containers, still I encourage you to take a look at this short introduction.

By the way, VIRL in its recent updates introduced lxc containers to simulate Ubuntu server (a multiprocess environment) as well as a single-process container for iperf.

It is possible to implement Docker containers on Windows or Mac OS X using the lightweight boot2docker virtual machine or the newer Docker tool Kitematic. The issue is that there is no such tool as pipework for Windows or Mac to set up additional interfaces. I use this as a temporary solution, knowing that Docker is on its way to being integrated into GNS3; until then, you can already take maximum profit from containers inside GNS3. (See Issues and limitations below.)

The linux image used by DockerVPC is pre-built with the following apps:

  • SSH server.
  • Apache + PHP
  • Ostinato / D-ITG / Iperf.
  • BIRD Internet routing daemon.
  • Linphone / sipp / pjsua. (VoIP host-to-host through GNS3 works perfectly)
  • IPv6 THC tools.
  • VLC (VideoLAN).
  • Qupzilla browser + java & html5 plugins / links.
  • vSFTPd server + ftp client.
  • And many other tools: inetutils-traceroute, iputils-tracepath, mtr..

Which makes it almost a full-fledged Linux host.


By default, containers are connected to the host through the docker0 bridge. This tool allows you to connect the running containers to GNS3 through additional bridge interfaces, so you can bind them to cloud elements in your GNS3 topology. In other words, containers run independently of GNS3. More on that in the Simple lab.

Additionally, this script allows you to separately manage additional container images like cacti server or a 16-port (host bridges) OpenVSwitch.

For now, all you have to do is install the required applications and clone the repository

Installing requirements

You will need: git, docker, pipework and lxterminal.


1. git:

sudo apt-get install git

2. Docker, easy to install:

docker -v
Docker version 1.8.1, build d12ea79

3. pipework, a simple yet powerful bash script for advanced Docker networking:

sudo bash -c "curl > /usr/local/bin/pipework"
sudo chmod a+x /usr/local/bin/pipework
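
As an example of what pipework does, here is its basic invocation (a sketch with hypothetical names: br1 is a host bridge, c1 a running container):

# add interface eth1 inside container c1, attached to host bridge br1, with a static address
sudo pipework br1 -i eth1 c1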


lxterminal is not required anymore; the script will detect the terminal in use and open interactive terminal access to containers with it.

To use Docker as a non-root user:

sudo usermod -aG docker {user}

Clone DockerVPC repository

git clone

cd DockerVPC

Here are some examples (on my GNS3 community blog) of how to use the DockerVPC containers with GNS3.

Once the installation is done and the images pulled, creating virtual end-hosts is a matter of seconds.

DockerVPC labs

Issues and limitations:

  • Originally, Docker containers were not meant to run GUI applications; this is a workaround from the Docker community (mounting the Docker host X11 and sound devices), so we must expect some issues with it.
  • By default, Docker networking uses a single interface bridged to docker0. So, using additional container interfaces will bring additional complexity to networking configuration.
  • DockerVPC relies on pipework, an external script for advanced networking. Though this is an advantage compared to the limited (for now) integrated networking functionalities, it brings new challenges.
  • Bridge interfaces created with pipework do not persist after stopping the container or docker host reboot, so make sure to reconfigure your container networking parameters after you restart a stopped container.

This brings us to the conclusion that using Docker containers this way is NOT MEANT FOR PRODUCTION !!!

The purpose of DockerVPC is to hopefully give GNS3 users more flexibility with end-host simulation.

Hope you will find it useful!


Further readings:
