Testing Docker networking with GNS3, Part 1: MacVLAN


Introduction

MacVLAN allows you to connect containers in separate Docker networks to your VLAN infrastructure, so they behave as if they were directly connected to your network.

On top of the main interface, the MacVLAN driver creates subinterfaces to handle the 802.1q tag of each VLAN, and assigns them separate IP and MAC addresses.

Because the main interface (with its own MAC) has to accept traffic destined to the subinterfaces (with their own MACs), the Docker MacVLAN network driver requires the Docker host interface to be in promiscuous mode.

Knowing that most cloud providers (AWS, Azure…) do not allow promiscuous mode, you'll be deploying MacVLAN on your own premises.

MacVLAN network characteristics (see the sketch after this list):

Creates subinterfaces to process VLAN tags
Assigns a different IP and MAC address to each subinterface
Requires the main Docker host interface to run in promiscuous mode, to accept traffic not destined to the main interface MAC
The admin needs to carefully assign IP ranges to VLANs on the different Docker nodes, in harmony with any DHCP range already used by the existing VLANs
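As a rough illustration of what the driver builds under the hood, here is a minimal iproute2 sketch (not from the original lab; the interface names and addresses are assumptions) that creates an 802.1q subinterface and a macvlan device on top of it:

# assuming eth0 is the host trunk interface (illustrative names and addresses)
sudo ip link add link eth0 name eth0.10 type vlan id 10              # 802.1q subinterface for VLAN 10
sudo ip link add link eth0.10 name mvlan10 type macvlan mode bridge  # macvlan child with its own MAC
sudo ip addr add 10.0.0.10/24 dev mvlan10
sudo ip link set eth0.10 up
sudo ip link set mvlan10 up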

Conceptual diagram: Docker node connected to the topology

Selection_0083

Conceptual diagram: MACVLAN configured

Selection_0082

Conceptual diagram: Logical functioning of MACVLAN

Selection_0084

Purpose of this lab:

  • To test and get hands-on practice with Docker MacVLAN.
  • It is easy to deploy complex topologies in GNS3 using a myriad of virtual appliances (https://gns3.com/marketplace/appliances). Building a topology is as easy as dragging devices and drawing the connections between them.

Note:

It is better to have some basic practical knowledge of docker containers.

1- GNS3 topology:

Selection_0080

Devices used:

  • Two VMware virtual machines for the Docker nodes, imported into GNS3.
  • Two OpenvSwitch containers gns3/openvswitch. Import & Insert
  • Ansible container ajnouri/ansible used as an SSH client to manage the Docker nodes. In another post I'll show how to use it to provision package installation to any device (e.g. Docker installation on the VMware nodes).
  • Cisco IOSv 15.6(2)T: router-on-a-stick used to route traffic from each VLAN to the outside world (PAT) and to enforce the communication policy between VLANs. Import & Insert

Note:

Actually, importing a container into GNS3 is very easy and intuitive; here is a video from David Bombal explaining the process.

  • Create two VMware Ubuntu Xenial LTS servers to be used as Docker nodes, with 1 GB of RAM and 2 interfaces.

  • Install Docker 1.12 minimum (latest recommended).

Here is the script if you want to automate the deployment of Docker, for example from an Ansible container, as shown in this GNS3 container series (managing GNS3 appliances with Ansible).

#!/bin/bash
### Install GNS3
sudo add-apt-repository ppa:gns3/ppa
sudo apt-get update
sudo apt-get install -y gns3-gui
sudo apt-get install -y gns3-server

### Install Docker
# https://docs.docker.com/engine/installation/linux/ubuntu/
# Add the official Docker repository GPG signature
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Add the apt repository source
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update

# Install Docker CE
sudo apt-get install -y docker-ce

# Docker without sudo
sudo groupadd docker
sudo gpasswd -a $USER docker

$ docker version
Client:
Version:      17.03.1-ce
API version:  1.27
Go version:   go1.7.5
Git commit:   c6d412e
Built:        Mon Mar 27 17:14:09 2017
OS/Arch:      linux/amd64
Server:
Version:      17.03.1-ce
API version:  1.27 (minimum version 1.12)
Go version:   go1.7.5
Git commit:   c6d412e
Built:        Mon Mar 27 17:14:09 2017
OS/Arch:      linux/amd64
Experimental: false

Docker node interfaces:

Main interface: e0/0 (Ubuntu: ens33), a trunk interface used to connect containers to your network VLANs.

Management interface: e1/0 (Ubuntu: ens38), connected to the common VLAN 114.

Interface configuration (/etc/network/interfaces):
# The primary network interface
auto ens33
iface ens33 inet manual
auto ens38
iface ens38 inet static
address 192.168.114.32
netmask 24
gateway 192.168.114.200
up echo "nameserver 8.8.8.8" > /etc/resolv.conf

# autoconfigured IPv6 interfaces
iface ens33 inet6 auto
iface ens38 inet6 auto

Promiscuous mode:

Without promiscuous mode, containers will not be able to communicate with hosts outside of the Docker node, because the main interface (connected to the VLAN network) will not accept traffic destined to other MAC addresses (those of the MacVLAN subinterfaces).

Promiscuous mode is configured in two steps:

Configuring Promiscuous mode on VMWare guest:

Add the below command to /etc/rc.local

ifconfig ens33 up promisc
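On systems without the legacy net-tools package, the equivalent iproute2 command (an alternative suggestion, not part of the original post) is:

ip link set dev ens33 promisc on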

Check for the letter “P” for Promiscuous

netstat -i
Kernel Interface table
Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
docker0    1500 0         0      0      0 0             0      0      0      0 BMU
ens33      1500 0        25      0      0 0            32      0      0      0 BMPRU
ens38      1500 0        55      0      0 0            60      0      0      0 BMRU
ens33.10   1500 0         0      0      0 0             8      0      0      0 BMRU
ens33.20   1500 0         0      0      0 0             8      0      0      0 BMRU
ens33.30   1500 0         0      0      0 0             8      0      0      0 BMRU
lo        65536 0       160      0      0 0           160      0      0      0 LRU
Authorizing promiscuous mode on the VMware host:

By default, VMware interfaces will not function in promiscuous mode because a regular user does not have write access to the /dev/vmnet* files.

So, create a special group, add the user running VMware to it, and give that group read/write access to the /dev/vmnet* files:

sudo groupadd promiscuous
sudo usermod -a -G promiscuous $USER
chgrp promiscuous /dev/vmnet*
chmod g+rw /dev/vmnet*

Or simply give read/write access to everyone:

chmod a+rw /dev/vmnet*

For a permanent change, put it in the /etc/init.d/vmware file as follows:

vmwareStartVmnet() {
 vmwareLoadModule $vnet
 "$BINDIR"/vmware-networks --start >> $VNETLIB_LOG 2>&1
 chmod a+rw /dev/vmnet*
}

GNS3 VLAN topology:

For each Docker node, connect the first interface to an OpenVswitch1 trunk port and the second interface to a VLAN 114 port.
VLAN 114 is a common VLAN used to reach and manage all the other devices.

GNS3 integrates Docker, so you can use containers as simple end-host devices (independently of Docker network drivers):

  • gns3/openvswitch container: simple L2 switch
  • gns3/webterm container: GUI Firefox browser (no need for an entire VM for that)
  • ajnouri/ansible container: the management end-host used to access the Docker nodes through SSH. In a subsequent lab, I'll show how to manage GNS3 devices from this Ansible container.

Docker MacVLAN networks let you connect your containers to existing network VLANs seamlessly, as if they were directly connected to your VLAN infrastructure.

The network deploys three isolated VLANs (id: 10, 20 and 30) plus VLAN id 114, which can communicate with all three VLANs through a router-on-a-stick (Cisco IOSv 15.6T).

MacVLAN generates dotted subinterfaces (e.g. ens33.10) to process (tag/untag) the traffic.

The parent (main) interface acts as a trunk interface carrying all the VLANs of its "child" interfaces, so the network switch port linked to it should be a trunk port.

OpenVswitch1 ports:

First, let’s clean the configuration and then reintroduce trunk and vlan ports:

for br in `ovs-vsctl list-br`; do ovs-vsctl del-br ${br}; done

# Recreate the bridge after the cleanup
ovs-vsctl add-br br0

#Trunk ports:
ovs-vsctl add-port br0 eth1
ovs-vsctl add-port br0 eth2
ovs-vsctl add-port br0 eth6

#VLAN ports:
ovs-vsctl add-port br0 eth2 tag=114
ovs-vsctl add-port br0 eth4 tag=114
ovs-vsctl add-port br0 eth7 tag=114
ovs-vsctl show
ovs-vsctl show
7afbe760-5237-4ae4-a7e6-ac5b4f1cc6df
Bridge "br0"
…
Port "eth1"
Interface "eth1"

In OVS, untagged ports act as trunks.
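If you want to double-check which ports carry a tag, a generic OVSDB query (not from the original post) is:

ovs-vsctl --columns=name,tag list Port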

OpenVswitch2 ports:

Openvswitch 2 connects the two management endhosts, Ansible container and Firefox browser container.

for br in `ovs-vsctl list-br`; do ovs-vsctl del-br ${br}; done

# Recreate the bridge after the cleanup
ovs-vsctl add-br br0

#Trunk ports:
ovs-vsctl add-port br0 eth7

#VLAN ports:
ovs-vsctl add-port br0 eth0 tag=114
ovs-vsctl add-port br0 eth1 tag=114

For more information on how to configure advanced switching features with OVS, please refer to my GNS3 blog post on the GNS3 community.

Cisco router-on-a-stick configuration:

This router is used to allow inter-VLAN communication between VLAN 114 and all the other VLANs, deny communication between VLANs 10, 20 and 30, and connect the entire topology to the Internet using PAT (Port Address Translation ~ Linux MASQUERADING).

ROAS#sh run
Building configuration...
Current configuration : 6093 bytes
!
version 15.6
service timestamps debug datetime msec
service timestamps log datetime msec
no service password-encryption
!
hostname ROAS
!
boot-start-marker
boot-end-marker
!
!
logging buffered 1000000
!
no aaa new-model
ethernet lmi ce
!
!
!
mmi polling-interval 60
no mmi auto-configure
no mmi pvc
mmi snmp-timeout 180
!
!
!
!
!
!
!
!
!
!
!
no ip domain lookup
ip cef
no ipv6 cef
!
multilink bundle-name authenticated
!
!
!
crypto pki trustpoint TP-self-signed-4294967295
enrollment selfsigned
subject-name cn=IOS-Self-Signed-Certificate-4294967295
revocation-check none
rsakeypair TP-self-signed-4294967295
!
!
crypto pki certificate chain TP-self-signed-4294967295
certificate self-signed 01
3082022B 30820194 A0030201 02020101 300D0609 2A864886 F70D0101 05050030
31312F30 2D060355 04031326 494F532D 53656C66 2D536967 6E65642D 43657274
69666963 6174652D 34323934 39363732 3935301E 170D3137 30363138 31313034
31345A17 0D323030 31303130 30303030 305A3031 312F302D 06035504 03132649
4F532D53 656C662D 5369676E 65642D43 65727469 66696361 74652D34 32393439
36373239 3530819F 300D0609 2A864886 F70D0101 01050003 818D0030 81890281
8100B306 1D16E9A7 67E556AD A2A5DEF2 4914C183 5C6B5C7B 9A37CE29 A53F61BB
6FED6E2C 3E4E8E67 355560A7 818590CC 4410B87B 72126999 465A45D4 4627F5DC
185E545B 492840DA A8DB88B3 AC8DBE34 D3109B8D AD4A5522 6C7325E6 405DE12B
91B30192 64AC93BB 618FADB8 2F6F94E0 779B80FF 5002DEA0 1AD6F6D0 5C289790
95590203 010001A3 53305130 0F060355 1D130101 FF040530 030101FF 301F0603
551D2304 18301680 14BF7E97 AE5F2D93 86F08CF4 ED9C8FF0 E92C5D8E D3301D06
03551D0E 04160414 BF7E97AE 5F2D9386 F08CF4ED 9C8FF0E9 2C5D8ED3 300D0609
2A864886 F70D0101 05050003 818100A3 76F489B3 BF33FA87 8E4DD1B5 85913A54
428FB7F2 1D1FDF3E 6D18E3B3 CE0F9400 C574B89C A2D7E89E 7F13AA3F BB4F9B19
10490BF7 4F7C0B3C 70516F75 5C26078F 6A4A14A3 370B63EC 76376758 1B614B98
B4A4FF1D 1B4F7C88 60BFAF98 AF822BB5 DCF6FA16 A31DAD0D 89F53E60 24305110
64839C15 1865D92A D8153B73 8FB8C1
quit
!
redundancy
!
!
!
!
!
!
!
!
!
!
!
!
!
!
!
interface GigabitEthernet0/0
no ip address
duplex full
speed auto
media-type rj45
!
interface GigabitEthernet0/0.10
encapsulation dot1Q 10
ip address 10.0.0.200 255.255.255.0
ip access-group 101 in
ip access-group 101 out
ip nat inside
ip virtual-reassembly in
!
interface GigabitEthernet0/0.20
encapsulation dot1Q 20
ip address 20.0.0.200 255.255.255.0
ip access-group 101 in
ip access-group 101 out
ip nat inside
ip virtual-reassembly in
!
interface GigabitEthernet0/0.30
encapsulation dot1Q 30
ip address 30.0.0.200 255.255.255.0
ip access-group 101 in
ip access-group 101 out
ip nat inside
ip virtual-reassembly in
!
interface GigabitEthernet0/0.114
encapsulation dot1Q 114
ip address 192.168.114.200 255.255.255.0
ip nat inside
ip virtual-reassembly in
!
interface GigabitEthernet0/1
ip address 192.168.66.200 255.255.255.0
duplex full
speed auto
media-type rj45
!
interface GigabitEthernet0/2
no ip address
shutdown
duplex auto
speed auto
media-type rj45
!
interface GigabitEthernet0/3
ip address dhcp
ip nat outside
ip virtual-reassembly in
duplex full
speed auto
media-type rj45
!
ip forward-protocol nd
!
!
ip http server
ip http authentication local
ip http secure-server
ip nat inside source list 100 interface GigabitEthernet0/3 overload
ip ssh rsa keypair-name ROAS.cciethebeginning.wordpress.com
ip ssh version 2
!
!
!
access-list 100 permit ip 192.168.114.0 0.0.0.255 any
access-list 100 permit ip 10.0.0.0 0.0.0.255 any
access-list 100 permit ip 20.0.0.0 0.0.0.255 any
access-list 100 permit ip 30.0.0.0 0.0.0.255 any
access-list 101 deny ip 10.0.0.0 0.0.0.255 20.0.0.0 0.0.0.255
access-list 101 deny ip 20.0.0.0 0.0.0.255 10.0.0.0 0.0.0.255
access-list 101 deny ip 10.0.0.0 0.0.0.255 30.0.0.0 0.0.0.255
access-list 101 deny ip 30.0.0.0 0.0.0.255 10.0.0.0 0.0.0.255
access-list 101 deny ip 20.0.0.0 0.0.0.255 30.0.0.0 0.0.0.255
access-list 101 permit ip 192.168.114.0 0.0.0.255 any
access-list 101 permit ip any 192.168.114.0 0.0.0.255
access-list 101 permit ip any any
!
control-plane
!
banner exec ^C
**************************************************************************
* IOSv is strictly limited to use for evaluation, demonstration and IOS *
* education. IOSv is provided as-is and is not supported by Cisco's *
* Technical Advisory Center. Any use or disclosure, in whole or in part, *
* of the IOSv Software or Documentation to any third party for any *
* purposes is expressly prohibited except as otherwise authorized by *
* Cisco in writing. *
**************************************************************************^C
banner incoming ^C
**************************************************************************
* IOSv is strictly limited to use for evaluation, demonstration and IOS *
* education. IOSv is provided as-is and is not supported by Cisco's *
* Technical Advisory Center. Any use or disclosure, in whole or in part, *
* of the IOSv Software or Documentation to any third party for any *
* purposes is expressly prohibited except as otherwise authorized by *
* Cisco in writing. *
**************************************************************************^C
banner login ^C
**************************************************************************
* IOSv is strictly limited to use for evaluation, demonstration and IOS *
* education. IOSv is provided as-is and is not supported by Cisco's *
* Technical Advisory Center. Any use or disclosure, in whole or in part, *
* of the IOSv Software or Documentation to any third party for any *
* purposes is expressly prohibited except as otherwise authorized by *
* Cisco in writing. *
**************************************************************************^C
!
line con 0
line aux 0
line vty 0 4
privilege level 15
login
transport input telnet ssh
!
no scheduler allocate
!
end
ROAS#

Now you can scale your infrastructure by adding any number of new Docker nodes.

2- Configuring MACVLAN network on docker node1

1) Create MacVLAN networks

Create MacVLAN networks with the following parameters:

  • type=macvlan
  • subnet & IP range from which the containers will get their IP parameters
  • gateway of the VLAN in question
  • parent interface
  • MacVLAN network name
docker network create -d macvlan \
--subnet 10.0.0.0/24 \
--ip-range=10.0.0.0/26 \
--gateway=10.0.0.200 \
-o parent=ens33.10 macvlan10

docker network create -d macvlan \
--subnet 20.0.0.0/24 \
--ip-range=20.0.0.64/26 \
--gateway=20.0.0.200 \
-o parent=ens33.20 macvlan20

docker network create -d macvlan \
--subnet 30.0.0.0/24 \
--ip-range=30.0.0.128/26 \
--gateway=30.0.0.200 \
-o parent=ens33.30 macvlan30

List the created subinterfaces with "ip a".
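For example, to confirm the 802.1q details of a subinterface Docker created on top of the parent (a quick check I'd suggest, not part of the original post):

ip -d link show ens33.10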

List docker networks with “docker network ls” and make sure the three macvlans are created

docker network  ls
NETWORK ID          NAME                DRIVER              SCOPE
916165cd344c        bridge              bridge              local
686ebb8c5399        host                host                local
b3c9487a6cd0        macvlan10           macvlan             local
e1818c46a437        macvlan20           macvlan             local
52ce778548c3        macvlan30           macvlan             local
d97f45467edd        none                null                local

As an example, let's inspect the Docker network macvlan10:

docker network inspect macvlan10
[
    {
        "Name": "macvlan10",
        "Id": "b3c9487a6cd09054f06e22cf04181473819236d06245710f3763489a326770d2",
        "Created": "2017-06-20T14:36:02.581834167+02:00",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "IPRange": "10.0.0.0/26",
                    "Gateway": "10.0.0.200"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {},
        "Options": {
            "parent": "ens33.10"
        },
        "Labels": {}
    }
]

Notice that no containers are attached to the network yet: "Containers": {}

Let's remedy that by running simple Apache containers from the custom image ajnouri/apache_ssl_container (you can use any other appropriate image that runs bash/sh on the console) and connect them respectively to macvlan10, macvlan20 and macvlan30.

2) start and connect containers to MacVLAN networks

docker run --net=macvlan10 -dt --name c11 --restart=unless-stopped ajnouri/apache_ssl_container
docker run --net=macvlan20 -dt --name c12 --restart=unless-stopped ajnouri/apache_ssl_container
docker run --net=macvlan30 -dt --name c13 --restart=unless-stopped ajnouri/apache_ssl_container

The first time, Docker will download the image; after that, any container from that image is created instantly.

"docker run" command options:

  • --net=macvlan10 : the MacVLAN network name
  • -dt : run a console in the background
  • --name c11 : the container name
  • --restart=unless-stopped : if the Docker host restarts, containers are started and connected to their networks again, unless they were intentionally stopped
  • ajnouri/apache_ssl_container : a custom image with Apache SSL installed and a small PHP script that shows the session IP addresses

List the running containers with "docker ps".


$ docker ps
CONTAINER ID        IMAGE                          COMMAND                  CREATED             STATUS              PORTS               NAMES
1a2104d84519        ajnouri/apache_ssl_container   "/bin/sh -c 'servi..."   6 days ago          Up 14 minutes                           c13
691b468918ee        ajnouri/apache_ssl_container   "/bin/sh -c 'servi..."   6 days ago          Up 14 minutes                           c12
1e2bb1933d10        ajnouri/apache_ssl_container   "/bin/sh -c 'servi..."   6 days ago          Up 14 minutes                           c11

And inspect the containers attached to a MacVLAN network with "docker network inspect macvlan10":

$ docker network inspect macvlan10
[
{
"Name": "macvlan10",
"Id": "b3c9487a6cd09054f06e22cf04181473819236d06245710f3763489a326770d2",
"Created": "2017-06-20T14:36:02.581834167+02:00",
"Scope": "local",
"Driver": "macvlan",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "10.0.0.0/24",
"IPRange": "10.0.0.0/26",
"Gateway": "10.0.0.200"
}
]
},
"Internal": false,
"Attachable": false,
"Containers": {
"1e2bb1933d10f94f2aeb3e83deb4f141d393fc6cfbf09e415ebd1239e421b50f": {
"Name": "c11",
"EndpointID": "e2bc0ea1d3cf9806cf880e6cdb34e4914d84a75e1eabc90a8429f7f7668f82b7",
"MacAddress": "02:42:0a:00:00:01",
"IPv4Address": "10.0.0.1/24",
"IPv6Address": ""
}
},
"Options": {
"parent": "ens33.10"
},
"Labels": {}
}
]

Container c11 is connected to macvlan10 (10.0.0.0/24) and dynamically got an IP from that range.
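To see the same information from inside the container (a generic check, not part of the original post, assuming iproute2 is available in the image):

docker exec c11 ip addr show eth0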

3) check connectivity

Now let's do some connectivity checks from inside container c11 (macvlan10) and see if it can reach its gateway (the router-on-a-stick) outside the Docker host.

ajn@ubuntu:~$ docker exec c11 ping -t3 10.0.0.200
PING 10.0.0.200 (10.0.0.200) 56(84) bytes of data.
64 bytes from 10.0.0.200: icmp_seq=1 ttl=255 time=1.07 ms
64 bytes from 10.0.0.200: icmp_seq=2 ttl=255 time=1.50 ms
64 bytes from 10.0.0.200: icmp_seq=3 ttl=255 time=1.27 ms
64 bytes from 10.0.0.200: icmp_seq=4 ttl=255 time=1.40 ms
64 bytes from 10.0.0.200: icmp_seq=5 ttl=255 time=1.29 ms
^C

ajn@ubuntu:~$ docker exec c12 ping 20.0.0.200
PING 20.0.0.200 (20.0.0.200) 56(84) bytes of data.
64 bytes from 20.0.0.200: icmp_seq=1 ttl=255 time=2.22 ms
64 bytes from 20.0.0.200: icmp_seq=2 ttl=255 time=1.34 ms
64 bytes from 20.0.0.200: icmp_seq=3 ttl=255 time=1.23 ms
64 bytes from 20.0.0.200: icmp_seq=4 ttl=255 time=1.41 ms
^C

ajn@ubuntu:~$ docker exec c13 ping 30.0.0.200
PING 30.0.0.200 (30.0.0.200) 56(84) bytes of data.
64 bytes from 30.0.0.200: icmp_seq=1 ttl=255 time=1.76 ms
64 bytes from 30.0.0.200: icmp_seq=2 ttl=255 time=1.36 ms
64 bytes from 30.0.0.200: icmp_seq=3 ttl=255 time=1.42 ms
64 bytes from 30.0.0.200: icmp_seq=4 ttl=255 time=1.51 ms
64 bytes from 30.0.0.200: icmp_seq=5 ttl=255 time=1.39 ms
^C

Yes!!!

And it can even reach the Internet, thanks to the router-on-a-stick:

docker exec c11 ping gns3.com
PING gns3.com (104.20.168.3) 56(84) bytes of data.
64 bytes from 104.20.168.3: icmp_seq=1 ttl=51 time=2.54 ms
64 bytes from 104.20.168.3: icmp_seq=2 ttl=51 time=2.82 ms
64 bytes from 104.20.168.3: icmp_seq=3 ttl=51 time=2.62 ms
64 bytes from 104.20.168.3: icmp_seq=4 ttl=51 time=2.82 ms
^C

And that's not all: it can even reach other containers, if you allow it of course. The Cisco router is there for that purpose; you can play with access control lists to implement whatever policy you want.

3- Configuring MACVLAN network on docker node2

The same steps are applied to Docker node2. Its interfaces are connected in the same way as node1's: one interface to the common VLAN 114 and another as a trunk. Containers c21, c22 and c23 are created and connected respectively to macvlan10, macvlan20 and macvlan30 (same as node1).

Node2 uses different IP ranges from those used on node1 for each VLAN:

VLAN (subnet)              Node1           Node2
macvlan10 (10.0.0.0/24)    10.0.0.0/26     10.0.0.64/26
macvlan20 (20.0.0.0/24)    20.0.0.64/26    20.0.0.0/26
macvlan30 (30.0.0.0/24)    30.0.0.128/26   30.0.0.0/26

1) Create MacVLAN networks

docker network create -d macvlan \
--subnet 10.0.0.0/24 \
--ip-range=10.0.0.64/26 \
--gateway=10.0.0.200 \
-o parent=ens33.10 macvlan10


docker network create -d macvlan \
--subnet 20.0.0.0/24 \
--ip-range=20.0.0.0/26 \
--gateway=20.0.0.200 \
-o parent=ens33.20 macvlan20


docker network create -d macvlan \
--subnet 30.0.0.0/24 \
--ip-range=30.0.0.0/26 \
--gateway=30.0.0.200 \
-o parent=ens33.30 macvlan30

2) start and connect containers to MacVLAN networks

docker run --net=macvlan10 -dt --name c21 --restart=unless-stopped ajnouri/apache_ssl_container
docker run --net=macvlan20 -dt --name c22 --restart=unless-stopped ajnouri/apache_ssl_container
docker run --net=macvlan30 -dt --name c23 --restart=unless-stopped ajnouri/apache_ssl_container

3) check connectivity

$ docker exec c22 ping 20.0.0.200
PING 20.0.0.200 (20.0.0.200) 56(84) bytes of data.
64 bytes from 20.0.0.200: icmp_seq=1 ttl=255 time=1.06 ms
64 bytes from 20.0.0.200: icmp_seq=2 ttl=255 time=1.51 ms
64 bytes from 20.0.0.200: icmp_seq=3 ttl=255 time=1.46 ms
^C
$ docker exec c23 ping 30.0.0.200
PING 30.0.0.200 (30.0.0.200) 56(84) bytes of data.
64 bytes from 30.0.0.200: icmp_seq=1 ttl=255 time=2.11 ms
64 bytes from 30.0.0.200: icmp_seq=2 ttl=255 time=1.42 ms
64 bytes from 30.0.0.200: icmp_seq=3 ttl=255 time=1.56 ms

And according to the policy deployed on the router, inter-VLAN communication is not allowed:

$ docker exec c23 ping 10.0.0.200
PING 10.0.0.200 (10.0.0.200) 56(84) bytes of data.
From 30.0.0.200 icmp_seq=1 Packet filtered
From 30.0.0.200 icmp_seq=2 Packet filtered
From 30.0.0.200 icmp_seq=3 Packet filtered

Now let’s check communication between c11 (node1: macvlan10) and c21 (node2: macvlan10):

docker exec c21 ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=1.45 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.879 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.941 ms
^C

Nice!

Containers connected to the same network VLAN (from different docker nodes) talk to each other.

And to finish, from the GUI browser container gns3/webterm, let's access all the containers on both nodes… Yes, in GNS3 you can run Firefox in a container, no need for an entire VM for that :P-)

Selection_0079

Troubleshooting:

Without promiscuous mode, you'll notice that container traffic reaches the outside network but not the other way around, as shown below in the Wireshark capture:

  • 10.0.0.200 = outside router
  • 10.0.0.1 = container behind MacVLAN

Selection_0073
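To reproduce such a capture from the Docker host itself (a generic troubleshooting step, not part of the original post), you can sniff the parent interface and watch whether replies ever come back toward the container's MAC:

sudo tcpdump -e -n -i ens33 icmp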


Deploying F5 BIG-IP LTM VE within GNS3 (part-1)


One of the advantages of deploying VMware (or VirtualBox) machines inside GNS3 is the rich networking environment available. No need to hassle with interface types: vmnet or private? Shared or ad hoc?

In GNS3 it is as simple and intuitive as choosing a node interface and connecting it to any other node interface.

In this lab, we are testing a basic F5 BIG-IP LTM VE deployment within GNS3. The only virtual machine used in this lab is F5 BIG-IP; all the other devices are Docker containers:

  • Nginx Docker containers for the internal web servers.
  • ab (Apache Benchmark) Docker container for the client used for performance testing.
  • gns3/webterm containers used as Firefox browsers for client testing and F5 web management.

 

Outline:

  1. Docker image import
  2. F5 Big-IP VE installation and activation
  3. Building the topology
  4. Setting F5 Big-IP interfaces
  5. Connectivity check between devices
  6. Load balancing configuration
  7. Generating client http queries
  8. Monitoring Load balancing

Devices used:

Environment:

  • Debian host GNU/Linux 8.5 (jessie)
  • GNS3 version 1.5.2 on Linux (64-bit)

System requirements:

  • F5 Big IP VE requires 2GB of RAM (recommended >= 8GB)
  • VT-x / AMD-V support

The only virtual machine used in the lab is F5 Big-IP, all other devices are Docker containers.

 

1. Docker image import

Create a new Docker template in GNS3: Edit > Preferences > Docker > Docker containers and then "New".

Choose the "New image" option, type the Docker image name in the format given in the "devices used" section (<account>/<repo>), then choose a name (without the slash "/").

Note:

By default GNS3 derives a name in the same format as the Docker registry (<account>/<repository>), which can cause an error in some earlier versions. In the latest GNS3 versions, the slash "/" is removed from the derived name.


Installing gns3/openvswitch:

selection_293
selection_278

Set the number of interfaces to eight and accept default parameters with “next” until “finish”.

– Repeat the same procedure for gns3/webterm

selection_372

Choose a name for the image (without slash “/”)

selection_340

Choose vnc as the console type to allow Firefox (GUI) browsing

selection_373

And keep the remaining default parameters.

– Repeat the same procedure for the image ajnouri/nginx.

Create a new image with name ajnouri/nginx

selection_339

Name it as you like

selection_340

And keep the remaining default parameters.

2. F5 Big-IP VE installation and activation

 

– From the F5 site, download the file BIG-11.3.0.0-scsi.ova. Go to https://www.f5.com/trial/big-ip-ltm-virtual-edition.php

selection_341

You'll have to request a registration key for the trial version, which you'll receive by email.

selection_342

Open the ova file in VMWare workstation.

– To register the trial version, bridge the first interface to your network connected to Internet.

selection_343

– Start the VM and log in to the console with root/default

selection_344

– Type “config” to access the text user interface.

selection_345

– Choose "No" to configure a static IP address, a mask and a default gateway in the same subnet as the bridged network, or "Yes" if you have a DHCP server and want to get a dynamic IP.

– Check the interface IP.

selection_346

– Ping an Internet host ex: gns3.com to verify the connectivity and name resolution.

– Browse to the Big-IP interface IP and accept the certificate.

selection_347

Use admin/admin for login credentials

selection_348

Put the key you received by email in the Base Registration Key field and push "Next"; wait a couple of seconds for activation and you are good to go.

selection_349
Now you can shut down the F5 VM.

 

3. Building the topology

 

Importing F5 VE Virtual machine to GNS3

From the GNS3 "Preferences", import a new VMware virtual machine

selection_350

Choose BIG-IP VM we have just installed and activated

selection_351

Make sure to set a minimum of 3 adapters and to allow GNS3 to use any of the VM interfaces

selection_352

Topology

Now we can build our topology using BIG-IP VM and the Docker images installed.

Below is an example of a topology into which we will import the F5 VM and add some containers.

Internal:

– 3 nginx containers

– 1 Openvswitch

Management:

– GUI browser webterm container

– 1 Openvswitch

– Cloud mapped to host interface tap0

External:

– Apache Benchmark (ab) container

– GUI browser webterm container

selection_353

Notice that the BIG-IP VM interface e0, the one previously bridged to the host network, is now connected to a browser container for management.

I attached the host interface "tap0" to the management switch because, for some reason, ARP doesn't work on that segment without it.

 

Address configuration:

– Assign each of the nginx containers an IP in a subnet of your choice (ex: 192.168.10.0/24)

selection_354

In the same manner:

192.168.10.2/24 for ajnouri/nginx-2

192.168.10.3/24 for ajnouri/nginx-3

 

On all three nginx containers, start nginx and php servers:

service php5-fpm start

service nginx start

 

– Assign an IP to the management browser in the same subnet as BIG-IP management IP

192.168.0.222/24 for gns3/webterm-2

– Assign addresses, a default gateway and a DNS server to the ab and webterm-1 containers

selection_355

selection_356

And make sure both client devices resolve the host ajnouri.local to the BIG-IP address 192.168.20.254:

echo "192.168.20.254   ajnouri.local" >> /etc/hosts

– The Openvswitch containers don't need to be configured; they act as a single VLAN.

– Start the topology

 

4. Setting F5 Big-IP interfaces

 

To manage the load balancer from webterm-2, open a console to the container; this will open Firefox from inside the container.

selection_357

Browse to the VM management IP https://192.168.0.151, add an exception for the certificate, and log in with the F5 BIG-IP default credentials admin/admin.

selection_358

Go through the initial configuration steps

– You will have to set the hostname (ex: ajnouri.local) and change the root and admin account passwords

selection_359

You will be logged out for the password changes to take effect; log back in.

– For the purpose of this lab, no redundancy and no high availability

selection_360

– Now you will have to configure the internal (real servers) and external (client side) VLANs, with their associated interfaces and self IPs.

(Self IPs are the equivalent of a VLAN interface IP in Cisco switching)

Internal VLAN (connected to web servers):

selection_361

External VLAN (facing clients):

selection_362

5. Connectivity check between devices

 

Now make sure you have successful connectivity from each container to the corresponding Big-IP interface.

Ex: from ab container

selection_363

Ex: from nginx-1 server container

selection_364

selection_365

The interface connected to your host network will get its IP parameters (address, gateway and DNS) from your DHCP server.

 

6. Load balancing configuration

 

Back to the browser webterm-2

For BIG-IP to load balance HTTP requests from the clients to the servers, we need to configure:

  • Virtual Server: the single entity (virtual server) visible to the clients
  • Pool: associated with the virtual server; contains the list of real web servers to balance between
  • Algorithm: used to load balance between the members of the pool

– Create a pool of web servers "Pool0" with "Round Robin" as the algorithm and HTTP as the protocol to monitor the members.

selection_366

– Associate the virtual server "VServer0" with the pool "Pool0"

selection_367

Check the network map to see if everything is configured correctly and monitoring shows everything OK (green)

selection_368

From the client container webterm-1, you can start a Firefox browser (console to the container) and test the server name "ajnouri.local"

selection_369

If everything is OK, you'll see the PHP page showing the real server IP used, the client IP and the DNS name used by the client.

Every time you refresh the page, you'll see a different server IP used.
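A quick way to see the round robin in action from one of the client containers (a generic check, not part of the original post, assuming curl is available in the image and the ajnouri.local hosts entry is in place):

for i in $(seq 1 6); do curl -s http://ajnouri.local/test.php; echo; done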

 

7. Performance testing

 

With the Apache Benchmark container ajnouri/ab, we can generate client requests to the load balancer virtual server by its hostname (ajnouri.local).

Let's open an aux console to the ajnouri/ab container and generate 50,000 requests with 200 concurrent connections to the URL ajnouri.local/test.php

ab -n 50000 -c 200 ajnouri.local/test.php

selection_370

 

8. Monitoring load balancing

 

Monitoring the load balancer performance shows a peak of connections corresponding to the Apache Benchmark generated requests

selection_371


In the upcoming part 2, the 3 web server containers are replaced with a single container in which we can spawn as many servers as we want (Docker-in-Docker), and we test a custom Python client script container that generates HTTP traffic from spoofed IP addresses, as opposed to a container (Apache Benchmark) that generates traffic from a single source IP.

Deploying Cisco traffic generator in GNS3


Goal: Deploy TRex, a realistic Cisco traffic generator, to test devices in GNS3.

TRex traffic generator is a tool designed to benchmark platforms using realistic traffic.
One of the tools through which TRex can be learned and tested is a virtual machine instance, fully simulating TRex without the need for any additional hardware.

The TRex Virtual Machine is based on Oracle’s Virtual Box freeware.
It is designed to enable TRex newbies to explore this tool without any special resources.

Download the virtual appliance ova file: http://trex-tgn.cisco.com/trex/T_Rex_162_VM_Fedora_21.ova

Open the image in VMWare (I am using VMWare workstation)

From GNS3 import the VMWare device:

Edit the VM template and make sure to select “Allow GNS3 to use any configured VMware adapter”

Selection_140

Insert a device to test, a DUT (Device Under Test); in our case it is a Cisco IOU router. Build the following topology, in which TRex will play the role of both the client and the server for the generated traffic.

Topology

Selection_132

Because TRex doesn't implement ARP, we have to manually indicate the MAC addresses of the router's directly connected interfaces.
You can set TRex to match the DUT MACs, or the DUT to match the default MACs configured on TRex. We opt for the first solution:

Note the router interface MAC addresses:

Selection_141

Login to TRex through the console:

  • Username: trex
  • Password: trex

and edit the TRex configuration file

/etc/trex_cfg.yaml

and change the DUT MACs

Screenshot - 260716 - 23:33:48

Make sure the list of interface IDs matches the ones reported by dpdk_nic_bind.py

cd v1.62

sudo ./dpdk_nic_bind.py --status

Selection_125

We also need to configure our router under test with the MAC addresses used by TRex for the traffic.

On the IOU router:

IOU1(config-if)#int e0/0
IOU1(config-if)#ip address 192.168.10.2 255.255.255.0
IOU1(config-if)#du fu
IOU1(config-if)#no sh
IOU1(config-if)#int e0/1
IOU1(config-if)#ip address 192.168.20.2 255.255.255.0
IOU1(config-if)#du fu
IOU1(config-if)#no sh

IOU1(config)#arp 192.168.10.1  0800.2723.21dc ARPA
IOU1(config)#arp 192.168.20.1  0800.2723.21dd ARPA
IOU1(config)#do sh arp
Protocol  Address          Age (min)  Hardware Addr   Type   Interface
Internet  192.168.10.1            –   0800.2723.21dc  ARPA
Internet  192.168.20.1            –   0800.2723.21dd  ARPA
IOU1(config)#

The e0/0 and e0/1 IP addresses are configured as 192.168.10.2 and 192.168.20.2. In fact, the exact addressing doesn't matter much for TRex, because we have routes to forward traffic out of the appropriate interfaces to reach the TRex interfaces.

On the router set routes to the emulated client and servers:

ip route 16.0.0.0 255.0.0.0 192.168.10.1
ip route 48.0.0.0 255.0.0.0 192.168.20.1

For this lab we will generate IMIX traffic (a 64-byte UDP packet profile) from emulated clients and servers, using the virtual IP ranges 16.0.0.[1-255] and 48.0.[0.1-255.255] configurable in the traffic profile:

Back to TRex:

cap2/imix_64.yaml

Selection_154

So let's configure our router to route traffic destined to the previous ranges out of the appropriate interfaces.

IOU router:

IOU1(config)#ip route 16.0.0.0 255.0.0.0 192.168.10.1
IOU1(config)#ip route 48.0.0.0 255.0.0.0 192.168.20.1

Start the emulation on TRex:

sudo ./t-rex-64 -f cap2/imix_64.yaml  -d 60 -m 40000  -c 1

Selection_152

You can observe the generated traffic passing through the router with Wireshark

Selection_153

For more information, please refer to:

https://trex-tgn.cisco.com/trex/doc/trex_manual.html#_dns_basic_example


GNS3 + Docker: Internet modem container


Goal: Deploy an Internet modem for GNS3 topologies using a Docker container. The container uses iptables to perform NAT (masquerading) and dnsmasq as a DHCP server for the LAN interfaces.

Used Docker images:

  • ajnouri/internet
  • gns3/openvswitch
  • gns3/endhost
  • gns3/webterm (optional, for GUI browsing)

GNS3 host preparation: this is performed on the GNS3 Linux host

From the GNS3 host console, create a tap interface (tap0) and put it, along with the physical interface (eth0), in a bridge (ex: ovsbr0):

ip tuntap add dev tap0 mode tap user <username>

sudo ovs-vsctl add-br ovsbr0

sudo ovs-vsctl add-port ovsbr0 tap0

sudo ovs-vsctl add-port ovsbr0 eth0

You can use either a Linux bridge (brctl command) or an OpenVswitch bridge (ovs-vsctl command).
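For reference, a minimal Linux-bridge equivalent of the OVS commands above would look like this (a sketch, not from the original post; bridge and interface names are the same assumptions):

sudo brctl addbr br0
sudo brctl addif br0 eth0
sudo brctl addif br0 tap0
sudo ip link set br0 up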

sudo ovs-vsctl show
579f91e6-efc3-480b-96f3-b9f21bfbafb4
    Bridge "ovsbr0"
        Port "tap0"
            Interface "tap0"
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
        Port "eth0"
            Interface "eth0"
    ovs_version: "2.3.0"

Remove the IP address from eth0 (or release its DHCP parameters), then reconfigure the IP address and default gateway (or request DHCP) on the OVS bridge ovsbr0.
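For example (a sketch, not from the original post; the static address and gateway are assumptions):

# release the address from the physical interface
sudo ip addr flush dev eth0
# then either request DHCP on the bridge...
sudo dhclient ovsbr0
# ...or assign a static address and default gateway
# sudo ip addr add 192.168.1.10/24 dev ovsbr0
# sudo ip route add default via 192.168.1.1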

Import containers

1- Create a new Docker template in GNS3: Edit > Preferences > Docker > Docker containers and then "New".

Choose the "New image" option and the name ajnouri/internet

Screenshot - 170716 - 18:49:03

Accept all default parameters.

2- Create a new Docker template in GNS3: Edit > Preferences > Docker > Docker containers and then "New".

Choose the "New image" option and the name gns3/openvswitch

Screenshot - 170716 - 18:49:12

Set the number of interfaces to eight and accept default parameters with “next” until “finish”.

3- Same for the end-host container. From GNS3, create a new Docker template: Edit > Preferences > Docker > Docker containers and then "New".

Choose the "New image" option and the name gns3/endhost.

Screenshot - 170716 - 18:49:21

Next you can choose a template name for the container; in this case I renamed it "dvpc".

Accept default parameters with “next” until “finish”.

GNS3 Topology

Insert a cloud to the topology and map it to tap0

Screenshot - 170716 - 18:49:31

Build the below topology

Screenshot - 170716 - 18:49:40

Configure containers network interfaces:

Internet container ajnouri/Internet-1

Screenshot - 170716 - 18:50:33

End host container dvpc-1

Screenshot - 170716 - 18:50:49

The WAN interface of the Internet container should have been assigned an IP and gateway from your physical network (connected to internet).

Start the nat.sh script from the /data directory

You will be asked to set the LAN and WAN interfaces as well as the IP range for DHCP clients connected to the LAN interface; the script will then start dnsmasq and set up iptables for NAT (masquerade)
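Roughly, what such a script boils down to is the following (an illustrative sketch, not the actual nat.sh; the interface names and DHCP range are assumptions):

# enable routing and masquerade traffic leaving the WAN interface (eth0)
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# hand out leases to clients on the LAN interface (eth1)
dnsmasq --interface=eth1 --dhcp-range=192.168.100.50,192.168.100.150,12h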

ajnouri/internet-1 console

Screenshot - 170716 - 18:51:15

ajnouri/dvpc-1 console

Screenshot - 170716 - 18:51:37

The other DHCP parameters assigned to the client are taken from the DHCP parameters of the Internet device's WAN interface.

Connectivity check

Selection_110

Let’s have fun! Now that we have internet connectivity, install a text-based browser package on the end host container

Selection_111

Start elinks and browse Internet

Selection_112

For a more comfortable browsing experience, you can use the image gns3/webterm.

Create a new Docker template

Selection_113

Choose vnc as the console type to allow GUI browsing with Firefox

Selection_114

And keep the remaining default parameters.

Insert the image and connect it to the topology as follows:

Selection_115

Set the container interface as a DHCP client

Selection_116

Start the stopped containers and open a console (vnc) to the Webterm container.

(gns3/openvswitch doesn't need any configuration)

Selection_117

You should get this

Selection_118

 

 

 

 

 

DockerVPC: Using containers in GNS3 as Linux Virtual hosts instead of VPCS


More updated content about GNS3 and natively integrated Docker.

Introduction

I would like to share with you DockerVPC, a bash script that helps run containers for use within GNS3 as rich virtual end-hosts instead of VPCS.

I use it to avoid dealing directly with Docker commands and container IDs each time I want to rapidly deploy some disposable end-host containers inside GNS3.

For now it runs only on Linux platforms, and it has been tested on Ubuntu, RedHat and OpenSUSE.

Using DockerVPC doesn't require knowledge of Docker containers, but I still encourage you to take a look at this short introduction.

By the way, VIRL in its recent updates introduced LXC containers to simulate an Ubuntu server (a multi-process environment) as well as a single-process container for iperf.

It is possible to run Docker containers on Windows or Mac OS X using the lightweight boot2docker virtual machine or the newer Docker tool Kitematic. The issue is that there is no tool like pipework for Windows or Mac to set up additional interfaces. I use this as a temporary solution, knowing that Docker is on its way to being integrated into GNS3; until then, you can already take maximum profit of containers inside GNS3. (See Issues and limitations below)

The linux image used by DockerVPC is pre-built with the following apps:

  • SSH server.
  • Apache + PHP
  • Ostinato / D-ITG / Iperf.
  • BIRD Internet routing daemon.
  • Linphone / sipp / pjsua. (VoIP host-to-host through GNS3 works perfectly)
  • IPv6 THC tools.
  • VLC (VideoLAN).
  • Qupzilla browser + java & html5 plugins / links.
  • vSFTPd server + ftp client.
  • And many other tools: inetutils-traceroute, iputils-tracepath, mtr..

Which makes it almost a full-fledged Linux host.

dockervpc

By default containers are connected to the host through the docker0 bridge; this tool allows you to connect running containers to GNS3 through additional bridge interfaces, so you can bind them to cloud elements in your GNS3 topology. In other words, the containers run independently of GNS3. More on that in the simple lab.

Additionally, this script allows you to separately manage additional container images like cacti server or a 16-port (host bridges) OpenVSwitch.

For now, all you have to do is install the required applications and clone the repository

Installing requirements

You will need: git, docker, pipework and lxterminal.

1. git

sudo apt-get install git

2. Docker, easy to install

docker -v
Docker version 1.8.1, build d12ea79

3. pipework, a simple yet powerful bash script for advanced Docker networking

sudo bash -c "curl https://raw.githubusercontent.com/jpetazzo/pipework/master/pipework > /usr/local/bin/pipework"
sudo chmod a+x /usr/local/bin/pipework

4. lxterminal

lxterminal is not required anymore; the script detects the terminal in use and opens interactive terminal access to the containers with it.

To use docker as non-root user

sudo usermod -aG docker {user}

Clone DockerVPC repository

git clone https://github.com/AJNOURI/DockerVPC

cd DockerVPC

Here are some examples (on my GNS3 community blog) of how to use DockerVPC container with GNS3.

Once the installation is done and the images pulled, creating virtual end-hosts is a matter of seconds.

DockerVPC labs

Issues and limitations:

  • Originally, Docker containers were not meant to run GUI applications; this is a workaround brought by the Docker community (mounting the Docker host X11 and sound devices), so we must expect some issues with it.
  • By default, Docker networking uses a single interface bridged to docker0. So, using additional container interfaces brings additional complexity to the networking configuration.
  • DockerVPC relies on pipework, an external script for advanced networking. Though this is an advantage compared to the (for now) limited integrated networking functionality, it brings new challenges.
  • Bridge interfaces created with pipework do not persist after stopping the container or rebooting the Docker host, so make sure to reconfigure your container networking parameters after you restart a stopped container.

This brings us to the conclusion that using Docker containers this way is NOT MEANT FOR PRODUCTION !!!

The purpose of DockerVPC is to hopefully give GNS3 users more flexibility with end-host simulation.

Hope you will find it useful!

AJ


Routing between Docker containers using GNS3.


The idea is to route (IPv4 and IPv6) between Docker containers using GNS3, and to use them as end-hosts instead of virtual machines.

Containers use only the resources necessary for the application they run. They use an image of the host file system and can share the same environment (binaries and libraries).

On the other hand, virtual machines require entire OSes, with reserved RAM and disk space.

Virtual machines vs Docker containers

 

If you are not familiar with Docker, I urge you to take a look at the excellent short introduction below and at some additional explanation from the Docker site:

 

 

As of now, Docker has limited networking functionality. This is where pipework comes to the rescue. Pipework allows more advanced networking settings, like adding new interfaces, assigning IPs from different subnets, setting gateways, and much more…

To route between containers using your own GNS3 topology (the sky is the limit!), pipework lets you create a new interface inside a running container, connect it to a host bridge interface, give it an IP/mask in any subnet you want, and set a default gateway pointing to a device in GNS3. Consequently, all egress traffic from the container is routed into your GNS3 topology.

 

GNS3 connection to a Docker container

 

How pipework connects and exposes the container network

Lab requirements:

Docker:
https://docs.docker.com/installation/ubuntulinux/#docker-maintained-package-installation
Pipework:

sudo bash -c "curl https://raw.githubusercontent.com/jpetazzo/pipework/master/pipework\
 > /usr/local/bin/pipework"

For each container, we will build a Docker image, run a container with an interactive terminal, and set its networking parameters (IP and default gateway).

To demonstrate docker flexibility, we will use 4 docker containers with 4 different subnets:

 

 

This is how containers are built for this lab:

 


Here is the general workflow for each container.

1- build image from Dockerfile (https://docs.docker.com/reference/builder/):

An image is readonly.

sudo docker build -t <image-tag> .

Or (docker v1.5) sudo docker build -t <image-tag> <DockerfileLocation>

2- Run the built image:

Spawn and run a writable container with interactive console.

The parameters of this command may differ slightly for the GUI containers.

sudo docker run -t -i <image id from `sudo docker images`> /bin/bash

3- Set container networking:

Create a host bridge interface and link it to a new interface inside the container; assign an IP and a new default gateway to the container interface.

sudo pipework <bridge> -i <int> <container id from `sudo docker ps`> <ip/mask>@<gateway-ip>

 

To avoid manipulating image IDs and container IDs for each of the images and containers, I use a bash script to build and run all the containers automatically:

https://github.com/AJNOURI/Docker-files/blob/master/gns3-docker.sh

 

#!/bin/bash
# Build each image only if it is not already present locally
IMGLIST="$(sudo docker images | awk '{ print $1; }')"
[[ $IMGLIST =~ "mybimage" ]] || sudo docker build -t mybimage -f phusion-dockerbase .
[[ $IMGLIST =~ "myapache" ]] || sudo docker build -t myapache -f apache-docker .
[[ $IMGLIST =~ "myfirefox" ]] || sudo docker build -t myfirefox -f firefox-docker .

BASE_I1="$(sudo docker images | grep mybimage | awk '{ print $3; }')"
lxterminal -e "sudo docker run -t -i --name baseimage1 $BASE_I1 /bin/bash"
sleep 2
BASE_C1="$(sudo docker ps | grep baseimage1 | awk '{ print $1; }')"
sudo pipework br4 -i eth1 $BASE_C1 192.168.44.1/24@192.168.44.100 

BASE_I2="$(sudo docker images | grep mybimage | awk '{ print $3; }')"
lxterminal -e "sudo docker run -t -i --name baseimage2 $BASE_I2 /bin/bash"
sleep 2
BASE_C2="$(sudo docker ps | grep baseimage2 | awk '{ print $1; }')"
sudo pipework br5 -i eth1 $BASE_C2 192.168.55.1/24@192.168.55.100 

APACHE_I1="$(sudo docker images | grep myapache | awk '{ print $3; }')"
lxterminal -t "Base apache" -e "sudo docker run -t -i --name apache1 $APACHE_I1 /bin/bash"
sleep 2
APACHE_C1="$(sudo docker ps | grep apache1 | awk '{ print $1; }')"
sudo pipework br6 -i eth1 $APACHE_C1 192.168.66.1/24@192.168.66.100 

lxterminal -t "Firefox" -e "sudo docker run -ti --name firefox1 --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix myfirefox"
sleep 2
FIREFOX_C1="$(sudo docker ps | grep firefox1 | awk '{ print $1; }')"
sudo pipework br7 -i eth1 $FIREFOX_C1 192.168.77.1/24@192.168.77.100

 

And we end up with the following containers:

Containers, images and dependencies.


 

GNS3

All you have to do is bind a separate cloud to each bridge interface (br4, br5, br6 and br7) created by pipework, and then connect them to the appropriate segment in your topology.

 

Lab topology

Note that the GNS3 topology is already configured for IPv6, so as soon as you start the routers, the Docker containers will be assigned IPv6 addresses from the routers through SLAAC (Stateless Address Autoconfiguration), which makes them reachable over IPv6.
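To confirm the SLAAC addresses from inside a container (a generic check, not part of the original post; the container name comes from the script above, and iproute2 is assumed to be present in the image):

sudo docker exec baseimage1 ip -6 addr show eth1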

 

Here is a video on how to launch the lab:


 

Cleaning up

To clean your host of all containers and images, use the following bash script:

https://github.com/AJNOURI/Docker-files/blob/master/clean_docker.sh, which uses the docker commands below:

Stop running containers:

  • sudo docker stop <container id’s from `sudo docker ps`>

Remove the stopped container:

  • sudo docker rm <container id’s from `sudo docker ps -a`>

Remove the image:

  • sudo docker rmi <image id’s from `sudo docker images`>
sudo ./clean_docker.sh
Stopping all running containers...
bf3d37220391
f8ad6f5c354f
Removing all stopped containers...
bf3d37220391
f8ad6f5c354f
Erasing all images...
Make sure you are generating image from a Dockerfile
or have pushed your images to DockerHub.
*** Do you want to continue? No

I answered "No" because I still need those images to spawn containers; you can answer "Yes" if you don't need the images anymore or if you need to change them.


 


EIGRP SIA (Stuck-In-Active) through animations.


EIGRP SIA (Stuck-In-Active) process through animations:

“Active” = Actively looking for a route to a network (Successor)

Without SIA

Browse in separate page

With SIA

Browse in separate page
