futurewei-cloud / Distrinet

Distributed Network emulator, based on Mininet
MIT License

Explore the extensibility of Distrinet based on Neutron #46

Open jiawei96-liu opened 1 year ago

jiawei96-liu commented 1 year ago
  1. Deploy openstack @jiawei96-liu @gethurb
  2. Sort out Neutron's API according to Alcor's API @jiawei96-liu
  3. Try to set up the openstack compute node in docker and build a container image of it. There are three possible approaches:

    a. Convert the image of the deployed compute node in KVM into a docker image @jiawei96-liu
    b. Start with a clean docker container of ubuntu:20.04, deploy and containerize the compute node of openstack @jiawei96-liu
    c. Build the compute node based on the kolla project @gethurb

  4. Deploy the openstack compute node in Distrinet
  5. Performance test
jiawei96-liu commented 1 year ago

1. Deploy openstack

We used devstack to build two KVM virtual machines on the local server and deployed a complete openstack. @jiawei96-liu In addition, we deployed another openstack using kolla. @gethurb

jiawei96-liu commented 1 year ago

2. Sort out Neutron's API

Ways to get an openstack token:

a. CLI: openstack token issue
b. API (curl, Postman)

Get the API endpoints and ports: openstack endpoint list
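
For reference, a minimal sketch of getting a token over the API with curl (Keystone v3 password auth; the controller host name, port 5000, and the admin/admin credentials are taken from the configs later in this thread and may differ in your deployment):

    # The token comes back in the X-Subject-Token response header.
    curl -si -X POST http://controller:5000/v3/auth/tokens \
      -H "Content-Type: application/json" \
      -d '{
        "auth": {
          "identity": {
            "methods": ["password"],
            "password": {
              "user": {"name": "admin", "domain": {"name": "Default"}, "password": "admin"}
            }
          },
          "scope": {"project": {"name": "admin", "domain": {"name": "Default"}}}
        }
      }' | grep -i x-subject-token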

jiawei96-liu commented 1 year ago

3.a Convert the image of the deployed compute node in KVM into docker image

The KVM image of the openstack compute node is openstack-compute-node.qcow2. I then ran the following:

sudo qemu-img convert -f qcow2 -O raw openstack-compute-node.qcow2 openstack-compute-node.raw
sudo fdisk -lu openstack-compute-node.raw
sudo mkdir openstack-compute-node
sudo mount -o loop,rw,offset=1048576 openstack-compute-node.raw openstack-compute-node
cd openstack-compute-node
tar -zvf openstack-compute-node.tar.gz
cd ..
sudo umount openstack-compute-node
cat openstack-compute-node/openstack-compute-node.tar.gz | sudo docker import -c "EXPOSE 22" - openstack-compute-node

The result is a docker image named openstack-compute-node, but its size is 0. More specifically, the size of the .raw file is normal, but the .tar.gz file is less than 100 KB. The cause of the problem is still unclear, so I am ready to try idea 3.b.
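
For comparison, a minimal sketch of the usual mount-and-import flow with the same file names (note that the tar invocation above has no create flag and no file list, which is one likely reason the archive came out nearly empty):

    sudo qemu-img convert -f qcow2 -O raw openstack-compute-node.qcow2 openstack-compute-node.raw
    sudo mkdir -p openstack-compute-node
    # The offset must match the start of the root partition reported by fdisk -lu.
    sudo mount -o loop,ro,offset=1048576 openstack-compute-node.raw openstack-compute-node
    # Stream the mounted filesystem straight into docker import; no intermediate .tar.gz is needed.
    sudo tar -C openstack-compute-node -c . | sudo docker import -c "EXPOSE 22" - openstack-compute-node
    sudo umount openstack-compute-node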

jiawei96-liu commented 1 year ago

3.b Start with a clean docker container of ubuntu:20.04, deploy and containerize the computing nodes of openstack

docker pull ubuntu:20.04
docker run -itd --name openstack-compute --privileged=true -h openstack-compute --network=host --cap-add=NET_ADMIN --init ubuntu:20.04 bash
apt update
apt install -y iproute2 net-tools iputils-ping sudo vim git python3
sudo apt-get install python3.8-distutils
sudo apt install iptables
git clone https://github.com/openstack/devstack.git
mkdir ~/.pip
vim ~/.pip/pip.conf

[global]
index-url = https://mirrors.aliyun.com/pypi/simple/
extra-index-url = https://pypi.tuna.tsinghua.edu.cn/simple
timeout = 60

devstack/tools/create-stack-user.sh
mv devstack /opt/stack/devstack
chown -R stack:stack /opt/stack/devstack
su - stack
cd devstack
mkdir ~/.pip
vim ~/.pip/pip.conf

git branch -a
git checkout stable/yoga
vim local.conf

#Compute node configuration script
#jiawei.liu/2020.12.14
#MORE HELP:
# https://docs.openstack.org/devstack/latest/configuration.html#local-conf
# https://docs.openstack.org/devstack/latest/guides/neutron.html
# https://docs.openstack.org/zh_CN/install-guide/launch-instance.html#launch-instance-networks
[[local|localrc]]
#===================Parameter Setting===============
CONTROL_IP=172.16.50.90
MY_IP=172.17.0.2
MY_NIC=eth0
#===================Basic Setting===================
#Minimal Configuration
ADMIN_PASSWORD=admin
MYSQL_PASSWORD=admin
RABBIT_PASSWORD=admin
SERVICE_PASSWORD=admin
#Installation Directory
DEST=/opt/stack
#Use TryStack git mirror
GIT_BASE=http://git.trystack.cn
NOVNC_REPO=http://git.trystack.cn/kanaka/noVNC.git
SPICE_REPO=http://git.trystack.cn/git/spice/spice-html5.git
USE_PYTHON3=True
#By default stack.sh only clones the project repos if they do not exist in $DEST. 
#stack.sh will freshen each repo on each run if RECLONE is set to yes. This avoids 
#having to manually remove repos in order to get the current branch from $GIT_BASE.
RECLONE=no
#By default stack.sh only installs Python packages if no version is currently 
#installed or the current version does not match a specified requirement. 
#If PIP_UPGRADE is set to True then existing required Python packages will be 
#upgraded to the most recent version that matches requirements
PIP_UPGRADE=True
#The Identity API v2 is deprecated as of Mitaka and it is recommended to 
#only use the v3 API. It is possible to setup keystone without v2 API, by doing:
ENABLE_IDENTITY_V2=False
#Database type
DATABASE_TYPE=mysql
#========================IP=========================
#IP_VERSION can be used to configure Neutron to create either an IPv4, IPv6, 
#or dual-stack self-service project data-network with either IP_VERSION=4, 
#IP_VERSION=6, or IP_VERSION=4+6 respectively.
IP_VERSION=4
#DevStack can enable service operation over either IPv4 or IPv6 by setting 
#SERVICE_IP_VERSION to either SERVICE_IP_VERSION=4 or SERVICE_IP_VERSION=6 respectively.
#When set to 4 devstack services will open listen sockets on 0.0.0.0 and 
#service endpoints will be registered using HOST_IP as the address.
#When set to 6 devstack services will open listen sockets on :: and service 
#endpoints will be registered using HOST_IPV6 as the address.
SERVICE_IP_VERSION=4
# ``HOST_IP`` and ``HOST_IPV6`` should be set manually for best results if
# the NIC configuration of the host is unusual, i.e. ``eth1`` has the default
# route but ``eth0`` is the public interface.  They are auto-detected in
# ``stack.sh`` but often is indeterminate on later runs due to the IP moving
# from an Ethernet interface to a bridge on the host. Setting it here also
# makes it available for ``openrc`` to include when setting ``OS_AUTH_URL``.
# Neither is set by default.
HOST_IP=$MY_IP
#HOST_IPV6=2001:db8::7
SERVICE_HOST=$CONTROL_IP
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
#======================Logging=======================
#By default stack.sh output is only written to the console where it runs. 
#It can be sent to a file in addition to the console by setting LOGFILE 
#to the fully-qualified name of the destination log file.
#A timestamp will be appended to the given filename for each run of stack.sh.
LOGFILE=$DEST/logs/stack.sh.log
#Old log files are cleaned automatically if LOGDAYS is set to 
#the number of days of old log files to keep.
LOGDAYS=1
#Some coloring is used during the DevStack runs to 
#make it easier to see what is going on. 
LOG_COLOR=True
#When using the logfile, by default logs are sent to the console and the file. 
#You can set VERBOSE to false if you only wish the logs to be sent to the file 
#(this may avoid having double-logging in some cases where you are capturing 
#the script output and the log files). If VERBOSE is true you can additionally 
#set VERBOSE_NO_TIMESTAMP to avoid timestamps being added to each output line 
#sent to the console.
VERBOSE=True
VERBOSE_NO_TIMESTAMP=True
#======================Image=======================
#Default guest-images are predefined for each type of hypervisor and their 
#testing-requirements in stack.sh. Setting DOWNLOAD_DEFAULT_IMAGES=False 
#will prevent DevStack downloading these default images; 
DOWNLOAD_DEFAULT_IMAGES=False
IMAGE_URLS="http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img"
#======================Neutron=====================
#Neutron options
#NEUTRON_CREATE_INITIAL_NETWORKS=False
#flat_interface and public_interface
#https://www.cnblogs.com/IvanChen/p/4489406.html
FLAT_INTERFACE=$MY_NIC
PUBLIC_INTERFACE=$MY_NIC
#======================Service=====================
ENABLED_SERVICES=n-cpu,rabbit,q-agt,placement-client,n-api-meta,n-novnc,c-vol
#======================Console=====================
NOVA_VNC_ENABLED=True
NOVNCPROXY_URL="http://$SERVICE_HOST:6080/vnc_auto.html"
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
VNCSERVER_LISTEN=$HOST_IP
##END

FORCE=yes ./stack.sh

ERROR:

+./stack.sh:main:804                       sudo systemctl restart systemd-journald
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
++./stack.sh:main:804                       err_trap
++./stack.sh:err_trap:562                   local r=1
cj-chung commented 1 year ago

Please refer to these materials:

  1. https://github.com/int32bit/docker-nova-compute
  2. https://www.openstack.org/videos/summits/barcelona-2016/dockerizing-the-hard-services-neutron-and-nova
jiawei96-liu commented 1 year ago

nova-compute container build process (host network, provider network)

  1. Docker installation https://www.runoob.com/docker/ubuntu-docker-install.html

  2. Docker file:

    
    FROM ubuntu:22.04
    # Set the container timezone to Beijing time (Asia/Shanghai)
    ENV TZ=Asia/Shanghai
    RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

Fix "invoke-rc.d: policy-rc.d denied execution of start.":

RUN printf '#!/bin/sh\nexit 0' > /usr/sbin/policy-rc.d

Fix "invoke-rc.d: could not determine current runlevel":

ENV RUNLEVEL=1

Fix "debconf: delaying package configuration, since apt-utils is not installed" and "debconf: unable to initialize frontend: Dialog":

ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
    apt-get install --assume-yes apt-utils

RUN apt-get update && \
    apt-get -y install sudo dialog
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections

Install nova-compute:

RUN apt-get update && \
    apt-get install -y nova-compute

Install some other tools:

RUN apt-get install -y net-tools vim iputils-ping
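
Putting the fragments above together, the whole Dockerfile looks roughly like this (assembled as a sketch from the snippets above):

    FROM ubuntu:22.04

    # Set the container timezone to Beijing time (Asia/Shanghai)
    ENV TZ=Asia/Shanghai
    RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

    # Fix "invoke-rc.d: policy-rc.d denied execution of start."
    RUN printf '#!/bin/sh\nexit 0' > /usr/sbin/policy-rc.d

    # Fix "invoke-rc.d: could not determine current runlevel"
    ENV RUNLEVEL=1

    # Fix "debconf: delaying package configuration, since apt-utils is not installed"
    # and "debconf: unable to initialize frontend: Dialog"
    ARG DEBIAN_FRONTEND=noninteractive
    RUN apt-get update && \
        apt-get install --assume-yes apt-utils
    RUN apt-get update && \
        apt-get -y install sudo dialog
    RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections

    # Install nova-compute
    RUN apt-get update && \
        apt-get install -y nova-compute

    # Install some other tools
    RUN apt-get install -y net-tools vim iputils-ping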

3. Docker image build:

docker build -t cnimage:v1 .

4. Add a NIC
> a. Replace apt source list

sudo sed -i "s@http://.*archive.ubuntu.com@http://mirrors.aliyun.com@g" /etc/apt/sources.list
sudo sed -i "s@http://.*security.ubuntu.com@http://mirrors.aliyun.com@g" /etc/apt/sources.list

> b. Install tunctl tools

sudo apt update
sudo apt install uml-utilities

> c. Create a tap device

root@compute:~# tunctl -t tap0-cn2
Set 'tap0-cn2' persistent and owned by uid 0
root@compute:~# sudo ip link set tap0-cn2 up
root@compute:~# ifconfig tap0-cn2 172.16.62.7/24

root@compute:~# ifconfig -a
...
tap0-cn2: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
inet 172.16.62.7  netmask 255.255.255.0  broadcast 172.16.62.255
ether 92:c4:5d:b6:cb:22  txqueuelen 1000  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
...

5. Run a container

root@compute:~/compute-docker# docker images
REPOSITORY   TAG   IMAGE ID       CREATED          SIZE
cnimage      v1    2625a89b0c45   13 seconds ago   1.13GB
...
root@controller:~# docker run -itd --name cn2 --privileged=true --net=host -h cn2 --init cnimage:v1 bash
364898c557d4b520e61602ed22a68fa2d079296bb7d6d86faf8a6a6517823cb3
root@controller:~# docker exec -it cn2 bash

6. Configure the nova-compute service
https://docs.openstack.org/nova/yoga/install/compute-install-ubuntu.html
> a. nova.conf

root@cn1:~# cat /etc/nova/nova.conf | grep -Ev '#|^$'
[DEFAULT]
log_dir = /var/log/nova
lock_path = /var/lock/nova
state_path = /var/lib/nova
transport_url = rabbit://openstack:sdn123456@controller
my_ip = 172.16.62.7
compute_driver = fake.FakeDriver
[api]
auth_strategy = keystone
[api_database]
connection = sqlite:////var/lib/nova/nova_api.sqlite
[barbican]
[barbican_service_user]
[cache]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[cyborg]
[database]
connection = sqlite:////var/lib/nova/nova.sqlite
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[image_cache]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = sdn123456
[libvirt]
[metrics]
[mks]
[neutron]
[notifications]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = sdn123456
[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[workarounds]
[wsgi]
[zvm]
[cells]
enable = False
[os_region_name]
openstack =

> b. nova-compute.conf

root@cn1:~# cat /etc/nova/nova-compute.conf
[DEFAULT]
compute_driver=fake.FakeDriver

7. Restart the nova-compute service

root@cn2:/# service nova-compute restart

jiawei96-liu commented 1 year ago

How to use FakeDriver in openstack

  1. compute node and controller node
    vim /etc/nova/nova.conf
    add compute_driver = fake.FakeDriver to the [DEFAULT] section

  2. compute node and controller node
    vim /etc/nova/nova-compute.conf
    set compute_driver=fake.FakeDriver and delete the [libvirt] section (a scripted version of these edits is sketched after this list)

  3. restart service

    service nova-compute restart
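
A scripted version of the edits above, as a sketch (it assumes crudini is available, e.g. via apt install crudini, and just automates the same nova.conf / nova-compute.conf changes):

    # Point Nova at the fake driver on both the compute and controller nodes.
    sudo apt install -y crudini
    sudo crudini --set /etc/nova/nova.conf DEFAULT compute_driver fake.FakeDriver
    sudo crudini --set /etc/nova/nova-compute.conf DEFAULT compute_driver fake.FakeDriver
    # Drop the [libvirt] section from nova-compute.conf since libvirt is no longer used.
    sudo crudini --del /etc/nova/nova-compute.conf libvirt
    # Restart the service to pick up the new driver.
    sudo service nova-compute restart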
gethurb commented 1 year ago

Docker test network creation steps. Say the tenant network interface is eth0 with IP 172.16.62.8. First, flush the IP of eth0:

ip addr flush eth0

Then create a new docker network that uses the tenant subnet, with eth0's former IP as the gateway of the bridge:

docker network create --subnet 172.16.62.0/24 --gateway 172.16.62.8 tenant_network

You will find a new bridge created by docker, named br-{docker network_id}.

Add eth0 to this bridge:

brctl addif br-{docker network_id} eth0

If everything is OK, you can now ping other addresses in the subnet through this new docker bridge.

After this, you should be able to ping between the controller and the host where the docker containers are located, using the new docker tenant-network bridge. Then just create containers attached to this network:

docker run -itd --name cn1 -h cn1 --net tenant_network --ip 172.16.62.150 --privileged=true --init cnimage:v1 bash

Last step: confirm the controller and these docker containers can ping each other.
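
Putting these steps together, a rough end-to-end sketch (the interface name, subnet, gateway, container IP, and image tag are the example values from above; the bridge name is derived from the docker network ID):

    #!/bin/sh
    # Move a physical NIC's subnet onto a docker bridge network and attach a compute container to it.
    NIC=eth0
    SUBNET=172.16.62.0/24
    GATEWAY=172.16.62.8          # the IP previously held by $NIC
    NET_NAME=tenant_network

    ip addr flush "$NIC"                                   # release the IP from the NIC
    docker network create --subnet "$SUBNET" --gateway "$GATEWAY" "$NET_NAME"
    BR="br-$(docker network inspect -f '{{.Id}}' "$NET_NAME" | cut -c1-12)"
    brctl addif "$BR" "$NIC"                               # bridge the physical NIC into the docker network
    docker run -itd --name cn1 -h cn1 --net "$NET_NAME" --ip 172.16.62.150 --privileged=true --init cnimage:v1 bash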

jiawei96-liu commented 1 year ago

Nova-compute + OVS-agent container build process (non-host network, self-service network)

1. Preparation

A separate physical host with three NICs (in addition to the controller and compute nodes)

NIC1 (enp1s0f1): 172.16.41.0/24   [gw: 172.16.41.254]  network41   provider network
NIC2 (enp2s0):   172.16.62.0/24   [gw: 172.16.62.254]  network62   underlay network
NIC3 (enp1s0f0): 192.168.123.0/24 [gw: 192.168.123.1]  network123  manager network

a. Docker installation: https://www.runoob.com/docker/ubuntu-docker-install.html

c. Pull Docker Image:

Note: The openstack configuration files in the container have already been set up according to the official installation documents (self-service network mode). We replaced the Nova compute driver with the fake driver and rewrote the fake driver code so that it simulates a VM with a network namespace inside the container.

docker pull jiawei96liu/cnimage:v3

d. Network preparation

>>**network123**


root@compute-node3:~# ifconfig
enp1s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.168.123.14  netmask 255.255.255.0  broadcast 192.168.123.255
inet6 fe80::2e53:4aff:fe09:a40a  prefixlen 64  scopeid 0x20<link>
ether 2c:53:4a:09:a4:0a  txqueuelen 1000  (Ethernet)
RX packets 30  bytes 11568 (11.5 KB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 54  bytes 6782 (6.7 KB)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
device memory 0xb10e0000-b10fffff

enp1s0f1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 172.16.41.14  netmask 255.255.255.0  broadcast 172.16.41.255
inet6 fe80::2e53:4aff:fe09:a40b  prefixlen 64  scopeid 0x20<link>
ether 2c:53:4a:09:a4:0b  txqueuelen 1000  (Ethernet)
RX packets 61  bytes 7464 (7.4 KB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 101  bytes 20855 (20.8 KB)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
device memory 0xb10c0000-b10dffff

enp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 172.16.62.14  netmask 255.255.255.0  broadcast 172.16.62.255
inet6 fe80::a6ae:12ff:fe79:c981  prefixlen 64  scopeid 0x20<link>
ether a4:ae:12:79:c9:81  txqueuelen 1000  (Ethernet)
RX packets 6  bytes 692 (692.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 39  bytes 4894 (4.8 KB)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 1000  (Local Loopback)
RX packets 23  bytes 2918 (2.9 KB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 23  bytes 2918 (2.9 KB)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

root@compute-node3:~# ip addr flush enp1s0f0
root@compute-node3:~# docker network create --subnet 192.168.123.0/24 --gateway 192.168.123.14 network123
7b4a764add84229815716d261059a3f6020e3d60492de187a358fd773af62136
root@compute-node3:~# ifconfig
br-cad82d0389d8: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
inet 192.168.123.14  netmask 255.255.255.0  broadcast 192.168.123.255
ether 02:42:80:e6:6a:1d  txqueuelen 0  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
ether 02:42:c1:bd:f9:c0  txqueuelen 0  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

enp1s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
ether 2c:53:4a:09:a4:0a  txqueuelen 1000  (Ethernet)
RX packets 101  bytes 22695 (22.6 KB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 159  bytes 16887 (16.8 KB)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
device memory 0xb10e0000-b10fffff

enp1s0f1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 172.16.41.14  netmask 255.255.255.0  broadcast 172.16.41.255
inet6 fe80::2e53:4aff:fe09:a40b  prefixlen 64  scopeid 0x20<link>
ether 2c:53:4a:09:a4:0b  txqueuelen 1000  (Ethernet)
RX packets 1588  bytes 611795 (611.7 KB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 1281  bytes 379381 (379.3 KB)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
device memory 0xb10c0000-b10dffff

enp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 172.16.62.14  netmask 255.255.255.0  broadcast 172.16.62.255
inet6 fe80::a6ae:12ff:fe79:c981  prefixlen 64  scopeid 0x20<link>
ether a4:ae:12:79:c9:81  txqueuelen 1000  (Ethernet)
RX packets 346  bytes 41767 (41.7 KB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 283  bytes 42194 (42.1 KB)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 1000  (Local Loopback)
RX packets 235  bytes 23890 (23.8 KB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 235  bytes 23890 (23.8 KB)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

root@compute-node3:~# brctl addif br-cad82d0389d8 enp1s0f0
root@compute-node3:~# ping 192.168.123.1 -I br-cad82d0389d8
PING 192.168.123.1 (192.168.123.1) from 192.168.123.15 br-7b4a764add84: 56(84) bytes of data.
64 bytes from 192.168.123.1: icmp_seq=1 ttl=64 time=0.983 ms
64 bytes from 192.168.123.1: icmp_seq=2 ttl=64 time=0.674 ms
^C
--- 192.168.123.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.674/0.828/0.983/0.154 ms
root@compute-node3:~# ping 192.168.123.11 -I br-7b4a764add84
PING 192.168.123.11 (192.168.123.11) from 192.168.123.15 br-7b4a764add84: 56(84) bytes of data.
64 bytes from 192.168.123.11: icmp_seq=1 ttl=64 time=0.698 ms
64 bytes from 192.168.123.11: icmp_seq=2 ttl=64 time=0.278 ms
^C
--- 192.168.123.11 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1019ms
rtt min/avg/max/mdev = 0.278/0.488/0.698/0.210 ms
root@compute-node3:~#

>>**network41**

root@compute-node3:~# ip addr flush enp1s0f1
root@compute-node3:~# docker network create --subnet 172.16.41.0/24 --gateway 172.16.41.14 network41
00fc4f9af6a5681c68c4626eb7366674d1e4585755387cfceadef0e6097a6ce1
root@compute-node3:~# ifconfig
br-cad82d0389d8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.168.123.14  netmask 255.255.255.0  broadcast 192.168.123.255
inet6 fe80::42:80ff:fee6:6a1d  prefixlen 64  scopeid 0x20<link>
ether 02:42:80:e6:6a:1d  txqueuelen 0  (Ethernet)
RX packets 3206  bytes 861978 (861.9 KB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 63  bytes 8390 (8.3 KB)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

br-f33b2261bc95: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
inet 172.16.41.14  netmask 255.255.255.0  broadcast 172.16.41.255
ether 02:42:11:02:0b:bd  txqueuelen 0  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
ether 02:42:c1:bd:f9:c0  txqueuelen 0  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

enp1s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
ether 2c:53:4a:09:a4:0a  txqueuelen 1000  (Ethernet)
RX packets 3958  bytes 973788 (973.7 KB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 222  bytes 25277 (25.2 KB)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
device memory 0xb10e0000-b10fffff

enp1s0f1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
ether 2c:53:4a:09:a4:0b  txqueuelen 1000  (Ethernet)
RX packets 59829  bytes 5766471 (5.7 MB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 1563  bytes 451977 (451.9 KB)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
device memory 0xb10c0000-b10dffff

enp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 172.16.62.14  netmask 255.255.255.0  broadcast 172.16.62.255
inet6 fe80::a6ae:12ff:fe79:c981  prefixlen 64  scopeid 0x20<link>
ether a4:ae:12:79:c9:81  txqueuelen 1000  (Ethernet)
RX packets 4324  bytes 1003815 (1.0 MB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 485  bytes 69296 (69.2 KB)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
inet6 ::1  prefixlen 128  scopeid 0x10<host>
loop  txqueuelen 1000  (Local Loopback)
RX packets 245  bytes 24620 (24.6 KB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 245  bytes 24620 (24.6 KB)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

root@compute-node3:~# brctl addif br-f33b2261bc95 enp1s0f1
root@compute-node3:~# ping 172.16.41.11 -I br-f33b2261bc95
PING 172.16.41.11 (172.16.41.11) from 172.16.41.14 br-f33b2261bc95: 56(84) bytes of data.
64 bytes from 172.16.41.11: icmp_seq=1 ttl=64 time=0.497 ms
64 bytes from 172.16.41.11: icmp_seq=2 ttl=64 time=0.588 ms
^C
--- 172.16.41.11 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1022ms
rtt min/avg/max/mdev = 0.497/0.542/0.588/0.045 ms
root@compute-node3:~#

>>**network62**

root@compute-node3:~# ip addr flush enp2s0
root@compute-node3:~# docker network create --subnet 172.16.62.0/24 --gateway 172.16.62.14 network62
root@compute-node3:~# docker network list
NETWORK ID     NAME         DRIVER    SCOPE
9d18739b8491   bridge       bridge    local
882c9fbf2903   host         host      local
f33b2261bc95   network41    bridge    local
01c752bd6d51   network62    bridge    local
cad82d0389d8   network123   bridge    local
ca861251ab72   none         null      local
root@compute-node3:~# brctl addif br-01c752bd6d51 enp2s0
root@compute-node3:~# ping 172.16.62.11 -I br-01c752bd6d51
PING 172.16.62.11 (172.16.62.11) from 172.16.62.14 br-01c752bd6d51: 56(84) bytes of data.
64 bytes from 172.16.62.11: icmp_seq=1 ttl=64 time=1.93 ms
64 bytes from 172.16.62.11: icmp_seq=2 ttl=64 time=2.26 ms
^C
--- 172.16.62.11 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 1.925/2.090/2.256/0.165 ms
root@compute-node3:~#


####  2. Run a container
> Note:
> CAP NET_ADMIN: allows performing network administration tasks
> CAP SYS_MODULE: allows loading and unloading kernel modules
> CAP SYS_NICE: allows raising a process's priority and setting the priority of other processes

root@compute-node3:~# docker run -itd --name cn3 -h cn3 --privileged=true --init --cap-add=NET_ADMIN --cap-add=SYS_MODULE --cap-add=SYS_NICE jiawei96liu/cnimage:v3 bash
29fefdd17c62aaf75c6d6fa244019d7b2c474f7f540ff18451aac6da36cb785c

root@compute-node3:~# docker network connect --ip 192.168.123.153 network123 cn3
root@compute-node3:~# docker network connect --ip 172.16.41.153 network41 cn3
root@compute-node3:~# docker network connect --ip 172.16.62.153 network62 cn3

####  3. Configure the container

> a. Configure the route tables

root@compute-node3:~# docker exec -it cn3 bash
root@cn3:/# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 172.17.0.4  netmask 255.255.0.0  broadcast 172.17.255.255
ether 02:42:ac:11:00:04  txqueuelen 0  (Ethernet)
RX packets 30  bytes 3484 (3.4 KB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 192.168.123.153  netmask 255.255.255.0  broadcast 192.168.123.255
ether 02:42:c0:a8:7b:99  txqueuelen 0  (Ethernet)
RX packets 33  bytes 5559 (5.5 KB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 172.16.41.153  netmask 255.255.255.0  broadcast 172.16.41.255
ether 02:42:ac:10:29:99  txqueuelen 0  (Ethernet)
RX packets 194  bytes 88890 (88.8 KB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 172.16.62.153  netmask 255.255.255.0  broadcast 172.16.62.255
ether 02:42:ac:10:3e:99  txqueuelen 0  (Ethernet)
RX packets 27  bytes 4126 (4.1 KB)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
inet 127.0.0.1  netmask 255.0.0.0
loop  txqueuelen 1000  (Local Loopback)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

root@cn3:/# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.17.0.1      0.0.0.0         UG    0      0        0 eth0
172.16.41.0     0.0.0.0         255.255.255.0   U     0      0        0 eth2
172.16.62.0     0.0.0.0         255.255.255.0   U     0      0        0 eth3
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0
192.168.123.0   0.0.0.0         255.255.255.0   U     0      0        0 eth1
root@cn3:/# route add default gw 172.16.41.254
root@cn3:/# route add default gw 172.16.62.254
root@cn3:/# route add default gw 192.168.123.1
root@cn3:/# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.123.1   0.0.0.0         UG    0      0        0 eth1
0.0.0.0         172.16.62.254   0.0.0.0         UG    0      0        0 eth3
0.0.0.0         172.16.41.254   0.0.0.0         UG    0      0        0 eth2
0.0.0.0         172.17.0.1      0.0.0.0         UG    0      0        0 eth0
172.16.41.0     0.0.0.0         255.255.255.0   U     0      0        0 eth2
172.16.62.0     0.0.0.0         255.255.255.0   U     0      0        0 eth3
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0
192.168.123.0   0.0.0.0         255.255.255.0   U     0      0        0 eth1

> b. Configure the Hosts

root@cn3:/# echo "192.168.123.11 controller" >> /etc/hosts
root@cn3:/# cat /etc/hosts
127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
fe00::0         ip6-localnet
ff00::0         ip6-mcastprefix
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters
172.17.0.4      cn3
192.168.123.153 cn3
172.16.41.153   cn3
172.16.62.153   cn3
192.168.123.11  controller

> c. Restart chrony (NTP), ovsdb-server, ovs-vswitchd

root@cn3:/# service chrony restart

root@cn3:/# ovs-vsctl show
ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
root@cn3:/# /usr/share/openvswitch/scripts/ovs-ctl restart
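
Because the container has no systemd as PID 1 (see the stack.sh failure earlier in this thread), every daemon has to be (re)started by hand. A small sketch of a startup script bundling the restarts used in this and the following steps (the script name start-services.sh is ours, not part of any package):

    #!/bin/sh
    # start-services.sh: bring up the daemons the containerized compute node needs.
    set -x
    service chrony restart                              # NTP sync with the controller
    /usr/share/openvswitch/scripts/ovs-ctl restart      # ovsdb-server + ovs-vswitchd
    service nova-compute restart                        # Nova compute (fake driver)
    service neutron-linuxbridge-agent restart           # Neutron agent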

4. Nova deployment

a. Reconfigure nova.conf and restart nova-compute. Note: /etc/nova/nova.conf and /etc/nova/nova-compute.conf have already been configured correctly in the image; you only need to change my_ip in /etc/nova/nova.conf to the container's IP on network123, then restart the nova-compute service. More detail: https://docs.openstack.org/nova/yoga/install/compute-install-ubuntu.html and https://docs.openstack.org/neutron/yoga/install/compute-install-ubuntu.html

root@cn3:/# vim /etc/nova/nova.conf
[DEFAULT]
log_dir = /var/log/nova
lock_path = /var/lock/nova
state_path = /var/lib/nova
transport_url = rabbit://openstack:sdn123456@controller
my_ip = 192.168.123.153
....

root@cn3:/# service nova-compute restart
 * Restarting OpenStack Compute nova-compute                                                                                                                                                                                             start-stop-daemon: warning: failed to kill 2489: No such process
                                                                                                                                                                                                                                  [ OK ]
root@cn3:/# service nova-compute restart
 * Restarting OpenStack Compute nova-compute                                                                                                                                                                                      [ OK ]
root@cn3:/#

b. Verify on the openstack controller node

root@controller-node:/home/sdn# . admin-openrc
root@controller-node:/home/sdn# openstack compute service list --service nova-compute
+--------------------------------------+--------------+----------------+------+---------+-------+----------------------------+
| ID                                   | Binary       | Host           | Zone | Status  | State | Updated At                 |
+--------------------------------------+--------------+----------------+------+---------+-------+----------------------------+
| 59a35d11-09bb-47cd-bd45-0aa84d6f6ba7 | nova-compute | compute-node   | nova | enabled | up    | 2022-09-11T07:05:41.000000 |
| 045c2eca-f112-46d2-97b3-9cdc004dc22b | nova-compute | compute-node-2 | nova | enabled | up    | 2022-09-11T07:05:43.000000 |
| 817003cc-992d-44a9-8ebc-b846456f6e4e | nova-compute | cn1            | nova | enabled | up    | 2022-09-11T07:05:36.000000 |
| a189e111-6d50-4402-94e6-16fca482d291 | nova-compute | cn2            | nova | enabled | up    | 2022-09-11T07:05:43.000000 |
| da645197-9c5d-43dd-b3a9-d33f50ba7a18 | nova-compute | cn3            | nova | enabled | up    | 2022-09-11T07:05:38.000000 |
+--------------------------------------+--------------+----------------+------+---------+-------+----------------------------+
root@controller-node:/home/sdn# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be ignored if the caller is only importing and not executing nova code.
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 65b14135-f1aa-42ba-a33f-6e278efac2b7
Found 0 unmapped computes in cell: 65b14135-f1aa-42ba-a33f-6e278efac2b7
root@controller-node:/home/sdn# openstack hypervisor list
+----+---------------------+-----------------+-----------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP         | State |
+----+---------------------+-----------------+-----------------+-------+
|  1 | compute-node        | fake            | 192.168.123.12  | up    |
|  3 | compute-node-2      | fake            | 192.168.123.13  | up    |
|  4 | cn1                 | fake            | 192.168.123.151 | up    |
|  5 | cn2                 | fake            | 192.168.123.152 | up    |
|  6 | cn3                 | fake            | 192.168.123.153 | up    |
+----+---------------------+-----------------+-----------------+-------+
root@controller-node:/home/sdn#
5. Neutron deployment

a. Reconfigure the Neutron agents and restart them. Note: /etc/neutron/neutron.conf, /etc/neutron/plugins/ml2/linuxbridge_agent.ini, /etc/neutron/plugins/ml2/ml2_conf.ini, /etc/neutron/l3_agent.ini, /etc/neutron/dhcp_agent.ini, and /etc/neutron/plugins/ml2/openvswitch_agent.ini have already been configured correctly; you only need to modify local_ip in /etc/neutron/plugins/ml2/linuxbridge_agent.ini and /etc/neutron/plugins/ml2/openvswitch_agent.ini, then restart all openstack services. More detail: https://docs.openstack.org/neutron/yoga/install/compute-install-ubuntu.html

    
    root@cn3:/# vim  /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    ....
    [vxlan]
    enable_vxlan = true
    local_ip = 172.16.62.153
    l2_population = true
    ....

root@cn3:/# vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
....
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 172.16.62.153

bridge_mappings = br-ex

bridge_mappings = provider:br-ex

bridge_mappings =

....

root@cn3:/# service neutron-linuxbridge-agent restart

On the openstack controller node:

root@controller-node:/home/sdn# openstack network agent list
+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host            | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+
| 0345f840-7f0e-4d75-bfae-86b5aabc4019 | Linux bridge agent | cn3             | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 0a9f4622-dab2-4a48-851a-a82ca4750ffe | Open vSwitch agent | compute-node-2  | None              | :-)   | UP    | neutron-openvswitch-agent |
| 0f7da629-404e-47aa-b393-192e4e9cfd94 | DHCP agent         | compute-node-2  | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 139390da-aeb8-47a3-b9cb-7d013f1ef5f5 | Open vSwitch agent | cn1             | None              | :-)   | UP    | neutron-openvswitch-agent |
| 22e7648a-9883-4de1-9671-6c874287b9d6 | Linux bridge agent | compute-node-2  | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 3c41e2b1-8245-4ef7-b1df-131dddb4caaf | Linux bridge agent | cn2             | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 52c30355-b4cc-46ec-bfd0-5e2fc14ab8ea | L3 agent           | compute-node    | nova              | :-)   | UP    | neutron-l3-agent          |
| 530c32ab-817a-4080-af19-b34fbc3f778d | L3 agent           | cn3             | nova              | :-)   | UP    | neutron-l3-agent          |
| 54fcb2ee-f137-4b1c-8264-ef134a2d0930 | Metadata agent     | controller-node | None              | :-)   | UP    | neutron-metadata-agent    |
| 5541470f-1e84-45de-be7a-c4ab0ec26802 | Open vSwitch agent | cn3             | None              | :-)   | UP    | neutron-openvswitch-agent |
| 55f152cb-3f26-4936-ab39-5c614c8b7649 | L3 agent           | cn2             | nova              | :-)   | UP    | neutron-l3-agent          |
| 5baf9988-f775-426c-9657-a662bd17dbd6 | DHCP agent         | controller-node | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 5e38fffa-9149-41bf-bee4-e15ec2db9231 | Metadata agent     | compute-node    | None              | :-)   | UP    | neutron-metadata-agent    |
| 6210ac58-97b5-42b8-a4d9-264fe114d725 | DHCP agent         | cn2             | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 631c7571-491e-4e8a-ad58-1613a94ace8c | Metadata agent     | cn1             | None              | :-)   | UP    | neutron-metadata-agent    |
| 6d97512a-d91d-42b0-b18e-2658f80ae86e | DHCP agent         | cn3             | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 6f2cf629-6225-46e2-bbaf-7f8d8e56328a | Linux bridge agent | compute-node    | None              | XXX   | UP    | neutron-linuxbridge-agent |
| 89c421b9-db58-4366-b2d4-708bab630503 | Metadata agent     | compute-node-2  | None              | :-)   | UP    | neutron-metadata-agent    |
| 9c3df384-2ce5-47db-8765-6c445360dcbc | DHCP agent         | cn1             | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 9c444591-7a52-4649-8e98-4e5a465f5749 | Metadata agent     | cn2             | None              | :-)   | UP    | neutron-metadata-agent    |
| a5e9c285-acf9-474a-af15-ba76ed085717 | Linux bridge agent | cn1             | None              | :-)   | UP    | neutron-linuxbridge-agent |
| aaee487f-fd7c-4e4c-942c-78ec28d47e9b | Open vSwitch agent | cn2             | None              | :-)   | UP    | neutron-openvswitch-agent |
| bf430e06-8e3d-4aa1-b204-5a898d3cd0b5 | Open vSwitch agent | controller-node | None              | :-)   | UP    | neutron-openvswitch-agent |
| c53d9f41-3d10-494b-8746-7297f359aedb | L3 agent           | cn1             | nova              | :-)   | UP    | neutron-l3-agent          |
| db6222a2-f2e3-448c-a0ac-85385cc73f62 | DHCP agent         | compute-node    | nova              | :-)   | UP    | neutron-dhcp-agent        |
| ddbeeae6-448f-41d4-a6d4-7cf27289516e | Open vSwitch agent | compute-node    | None              | :-)   | UP    | neutron-openvswitch-agent |
| e12fd22f-1d9b-45c1-bd9b-e07455b1d512 | L3 agent           | compute-node-2  | nova              | :-)   | UP    | neutron-l3-agent          |
| e4b35666-ce63-417c-9597-28afa8495f0e | Linux bridge agent | controller-node | None              | XXX   | UP    | neutron-linuxbridge-agent |
| e5604590-da1d-43a5-9663-c84ec2362c69 | Metadata agent     | cn3             | None              | :-)   | UP    | neutron-metadata-agent    |
| f2bb2761-9a86-4b0d-88e0-95bc6305c566 | L3 agent           | controller-node | nova              | :-)   | UP    | neutron-l3-agent          |
+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+

root@controller-node:/home/sdn# openstack server create --flavor m1.nano --image cirros --nic net-id=self-network-1  --availability-zone nova:cn3:cn3 ljw-cn3-net1-1
+-------------------------------------+-----------------------------------------------+
| Field                               | Value                                         |
+-------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                        |
| OS-EXT-AZ:availability_zone         | nova                                          |
| OS-EXT-SRV-ATTR:host                | None                                          |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                          |
| OS-EXT-SRV-ATTR:instance_name       |                                               |
| OS-EXT-STS:power_state              | NOSTATE                                       |
| OS-EXT-STS:task_state               | scheduling                                    |
| OS-EXT-STS:vm_state                 | building                                      |
| OS-SRV-USG:launched_at              | None                                          |
| OS-SRV-USG:terminated_at            | None                                          |
| accessIPv4                          |                                               |
| accessIPv6                          |                                               |
| addresses                           |                                               |
| adminPass                           | TUB2mCzpzQP8                                  |
| config_drive                        |                                               |
| created                             | 2022-09-11T07:27:10Z                          |
| flavor                              | m1.nano (0)                                   |
| hostId                              |                                               |
| id                                  | bfacf7e5-3b80-48a2-a85e-772a7c2382b1          |
| image                               | cirros (a78157b3-ccf2-4f98-a481-8f7686be3713) |
| key_name                            | None                                          |
| name                                | ljw-cn3-net1-1                                |
| progress                            | 0                                             |
| project_id                          | 925d4171701b4300b9f0c4a467921a42              |
| properties                          |                                               |
| security_groups                     | name='default'                                |
| status                              | BUILD                                         |
| updated                             | 2022-09-11T07:27:10Z                          |
| user_id                             | 8472f217119a4075902e19f286a2fb17              |
| volumes_attached                    |                                               |
+-------------------------------------+-----------------------------------------------+
root@controller-node:/home/sdn# openstack server show ljw-cn3-net1-1
+-------------------------------------+----------------------------------------------------------+
| Field                               | Value                                                    |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                   |
| OS-EXT-AZ:availability_zone         | nova                                                     |
| OS-EXT-SRV-ATTR:host                | cn3                                                      |
| OS-EXT-SRV-ATTR:hypervisor_hostname | cn3                                                      |
| OS-EXT-SRV-ATTR:instance_name       | instance-0000003c                                        |
| OS-EXT-STS:power_state              | Running                                                  |
| OS-EXT-STS:task_state               | None                                                     |
| OS-EXT-STS:vm_state                 | active                                                   |
| OS-SRV-USG:launched_at              | 2022-09-11T07:27:13.000000                               |
| OS-SRV-USG:terminated_at            | None                                                     |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| addresses                           | self-network-1=10.10.1.21                                |
| config_drive                        |                                                          |
| created                             | 2022-09-11T07:27:10Z                                     |
| flavor                              | m1.nano (0)                                              |
| hostId                              | a9cc1d5036857b724fab8788bfc6544f3369b29a416f3541dce69516 |
| id                                  | bfacf7e5-3b80-48a2-a85e-772a7c2382b1                     |
| image                               | cirros (a78157b3-ccf2-4f98-a481-8f7686be3713)            |
| key_name                            | None                                                     |
| name                                | ljw-cn3-net1-1                                           |
| progress                            | 0                                                        |
| project_id                          | 925d4171701b4300b9f0c4a467921a42                         |
| properties                          |                                                          |
| security_groups                     | name='default'                                           |
| status                              | ACTIVE                                                   |
| updated                             | 2022-09-11T07:27:13Z                                     |
| user_id                             | 8472f217119a4075902e19f286a2fb17                         |
| volumes_attached                    |                                                          |
+-------------------------------------+----------------------------------------------------------+

On openstack docker container cn3:

root@cn3:/# ip netns list
fake-3f2fc976-31c4-4832-a1a3-891ab4271906
fake-4ad4cf22-55ac-46f7-a122-906e8c9164c7
fake-bfacf7e5-3b80-48a2-a85e-772a7c2382b1 (id: 1)
root@cn3:/# ip netns exec fake-bfacf7e5-3b80-48a2-a85e-772a7c2382b1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
11: tap6ace6044-65: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether fa:16:3e:4c:81:3b brd ff:ff:ff:ff:ff:ff
    inet 10.10.1.21/24 scope global tap6ace6044-65
       valid_lft forever preferred_lft forever
    inet6 fe80::48dd:94ff:fe18:8db4/64 scope link
       valid_lft forever preferred_lft forever
root@cn3:/#
root@cn3:/# ip netns exec fake-bfacf7e5-3b80-48a2-a85e-772a7c2382b1 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.10.1.1       0.0.0.0         UG    0      0        0 tap6ace6044-65
10.10.1.0       0.0.0.0         255.255.255.0   U     0      0        0 tap6ace6044-65
root@cn3:/# ip netns exec fake-bfacf7e5-3b80-48a2-a85e-772a7c2382b1 ping 10.10.2.1
PING 10.10.2.1 (10.10.2.1) 56(84) bytes of data.
64 bytes from 10.10.2.1: icmp_seq=1 ttl=63 time=4.35 ms
64 bytes from 10.10.2.1: icmp_seq=2 ttl=63 time=2.23 ms
^C
--- 10.10.2.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1006ms
rtt min/avg/max/mdev = 2.228/3.290/4.353/1.062 ms
root@cn3:/#