atlanticwave-sdx / atlanticwave-proto

Repo for work on prototype for AtlanticWave/SDX

Deploy production AtlanticWave-SDX controllers #153

Open mcevik0 opened 4 years ago

mcevik0 commented 4 years ago

We will run the controllers for production switches (FIU, SoX, Chile) at RENCI and provision VMs. This will require some work to bring the in-band management VLAN all the way to RENCI, but if it does not work well, this setup will still be useful for the monitoring part and testing the automation.

VM specs:

Hi Chris,

We will make a deployment for the AtlanticWave-SDX project and we need 5 VMs on the VMware Cluster.

Would it be possible to create VMs with the following specs?
Just creating the VMs is sufficient; I will install the OS.
We would prefer to use DNS names from the renci.org domain for these VMs.

Virtual Machine Specs:
-------------------------------
- VM-1 name: aw-sdx-controller.renci.org
- VM-2 name: aw-sdx-lc-1.renci.org
- VM-3 name: aw-sdx-lc-2.renci.org
- VM-4 name: aw-sdx-lc-3.renci.org
- VM-5 name: aw-sdx-monitor.renci.org

- OS: CentOS 7 (64bit) 
- 2 cores 
- 8 GB RAM 
- 40 GB storage 
- 2 NICs: NIC1: connected to "RENCI Research" VLAN
          NIC2: connected to "BEN Research" VLAN
- DNS records and IP addresses:
  IP addresses from the RENCI Research subnet.
  A records same as the VM names (and corresponding PTR records).

Best regards,

Mert  

VMs are created (as templates) on the VMware Cluster with the IP addresses below.

Mert,
The machines have been created.
No OS, as you requested.
DNS entries and PTR records created (you will have to assign the IP addresses below when the OS is installed):
152.54.3.142   aw-sdx-monitor.renci.org
152.54.3.143   aw-sdx-controller.renci.org
152.54.3.144   aw-sdx-lc-1.renci.org
152.54.3.145   aw-sdx-lc-2.renci.org
152.54.3.146   aw-sdx-lc-3.renci.org
mcevik0 commented 4 years ago

Ansible playbook : https://github.com/RENCI-NRIG/exogeni/tree/master/infrastructure/exogeni/exogeni-deployment/ansible

INVENTORY="atlanticwavesdx"
PLAYBOOK="atlanticwavesdxnodes.yml"

ansible atlanticwave_sdx -i ${INVENTORY} -m command -a "uname -a" -o -u root

ansible-playbook -i ${INVENTORY} ${PLAYBOOK} --limit atlanticwave_sdx --tags "common"
ansible-playbook -i ${INVENTORY} ${PLAYBOOK} --limit atlanticwave_sdx --tags "chrony"
ansible-playbook -i ${INVENTORY} ${PLAYBOOK} --limit atlanticwave_sdx --tags "fail2ban"
#ansible-playbook -i ${INVENTORY} ${PLAYBOOK} --limit atlanticwave_sdx --tags "sssd"
ansible-playbook -i ${INVENTORY} ${PLAYBOOK} --limit atlanticwave_sdx --tags "sshd"
ansible-playbook -i ${INVENTORY} ${PLAYBOOK} --limit atlanticwave_sdx --tags "docker"
ansible-playbook -i ${INVENTORY} ${PLAYBOOK} --limit atlanticwave_sdx --tags "check-mk-agent"
ansible-playbook -i ${INVENTORY} ${PLAYBOOK} --limit atlanticwave_sdx --tags "docker_users"
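
For reference, a minimal sketch of what the "atlanticwavesdx" inventory could look like. The group name matches the `--limit` argument above and the hostnames come from the VM list; the connection variables are hypothetical, not taken from the actual inventory.

```
[atlanticwave_sdx]
aw-sdx-controller.renci.org
aw-sdx-lc-1.renci.org
aw-sdx-lc-2.renci.org
aw-sdx-lc-3.renci.org
aw-sdx-monitor.renci.org

[atlanticwave_sdx:vars]
ansible_user=root
```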

VMs are dual-homed, with one public interface and one private interface on the BEN Management network. The VMs accept SSH connections through both interfaces.

152.54.3.142  192.168.201.207  aw-sdx-monitor.renci.org
152.54.3.143  192.168.201.203  aw-sdx-controller.renci.org
152.54.3.144  192.168.201.204  aw-sdx-lc-1.renci.org
152.54.3.145  192.168.201.205  aw-sdx-lc-2.renci.org
152.54.3.146  192.168.201.206  aw-sdx-lc-3.renci.org

I also added port forwarding rules to the BEN gateway; however, routing this traffic will require some additional configuration. At this time, it may be easier to create the Jenkins nodes using the individual public IP addresses above.

SSH Port mappings
* tcp/20022 ---> tcp/22 on 192.168.201.203
* tcp/20122 ---> tcp/22 on 192.168.201.204
* tcp/20222 ---> tcp/22 on 192.168.201.205
* tcp/20322 ---> tcp/22 on 192.168.201.206
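
With these mappings, an SSH client config entry could look like the sketch below. The BEN gateway address is not given in this thread, so the HostName is a placeholder and the whole fragment is illustrative only.

```
# Hypothetical ~/.ssh/config fragment (gateway address left as a placeholder)
Host aw-sdx-controller-via-ben
    HostName <ben-gateway-address>
    Port 20022
    User nrig-service
```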

Firewall rules implemented via the script below (the Ansible role is partially completed):


INTERFACE_PUBLIC="ens192"
INTERFACE_PRIVATE="ens224"

cat << EOF >> /etc/sysconfig/network-scripts/ifcfg-${INTERFACE_PUBLIC}
ZONE=public
EOF

cat << EOF >> /etc/sysconfig/network-scripts/ifcfg-${INTERFACE_PRIVATE}
ZONE=internal
EOF

systemctl restart network
systemctl status firewalld

firewall-cmd --permanent --zone=public --remove-service=ssh
firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="152.54.0.0/16" port protocol="tcp" port="22" accept'
firewall-cmd --reload
firewall-cmd --zone=public --list-all

# Allow all traffic on internal zone
for i in $(firewall-cmd --zone=internal --list-services); do 
   echo "--- Service: ${i}"; 
   firewall-cmd --zone=internal --permanent --remove-service=${i}; 
done
firewall-cmd --permanent --zone=internal --set-target=ACCEPT
firewall-cmd --reload
firewall-cmd --list-all --zone=public
firewall-cmd --list-all --zone=internal

# Bind interfaces to the zones
firewall-cmd --permanent --zone=internal --add-interface=${INTERFACE_PRIVATE}
firewall-cmd --permanent --zone=public   --add-interface=${INTERFACE_PUBLIC}
firewall-cmd --reload

cat << EOF >> /etc/sysconfig/network-scripts/route-${INTERFACE_PRIVATE}
192.168.0.0/16 via 192.168.100.1 dev ${INTERFACE_PRIVATE}
EOF
systemctl restart network

systemctl status firewalld
systemctl restart firewalld

nrig-service account created on the VMs, sdx-ci public key injected.

[nrig-service@sdx-ci ~]$ ssh nrig-service@aw-sdx-controller.renci.org

The authenticity of host 'aw-sdx-controller.renci.org (152.54.3.143)' can't be established.
ECDSA key fingerprint is SHA256:PerE4/r4paJXXgOQUepspcemKbte7KfzcHgd5k8Hbjs.
ECDSA key fingerprint is MD5:2e:b9:b3:ca:31:5c:f4:1b:b0:a1:ee:86:c9:a8:f6:7e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'aw-sdx-controller.renci.org,152.54.3.143' (ECDSA) to the list of known hosts.
Last login: Fri May 22 07:33:24 2020

[nrig-service@aw-sdx-controller ~]$ 
mcevik0 commented 4 years ago

Production environment:

Topology and equipment are shown in the AtlanticWave SDX demonstration setup.

- Miami
- Miami Corsa     : 67.17.206.198
- Miami VM        : 190.103.186.106

- Atlanta
- SoX Corsa       : 143.215.216.3
- Baremetal server: 128.61.149.224
- awsdx-ctrl (VM) : 128.61.149.223
- awsdx-app (VM)  : 128.61.149.224
mcevik0 commented 4 years ago

ATL and MIA switches need cabling for rate-limiting VFCs.

Requested the physical connections below. Ports that do not have any connectors attached were selected. The intention is to keep as much of the previous setup in place as possible until things look stable.

#
# MIAMI Corsa
#

# Rate limiting VFC cabling for L2Tunnel
Port 13 --- Port 15
Port 14 --- Port 16

# Rate limiting VFC cabling for L2Multipoint
Port 23 --- Port 25
Port 24 --- Port 26
#
# SOX Corsa
#

# Rate limiting VFC cabling for L2Tunnel
Port 25 --- Port 27
Port 26 --- Port 28

# Rate limiting VFC cabling for L2Multipoint
Port 29 --- Port 31
Port 30 --- Port 32
mcevik0 commented 4 years ago

MIAMI-Corsa connections are completed for ports 13,14,15,16,23,24,25,26.

amlight-corsa# show equipment module 
  +-------+-------+---------+----------------+-------+-----------------+---------+----------+-----------+----------+--------+
  | port  | type  |  name   |       pn       |  rev  |       sn        | channel | tx power | rx power  | tx fault | rx los |
  +-------+-------+---------+----------------+-------+-----------------+---------+----------+-----------+----------+--------+
  | 9     | SFP   |  10Gtek | CAB-Q10/4S-P3M | V01   | WTS31HA0188     | N/A     | N/A      | N/A       |   N/A    |  N/A   |
  | 10    | SFP   |  10Gtek | CAB-Q10/4S-P3M | V01   | WTS31HA0188     | N/A     | N/A      | N/A       |   N/A    |  N/A   |
  | 11    | SFP   |  10Gtek | CAB-Q10/4S-P3M | V01   | WTS31HA0188     | N/A     | N/A      | N/A       |   N/A    |  N/A   |
  | 12    | SFP   |  10Gtek | CAB-Q10/4S-P3M | V01   | WTS31HA0188     | N/A     | N/A      | N/A       |   N/A    |  N/A   |
  | 13    | SFP   | ELPEUS. | CB23123-1      | A     | 5584400391      | N/A     | N/A      | N/A       |   N/A    |  N/A   |
  | 14    | SFP   | ELPEUS. | CB23123-1      | A     | 5584400445      | N/A     | N/A      | N/A       |   N/A    |  N/A   |
  | 15    | SFP   | ELPEUS. | CB23123-1      | A     | 5584400391      | N/A     | N/A      | N/A       |   N/A    |  N/A   |
  | 16    | SFP   | ELPEUS. | CB23123-1      | A     | 5584400445      | N/A     | N/A      | N/A       |   N/A    |  N/A   |
  | 23    | SFP   | ELPEUS. | CB23123-1      | A     | 5584400167      | N/A     | N/A      | N/A       |   N/A    |  N/A   |
  | 24    | SFP   | ELPEUS. | CB23123-1      | A     | 5584400411      | N/A     | N/A      | N/A       |   N/A    |  N/A   |
  | 25    | SFP   | ELPEUS. | CB23123-1      | A     | 5584400167      | N/A     | N/A      | N/A       |   N/A    |  N/A   |
  | 26    | SFP   | ELPEUS. | CB23123-1      | A     | 5584400411      | N/A     | N/A      | N/A       |   N/A    |  N/A   |
  | 29    | SFP   | BROCADE | 57-0000075-01  | A     | AAA314273005361 | N/A     | -2.3 dBm | -40.0 dBm |    0     |   1    |
  +-------+-------+---------+----------------+-------+-----------------+---------+----------+-----------+----------+--------+
mcevik0 commented 4 years ago

Existing switch configurations have been saved (configuration files are attached as well).

# MIAMI-Corsa

amlight-corsa# copy active-config backup 
Info: A backup file was created: amlight-corsa.bkp.0.030002.12.2020.06.09.180325.tar.bz2.
# SOX-Corsa

corsa-sdx-56m# copy active-config backup 
Info: A backup file was created: corsa-sdx-56m.bkp.0.030002.12.2020.06.09.180806.tar.bz2.

sox-corsa.config.txt miami-corsa.config.txt

mcevik0 commented 4 years ago

corsa-sdx-56m# show bridge 
  +--------+------------------+----------+----+----+---------+-------------+---------+
  | bridge |       dpid       | subtype  | %  | tc | tunnels | controllers |  netns  |
  +--------+------------------+----------+----+----+---------+-------------+---------+
  | br2    | 00005ed26cc23f40 | openflow | 10 | 0  |    1    |      1      | default |
  | br10   | 00007a7e0683b44d | openflow | 15 | 0  |    4    |      1      | default |
  | br19   | 00006adf247c5b4b | openflow | 10 | 0  |    0    |      1      | default |
  | br20   | 000042c64d0ace40 | vpws     | 1  | -  |   190   |      1      | default |
  | br21   | 00000000000000c9 | openflow | 10 | 0  |    3    |      1      | default |
  +--------+------------------+----------+----+----+---------+-------------+---------+
corsa-sdx-56m# show bridge br21
  dpid          : 00000000000000c9
  subtype       : openflow
  resources     : 10%
  traffic-class : 0
  protocols     : OpenFlow13
  tunnels       : 3
  controllers   : 1
corsa-sdx-56m# show bridge br21 controller 
                                                                              count : 1
  +----------+----------------+-------+-------+-----------+----------------------+-------+
  |   name   |       ip       | port  |  tls  | connected |        status        | role  |
  +----------+----------------+-------+-------+-----------+----------------------+-------+
  | CONTbr21 | 143.215.216.21 |  6681 |  no   |    no     | Connection timed out | other |
  +----------+----------------+-------+-------+-----------+----------------------+-------+
corsa-sdx-56m# show bridge br21 tunnel 
                                                                                      count : 3
  +--------+---------+-------------+-------+-------+--------+-------+-------+-------+---------+
  | ofport | ifdescr |    type     | port  | vlan  | tclass | tpid  | inner | oper  | v-range |
  +--------+---------+-------------+-------+-------+--------+-------+-------+-------+---------+
  |      1 |         | passthrough |     1 |     - |      0 |     - |     - |  up   | -       |
  |     29 |         | passthrough |    29 |     - |      0 |     - |     - |  up   | -       |
  |     30 |         | passthrough |    30 |     - |      0 |     - |     - |  up   | -       |
  +--------+---------+-------------+-------+-------+--------+-------+-------+-------+---------+

#
# L2Tunnel Rate-limiting VFC
#

corsa-sdx-56m# show bridge br20
  dpid        : 000042c64d0ace40
  subtype     : vpws
  resources   : 1%
  protocols   : OpenFlow13
  tunnels     : 190
  controllers : 1
corsa-sdx-56m# show bridge br20 controller 
                                                    count : 1
  +-------+------------+-------+-------+-----------+--------+-------+
  | name  |     ip     | port  |  tls  | connected | status | role  |
  +-------+------------+-------+-------+-----------+--------+-------+
  | Eline | 172.17.2.1 |  6653 |  no   |    yes    |        | other |
  +-------+------------+-------+-------+-----------+--------+-------+

corsa-sdx-56m# show bridge br20 tunnel 
                                                                                 count : 190
  +--------+---------+-------+-------+-------+--------+--------+-------+-------+---------+
  | ofport | ifdescr | type  | port  | vlan  | tclass |  tpid  | inner | oper  | v-range |
  +--------+---------+-------+-------+-------+--------+--------+-------+-------+---------+
  |      1 |         | ctag  |    31 |     3 |      0 | 0x8100 |     - |  up   | -       |

#
# L2Multipoint Rate-limiting VFC
#

corsa-sdx-56m# show bridge br19
  dpid          : 00007a69b854ad4e
  subtype       : l2-vpn
  resources     : 10%
  traffic-class : 0
  protocols     : OpenFlow13
  tunnels       : 0
  controllers   : 1

corsa-sdx-56m# show bridge br19 controller 
                                                       count : 1
  +----------+------------+-------+-------+-----------+--------+-------+
  |   name   |     ip     | port  |  tls  | connected | status | role  |
  +----------+------------+-------+-------+-----------+--------+-------+
  | CONTbr19 | 172.17.1.1 |  6653 |  no   |    yes    |        | other |
  +----------+------------+-------+-------+-----------+--------+-------+

corsa-sdx-56m# show bridge br19 tunnel 
Info: There are no tunnels present.


- Corsa Switch at AMLight

#
# Primary forwarding VFC
#

amlight-corsa# show bridge br22
  dpid          : 00000000000000ca
  subtype       : openflow
  resources     : 10%
  traffic-class : 0
  protocols     : OpenFlow13
  tunnels       : 7
  controllers   : 1
amlight-corsa# show bridge br22 controller 
                                                                            count : 1
  +----------+-----------------+-------+-------+-----------+--------------------+-------+
  |   name   |       ip        | port  |  tls  | connected |       status       | role  |
  +----------+-----------------+-------+-------+-----------+--------------------+-------+
  | CONTbr22 | 190.103.186.106 |  6682 |  no   |    no     | Connection refused | other |
  +----------+-----------------+-------+-------+-----------+--------------------+-------+
amlight-corsa# show bridge br22 tunnel 
                                                                                      count : 7
  +--------+---------+-------------+-------+-------+--------+-------+-------+-------+---------+
  | ofport | ifdescr |    type     | port  | vlan  | tclass | tpid  | inner | oper  | v-range |
  +--------+---------+-------------+-------+-------+--------+-------+-------+-------+---------+
  |      1 |         | passthrough |     1 |     - |      0 |     - |     - | down  | -       |
  |      2 |         | passthrough |     2 |     - |      0 |     - |     - | down  | -       |
  |      3 |         | passthrough |     3 |     - |      0 |     - |     - | down  | -       |
  |      4 |         | passthrough |     4 |     - |      0 |     - |     - | down  | -       |
  |     10 |         | passthrough |    10 |     - |      0 |     - |     - |  up   | -       |
  |     23 |         | passthrough |    23 |     - |      0 |     - |     - |  up   | -       |
  |     24 |         | passthrough |    24 |     - |      0 |     - |     - |  up   | -       |
  +--------+---------+-------------+-------+-------+--------+-------+-------+-------+---------+

#
# L2Tunnel Rate-limiting VFC
#

amlight-corsa# show bridge br20
  dpid        : 000036d2c4038e4f
  subtype     : vpws
  resources   : 1%
  protocols   : OpenFlow13
  tunnels     : 194
  controllers : 1
amlight-corsa# show bridge br20 controller 
                                                    count : 1
  +-------+------------+-------+-------+-----------+--------+-------+
  | name  |     ip     | port  |  tls  | connected | status | role  |
  +-------+------------+-------+-------+-----------+--------+-------+
  | Eline | 172.17.2.1 |  6653 |  no   |    yes    |        | other |
  +-------+------------+-------+-------+-----------+--------+-------+

amlight-corsa# show bridge br20 tunnel 
                                                                                 count : 194
  +--------+---------+-------+-------+-------+--------+--------+-------+-------+---------+
  | ofport | ifdescr | type  | port  | vlan  | tclass |  tpid  | inner | oper  | v-range |
  +--------+---------+-------+-------+-------+--------+--------+-------+-------+---------+

#
# L2Multipoint Rate-limiting VFC
#

amlight-corsa# show bridge br19
  dpid          : 00007236ba25ea41
  subtype       : l2-vpn
  resources     : 10%
  traffic-class : 0
  protocols     : OpenFlow13
  tunnels       : 0
  controllers   : 1
amlight-corsa# show bridge br19 controller 
                                                       count : 1
  +----------+------------+-------+-------+-----------+--------+-------+
  |   name   |     ip     | port  |  tls  | connected | status | role  |
  +----------+------------+-------+-------+-----------+--------+-------+
  | CONTbr19 | 172.17.1.1 |  6653 |  no   |    yes    |        | other |
  +----------+------------+-------+-------+-----------+--------+-------+
amlight-corsa# show bridge br19 tunnel 
Info: There are no tunnels present.


- Servers

#
# miami-vm
#

[root@sdxlc ~]# ifconfig ens224.1805
ens224.1805: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.14.11.2  netmask 255.255.255.0  broadcast 10.14.11.255
        inet6 fe80::20c:29ff:fece:aee8  prefixlen 64  scopeid 0x20
        ether 00:0c:29:ce:ae:e8  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 656 (656.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
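
The tagged interface above could be produced by a VLAN subinterface config along these lines. This is a sketch of what /etc/sysconfig/network-scripts/ifcfg-ens224.1805 might contain on CentOS 7, not the actual file from the VM.

```
# Hypothetical ifcfg sketch for the ens224.1805 VLAN subinterface
DEVICE=ens224.1805
BOOTPROTO=none
ONBOOT=yes
VLAN=yes
IPADDR=10.14.11.2
PREFIX=24
```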

#
# awsdx-ctrl
#

[root@awsdx-ctrl ~]# ip addr show dev eth2
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:1a:4a:10:00:12 brd ff:ff:ff:ff:ff:ff
    inet 10.100.1.21/24 brd 10.100.1.255 scope global eth2
       valid_lft forever preferred_lft forever
    inet 10.14.11.1/24 scope global eth2
       valid_lft forever preferred_lft forever
    inet6 fe80::21a:4aff:fe10:12/64 scope link
       valid_lft forever preferred_lft forever

#
# awsdx-app
#

[root@awsdx-app ~]# ip addr show dev eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:1a:4a:10:00:26 brd ff:ff:ff:ff:ff:ff
    inet 10.100.1.22/24 brd 10.100.1.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 10.100.2.22/24 brd 10.100.2.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 10.14.11.254/24 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::21a:4aff:fe10:26/64 scope link
       valid_lft forever preferred_lft forever

mcevik0 commented 3 years ago

Connection details at Miami

From: Italo Da Silva Brito <idasilva@fiu.edu>
Subject: Re: Miami Corsa switch
Date: July 23, 2020 at 11:10:40 EDT
To: "Cevik, Mert" <mcevik@renci.org>
Cc: Adil Zahir <azahir@fiu.edu>, Jeronimo Bezerra <jbezerra@fiu.edu>, "Xin, Yufeng" <yxin@renci.org>, Julio Ibarra <Julio@fiu.edu>

Dear Mert,

I’ve traced the physical connections and ethernet circuits throughout the AMPATH/AmLight network, and here are the port numbers:

Corsa/Atlanta (port xxx) — VLAN 1805 — (port 9) Corsa/Miami
Corsa/Miami (port 9) — 1807 — (port 33) Corsa/Chile
Corsa/Chile (port 33) — 1806 — (port XX) Corsa/Atlanta

The connections between the server in Miami (SDXLocalController) and the Corsa/Miami are the following:

SDX_LC_DATA_PLANE_28 
Miami VM (port xx) —  VLAN 28 — (port 9) Corsa/Miami

SDX_LC_DATA_PLANE_27
Miami VM (port xx) —  VLAN 27 — (port 9) Corsa/Miami

SDX_Trunk
Miami VM (port xx) —  VLAN 124,125,126 — (port 10) Corsa/Miami

If you need any additional help, please let us know.

Italo Valcy da Silva Brito, Senior Network Engineer
mcevik0 commented 3 years ago

CREATE VFCs with CTAG TUNNELS

#
# RENCI 1 (corsa-2.renci.ben)
#
configure bridge delete br21

configure port 1 tunnel-mode ctag
configure port 2 tunnel-mode ctag
configure port 11 tunnel-mode ctag
configure port 12 tunnel-mode ctag
configure port 19 tunnel-mode passthrough
configure port 20 tunnel-mode passthrough
configure port 23 tunnel-mode ctag
configure port 30 tunnel-mode ctag

configure bridge add br21 openflow resources 10
configure bridge br21 dpid 0xC9
configure bridge br21 tunnel attach ofport 1 port 1 vlan-range 1-1499
configure bridge br21 tunnel attach ofport 2 port 2 vlan-range 1-1499
configure bridge br21 tunnel attach ofport 11 port 11 vlan-range 1-1499
configure bridge br21 tunnel attach ofport 12 port 12 vlan-range 1-1499
configure bridge br21 tunnel attach ofport 19 port 19 
configure bridge br21 tunnel attach ofport 20 port 20 
configure bridge br21 tunnel attach ofport 23 port 23 vlan-range 1-1499
configure bridge br21 tunnel attach ofport 30 port 30 vlan-range 1-1499
configure bridge br21 controller add CONTbr21 192.168.201.196 6681

#
# DUKE
#
configure bridge delete br22

configure port 1 tunnel-mode ctag
configure port 2 tunnel-mode ctag
configure port 11 tunnel-mode ctag
configure port 12 tunnel-mode ctag
configure port 19 tunnel-mode passthrough
configure port 20 tunnel-mode passthrough

configure bridge add br22 openflow resources 10 
configure bridge br22 dpid 0xCA
configure bridge br22 tunnel attach ofport 1 port 1 vlan-range 1-1499
configure bridge br22 tunnel attach ofport 2 port 2 vlan-range 1-1499
configure bridge br22 tunnel attach ofport 11 port 11 vlan-range 1-1499
configure bridge br22 tunnel attach ofport 12 port 12 vlan-range 1-1499
configure bridge br22 tunnel attach ofport 19 port 19 
configure bridge br22 tunnel attach ofport 20 port 20 
configure bridge br22 controller add CONTbr22 192.168.202.39 6682

#
# UNC
#
configure bridge delete br23

configure port 1 tunnel-mode ctag
configure port 2 tunnel-mode ctag
configure port 11 tunnel-mode ctag
configure port 12 tunnel-mode ctag
configure port 19 tunnel-mode passthrough
configure port 20 tunnel-mode passthrough

configure bridge add br23 openflow resources 10 
configure bridge br23 dpid 0xCB
configure bridge br23 tunnel attach ofport 1 port 1 vlan-range 1-1499
configure bridge br23 tunnel attach ofport 2 port 2 vlan-range 1-1499
configure bridge br23 tunnel attach ofport 11 port 11 vlan-range 1-1499
configure bridge br23 tunnel attach ofport 12 port 12 vlan-range 1-1499
configure bridge br23 tunnel attach ofport 19 port 19 
configure bridge br23 tunnel attach ofport 20 port 20 
configure bridge br23 controller add CONTbr23 192.168.203.10 6683

#
# NCSU
#
configure bridge delete br24

configure port 1 tunnel-mode ctag
configure port 2 tunnel-mode ctag
configure port 11 tunnel-mode ctag
configure port 12 tunnel-mode ctag
configure port 19 tunnel-mode passthrough
configure port 20 tunnel-mode passthrough

configure bridge add br24 openflow resources 10 
configure bridge br24 dpid 0xCC
configure bridge br24 tunnel attach ofport 1 port 1 vlan-range 1-1499
configure bridge br24 tunnel attach ofport 2 port 2 vlan-range 1-1499
configure bridge br24 tunnel attach ofport 11 port 11 vlan-range 1-1499
configure bridge br24 tunnel attach ofport 12 port 12 vlan-range 1-1499
configure bridge br24 tunnel attach ofport 19 port 19 
configure bridge br24 tunnel attach ofport 20 port 20 
configure bridge br24 controller add CONTbr24  192.168.204.21 6684

#
# RENCI 2 (corsa-1.renci.ben)
#
configure bridge delete br25
configure port 8 tunnel-mode ctag
configure port 23 tunnel-mode ctag
configure port 25 tunnel-mode passthrough
configure port 26 tunnel-mode passthrough

configure bridge add br25 openflow resources 2
configure bridge br25 dpid 0xCD
configure bridge br25 tunnel attach ofport 8 port 8  vlan-range 1-1499
configure bridge br25 tunnel attach ofport 23 port 23  vlan-range 1-1499
configure bridge br25 tunnel attach ofport 25 port 25
configure bridge br25 tunnel attach ofport 26 port 26
configure bridge br25 controller add CONTbr25 192.168.201.196 6681
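
The five per-site blocks above follow a single pattern, varying only in bridge name, dpid, controller address/port, and port list. The shell sketch below is a hypothetical generator for the ctag portion of such a block (function name and structure are illustrative, not part of the repo; passthrough ports are omitted for brevity):

```shell
#!/usr/bin/env bash
# Hypothetical helper: emit the ctag portion of a per-site VFC configuration.
# Arguments: bridge name, dpid, controller IP, controller port, then ctag ports.
gen_vfc() {
  local br="$1" dpid="$2" ctrl_ip="$3" ctrl_port="$4"
  shift 4
  echo "configure bridge delete ${br}"
  for p in "$@"; do
    echo "configure port ${p} tunnel-mode ctag"
  done
  echo "configure bridge add ${br} openflow resources 10"
  echo "configure bridge ${br} dpid ${dpid}"
  for p in "$@"; do
    echo "configure bridge ${br} tunnel attach ofport ${p} port ${p} vlan-range 1-1499"
  done
  echo "configure bridge ${br} controller add CONT${br} ${ctrl_ip} ${ctrl_port}"
}

# Example: regenerate the ctag portion of the DUKE (br22) block.
gen_vfc br22 0xCA 192.168.202.39 6682 1 2 11 12
```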

VFC configurations with CTAG TUNNELS


#
# RENCI 1 (corsa-2.renci.ben)
#

corsa-2# show bridge br21
  dpid          : 00000000000000c9
  subtype       : openflow
  resources     : 10%
  traffic-class : 0
  protocols     : OpenFlow13
  tunnels       : 8
  controllers   : 1
corsa-2# show bridge br21 controller 
                                                                    count : 1
  +----------+-----------------+-------+-------+-----------+--------+-------+
  |   name   |       ip        | port  |  tls  | connected | status | role  |
  +----------+-----------------+-------+-------+-----------+--------+-------+
  | CONTbr21 | 192.168.201.196 |  6681 |  no   |    yes    |        | other |
  +----------+-----------------+-------+-------+-----------+--------+-------+
corsa-2# show bridge br21 tunnel 
                                                                                       count : 8
  +--------+---------+-------------+-------+-------+--------+--------+-------+-------+---------+
  | ofport | ifdescr |    type     | port  | vlan  | tclass |  tpid  | inner | oper  | v-range |
  +--------+---------+-------------+-------+-------+--------+--------+-------+-------+---------+
  |      1 |         | vlan-range  |     1 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |      2 |         | vlan-range  |     2 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |     11 |         | vlan-range  |    11 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |     12 |         | vlan-range  |    12 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |     19 |         | passthrough |    19 |     - |      0 |      - |     - |  up   | -       |
  |     20 |         | passthrough |    20 |     - |      0 |      - |     - |  up   | -       |
  |     23 |         | vlan-range  |    23 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |     30 |         | vlan-range  |    30 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  +--------+---------+-------------+-------+-------+--------+--------+-------+-------+---------+

#
# DUKE (corsa-1.duke.ben)
#

corsa-1# show bridge br22
  dpid          : 00000000000000ca
  subtype       : openflow
  resources     : 10%
  traffic-class : 0
  protocols     : OpenFlow13
  tunnels       : 6
  controllers   : 1
corsa-1# show bridge br22 controller 
                                                                   count : 1
  +----------+----------------+-------+-------+-----------+--------+-------+
  |   name   |       ip       | port  |  tls  | connected | status | role  |
  +----------+----------------+-------+-------+-----------+--------+-------+
  | CONTbr22 | 192.168.202.39 |  6682 |  no   |    yes    |        | other |
  +----------+----------------+-------+-------+-----------+--------+-------+
corsa-1# show bridge br22 tunnel 
                                                                                       count : 6
  +--------+---------+-------------+-------+-------+--------+--------+-------+-------+---------+
  | ofport | ifdescr |    type     | port  | vlan  | tclass |  tpid  | inner | oper  | v-range |
  +--------+---------+-------------+-------+-------+--------+--------+-------+-------+---------+
  |      1 |         | vlan-range  |     1 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |      2 |         | vlan-range  |     2 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |     11 |         | vlan-range  |    11 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |     12 |         | vlan-range  |    12 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |     19 |         | passthrough |    19 |     - |      0 |      - |     - |  up   | -       |
  |     20 |         | passthrough |    20 |     - |      0 |      - |     - |  up   | -       |
  +--------+---------+-------------+-------+-------+--------+--------+-------+-------+---------+

#
# UNC (corsa-1.unc.ben)
#

corsa-1# show bridge br23 
  dpid          : 00000000000000cb
  subtype       : openflow
  resources     : 10%
  traffic-class : 0
  protocols     : OpenFlow13
  tunnels       : 6
  controllers   : 1
corsa-1# show bridge br23 controller 
                                                                   count : 1
  +----------+----------------+-------+-------+-----------+--------+-------+
  |   name   |       ip       | port  |  tls  | connected | status | role  |
  +----------+----------------+-------+-------+-----------+--------+-------+
  | CONTbr23 | 192.168.203.10 |  6683 |  no   |    yes    |        | other |
  +----------+----------------+-------+-------+-----------+--------+-------+
corsa-1# show bridge br23 tunnel 
                                                                                       count : 6
  +--------+---------+-------------+-------+-------+--------+--------+-------+-------+---------+
  | ofport | ifdescr |    type     | port  | vlan  | tclass |  tpid  | inner | oper  | v-range |
  +--------+---------+-------------+-------+-------+--------+--------+-------+-------+---------+
  |      1 |         | vlan-range  |     1 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |      2 |         | vlan-range  |     2 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |     11 |         | vlan-range  |    11 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |     12 |         | vlan-range  |    12 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |     19 |         | passthrough |    19 |     - |      0 |      - |     - |  up   | -       |
  |     20 |         | passthrough |    20 |     - |      0 |      - |     - |  up   | -       |
  +--------+---------+-------------+-------+-------+--------+--------+-------+-------+---------+

#
# NCSU (corsa-1.ncsu.ben)
#

corsa-1# show bridge br24
  dpid          : 00000000000000cc
  subtype       : openflow
  resources     : 10%
  traffic-class : 0
  protocols     : OpenFlow13
  tunnels       : 6
  controllers   : 1
corsa-1# show bridge br24 controller 
                                                                   count : 1
  +----------+----------------+-------+-------+-----------+--------+-------+
  |   name   |       ip       | port  |  tls  | connected | status | role  |
  +----------+----------------+-------+-------+-----------+--------+-------+
  | CONTbr24 | 192.168.204.21 |  6684 |  no   |    yes    |        | other |
  +----------+----------------+-------+-------+-----------+--------+-------+
corsa-1# show bridge br24 tunnel 
                                                                                       count : 6
  +--------+---------+-------------+-------+-------+--------+--------+-------+-------+---------+
  | ofport | ifdescr |    type     | port  | vlan  | tclass |  tpid  | inner | oper  | v-range |
  +--------+---------+-------------+-------+-------+--------+--------+-------+-------+---------+
  |      1 |         | vlan-range  |     1 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |      2 |         | vlan-range  |     2 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |     11 |         | vlan-range  |    11 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |     12 |         | vlan-range  |    12 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |     19 |         | passthrough |    19 |     - |      0 |      - |     - |  up   | -       |
  |     20 |         | passthrough |    20 |     - |      0 |      - |     - |  up   | -       |
  +--------+---------+-------------+-------+-------+--------+--------+-------+-------+---------+

Test with L2Tunnel connections

#
# SCRIPT - Create L2Tunnel Connections
#

SDX_CONTROLLER="atlanticwave-sdx-controller.renci.ben"
SDX_CONT_PORT=5000
REST_ENDPOINT="http://${SDX_CONTROLLER}:${SDX_CONT_PORT}/api/v1/policies/type/L2Tunnel"
COOKIE="cookie-mcevik.txt"

SRC_SW="rencis1"
DST_SW="dukes1"
SRC_VLAN=1421
DST_VLAN=1422
SRC_PORT=12
DST_PORT=12
START=`date "+%Y-%m-%dT%H:%M:%S"`
END=`date --date="3 days" "+%Y-%m-%dT%H:%M:%S"`

curl \
-X POST ${REST_ENDPOINT} \
-b ${COOKIE} \
-H "Content-Type: application/json" \
--data-binary @- << EOF 
{
  "L2Tunnel":
    {
      "starttime":"${START}",
      "endtime":"${END}",
      "srcswitch":"${SRC_SW}","dstswitch":"${DST_SW}",
      "srcport":${SRC_PORT},"dstport":${DST_PORT},
      "srcvlan":${SRC_VLAN},"dstvlan":${DST_VLAN},
      "bandwidth":800000
    }
}
EOF

#
# SCRIPT - Delete policy
#

POLICY="17"
curl \
-b cookie-mcevik.txt \
-H "Content-Type: application/json" \
-X DELETE http://atlanticwave-sdx-controller.renci.ben:5000/api/v1/policies/number/${POLICY}

#
# GET POLICY INFO
#

./curl-0.sh -c cookie-mcevik.txt -o get_policies
./curl-0.sh -c cookie-mcevik.txt -o get_policy -N 18

Actual connections


[root@atlanticwave-sdx-controller ~]# cd ~/script-sdx/

#
# CREATE Connections
#

# Create L2Tunnel Connection SRC:RENCI Port:12 VLAN:1421 Bw:800000 --- DST: DUKE Port:12 VLAN:1422 Bw:800000  
./l2tunnel-create.sh rencis1 dukes1 1421 1422

# Create L2Tunnel Connection SRC:RENCI Port:12 VLAN:1421 Bw:800000 --- DST: UNC Port:12 VLAN:1423 Bw:800000  
./l2tunnel-create.sh rencis1 uncs1 1421 1423

# Create L2Tunnel Connection SRC:RENCI Port:12 VLAN:1421 Bw:800000 --- DST: NCSU Port:12 VLAN:1424 Bw:800000  
./l2tunnel-create.sh rencis1 ncsus1 1421 1424

# Create L2Tunnel Connection SRC:UNC Port:12 VLAN:1423 Bw:800000 --- DST: DUKE Port:12 VLAN:1422 Bw:800000  
./l2tunnel-create.sh uncs1 dukes1 1423 1422

# Create L2Tunnel Connection SRC:UNC Port:12 VLAN:1423 Bw:800000 --- DST: NCSU Port:12 VLAN:1424 Bw:800000  
./l2tunnel-create.sh uncs1 ncsus1 1423 1424

# Create L2Tunnel Connection SRC:DUKE Port:12 VLAN:1422 Bw:800000 --- DST: NCSU Port:12 VLAN:1424 Bw:800000  
./l2tunnel-create.sh dukes1 ncsus1 1422 1424

These connections all worked well. Note that CTAG tunnels are created over the VLAN range 1-1499, which covers the in-band management VLAN tag (1411) as well as the intermediate VLAN tags assigned by the SDX controller (1, 2, 3, ...).

mcevik0 commented 3 years ago

Several issues are worth clarifying before going back to the production setup. We have tried everything we can based on the documents from last year's GATech demo, but we don't think it will work with the current SDX implementation. The issues below are framed in terms of the currently available resources and the possible production setup.

In conclusion, we need the following from you to make the production testbed work.

  1. A range of VLANs with the same tags at all sites for the dataplane.

  2. Extra cabling between the Corsa switches and the SoX/AMPATH/Santiago switches. We manually verified that a ctag configuration could work on the Corsa switches; however, the current code does not yet fully support it.

  3. For in-band management, in order to use a different VLAN tag at each site, we need to know how the AL2S circuits are created. If the given VLAN tags must be used, we need a multipoint connection as described in item 1 below. Ideally, the same VLAN tag would be used at all sites for in-band management.

Details about the issues are as follows:

  1. The VLAN tag for in-band management connections should be the same across all sites, or a multipoint AL2S connection should be created in order to leverage VLAN tag translation.

    1.1 - On the RENCI testbed, I tested a dedicated in-band management VLAN tag per site: I specified the tag in the manifest and created tagged interfaces on the local-controller servers and the sdx-controller server (atlanticwave-sdx/atlanticwave-proto@a1dcd117). Each local controller knows only its own in-band management VLAN tag and creates flows for it. For example, the flows below are pushed on the RENCI switch for its own VLAN 1311.
    The DUKE switch reaches the SDX controller through the RENCI switch, so flows for VLAN 1312 are also needed on the RENCI switch; those are never created, and as a result the DUKE LC cannot connect to the SDX controller.

  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  | table | prio  |           match           |          actions          | cookie | packets | bytes | idle t.o. | hard t.o. | duration |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     0 |     0 | -                         | goto_table:1              |    0x0 |      34 |  2200 |         - |         - |  29.444s |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     0 |   100 | in_port=1,dl_vlan=1311    | output:2,output:11,       |    0x0 |       0 |     0 |         - |         - |  29.444s |
  |       |       |                           | output:23,output:30       |        |         |       |           |           |          |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     0 |   100 | in_port=2,dl_vlan=1311    | output:1,output:11,       |    0x0 |       0 |     0 |         - |         - |  29.444s |
  |       |       |                           | output:23,output:30       |        |         |       |           |           |          |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     0 |   100 | in_port=11,dl_vlan=1311   | output:1,output:2,        |    0x0 |      22 |  2101 |         - |         - |  29.443s |
  |       |       |                           | output:23,output:30       |        |         |       |           |           |          |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     0 |   100 | in_port=23,dl_vlan=1311   | output:1,output:2,        |    0x0 |       0 |     0 |         - |         - |  29.442s |
  |       |       |                           | output:11,output:30       |        |         |       |           |           |          |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     0 |   100 | in_port=30,dl_vlan=1311   | output:1,output:2,        |    0x0 |      20 |  3204 |         - |         - |  29.437s |
  |       |       |                           | output:11,output:23       |        |         |       |           |           |          |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+

1.2 - Alternatively, in the production setup with a 3-site topology (assuming the SDX controller runs at SoX/Atlanta), Internet2 AL2S would do the VLAN tag translation and a multipoint AL2S circuit could be used. In-band management traffic towards Miami and Santiago would be carried over VLAN 3621 at SoX/Atlanta.

MIA: 1805 -------- ATLA:3621
             |
             |
       SANTIAGO:1806
  2. We need a VLAN range that covers all sites for dataplane connections. The intermediate VLAN selection is not per-site; the range is hardcoded as (1, 4096) in TopologyManager.py:find_vlan_on_path. We can change this hardcoded value to the available VLAN range, but the same range must be plumbed at all sites because the current implementation requires VLAN continuity for both L2Tunnel and point-to-multipoint connections.

For dataplane connections, flows like the ones below are pushed. In this case VLAN 1422 is requested at the edge site, but intermediate VLAN 1 is used for the inter-site connection. This intermediate VLAN tag needs to be the same at all sites.

  |     0 |   100 | in_port=1,dl_vlan=1       | set_field:4097->vlan_vid, |    0x4 |       0 |     0 |         - |         - |  71.071s |
  |       |       |                           | output:20                 |        |         |       |           |           |          |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     0 |   100 | in_port=12,dl_vlan=1422   | set_field:4097->vlan_vid, |    0x4 |       0 |     0 |         - |         - |  71.071s |
  |       |       |                           | output:19                 |        |         |       |           |           |          |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     0 |   100 | in_port=19,dl_vlan=1      | set_field:5518->vlan_vid, |    0x4 |       0 |     0 |         - |         - |  71.071s |
  |       |       |                           | output:12                 |        |         |       |           |           |          |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     0 |   100 | in_port=20,dl_vlan=1      | set_field:4097->vlan_vid, |    0x4 |       0 |     0 |         - |         - |  71.071s |
  |       |       |                           | output:1                  |        |         |       |           |           |          |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
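
The intermediate-VLAN selection logic described above (find_vlan_on_path with its hardcoded (1, 4096) range) can be sketched roughly as follows. This is not the actual TopologyManager.py code; it is a minimal illustration, with the hardcoded bounds turned into parameters so a deployment-specific range could be passed in. The link names and used-VLAN sets are made up for the example.

```python
# Sketch of per-path VLAN selection with VLAN continuity: the chosen
# intermediate VLAN must be free on every link of the path.
def find_vlan_on_path(path_links, used_vlans, lo=1, hi=4096):
    """Return the first VLAN usable on every link of the path, or None.

    path_links : iterable of link identifiers
    used_vlans : dict mapping link -> set of VLANs already in use
    lo, hi     : candidate range (hardcoded as 1, 4096 in the current code)
    """
    for vlan in range(lo, hi):
        if all(vlan not in used_vlans.get(link, set()) for link in path_links):
            return vlan
    return None

links = ["rencis1-dukes1", "dukes1-ncsus1"]
used = {"rencis1-dukes1": {1, 2}, "dukes1-ncsus1": {2, 3}}
print(find_vlan_on_path(links, used, lo=1, hi=1500))  # -> 4
```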
  3. On the Corsa switches, connections to the servers and to the other sites go through one physical port. With CTAG tunnels we can attach multiple OpenFlow (logical) ports to that physical port; however, this requires allocating separate VLAN ranges.
    
    # Attempt to attach tunnels to ofport 1 and 2 by using the same VLAN range 101-120
    corsa-2# configure bridge br21 tunnel attach ofport 1 port 1 vlan-range 101-120
    corsa-2# configure bridge br21 tunnel attach ofport 2 port 1 vlan-range 101-120
    Error: Attach failed: tunnel conflicts.
    The command did not complete and has been discarded.

The tunnel for ofport 2 can be attached if a different VLAN range is used:

corsa-2# configure bridge br21 tunnel attach ofport 2 port 1 vlan-range 121-130
corsa-2# show bridge br21 tunnel
                                                                                       count : 7
  +--------+---------+-------------+-------+-------+--------+--------+-------+-------+---------+
  | ofport | ifdescr |    type     | port  | vlan  | tclass |  tpid  | inner | oper  | v-range |
  +--------+---------+-------------+-------+-------+--------+--------+-------+-------+---------+
  |      1 |         | vlan-range  |     1 |     - |      0 | 0x8100 |     - |  up   | 101-120 |
  |      2 |         | vlan-range  |     1 |     - |      0 | 0x8100 |     - |  up   | 121-130 |

For the reasons in item 2, the same intermediate VLAN tag will be used for ingress and egress traffic, so physical separation of the traffic is needed. Tunnels with the same VLAN range can be attached if separate physical ports are used:

corsa-1# show bridge br22 tunnel
                                                                                       count : 6
  +--------+---------+-------------+-------+-------+--------+--------+-------+-------+---------+
  | ofport | ifdescr |    type     | port  | vlan  | tclass |  tpid  | inner | oper  | v-range |
  +--------+---------+-------------+-------+-------+--------+--------+-------+-------+---------+
  |      1 |         | vlan-range  |     1 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |      2 |         | vlan-range  |     2 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |     11 |         | vlan-range  |    11 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |     12 |         | vlan-range  |    12 |     - |      0 | 0x8100 |     - |  up   | 1-1499  |
  |     19 |         | passthrough |    19 |     - |      0 |      - |     - |  up   | -       |
  |     20 |         | passthrough |    20 |     - |      0 |      - |     - |  up   | -       |
  +--------+---------+-------------+-------+-------+--------+--------+-------+-------+---------+

mcevik0 commented 3 years ago

Santiago Switch

Loop cables for the rate-limiting VFCs (br19, br20) are missing, and adding them will take time. We need to find out how to bypass the rate-limiting VFC in order to exchange traffic. This may require code changes, starting with an exploration of options to enable/disable rate limiting.

mcevik0 commented 3 years ago

Checklist to validate connection

(1) MIA LC to Corsa: port, vlans, and connectivity; 
(2) Chile LC to Corsa: port, vlans, and connectivity; 
(3) Atl LC to Corsa: port, vlans, and connectivity; 
(4) Link Mia Corsa to Chile Corsa: port, VLAN (s-tag), and connectivity; 
(5) Link MIA to Atlanta Corsa: port, VLAN (s-tag), and connectivity; 
(6) Link Atlanta Corsa to Chile Corsa: port, VLAN (s-tag), and connectivity; 
(7) Path Mia LC to Chile LC: port, VLAN (c-tag), and connectivity; 
(8) Path Mia LC to Atlanta LC: port, VLAN (c-tag), and connectivity; 
(9) Path Atlanta LC to Chile LC: port, VLAN (c-tag), and connectivity;
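
The path checks (7)-(9) amount to a full-mesh reachability test over the c-tag VLAN. A small sketch that emits the per-pair ping commands; the interface name and IP addresses are illustrative assumptions, not the production values.

```python
# Emit a full-mesh ping matrix for one VLAN.
# Interface name and addresses below are placeholders for illustration.
def mesh_ping_cmds(vlan, hosts, iface="eth0"):
    """hosts: dict of site name -> IP on the VLAN. Returns ping commands."""
    cmds = []
    for src in hosts:
        for dst, dst_ip in hosts.items():
            if src == dst:
                continue
            cmds.append(f"# {src} -> {dst}: ping -c 3 -I {iface}.{vlan} {dst_ip}")
    return cmds

hosts = {"Miami": "10.30.6.1", "Chile": "10.30.6.2", "Atlanta": "10.30.6.3"}
for cmd in mesh_ping_cmds(3006, hosts):
    print(cmd)
```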
jab1982 commented 3 years ago

We created the environment in Miami and Chile, organizing the VLANs and ports. Bridges br25 were created, with the following tunnels:

Miami:
ofport 1 - port 9 - CTAG 1805 - Description: Atlanta
ofport 2 - port 10 - CTAG 1807 - Description: Chile
ofport 3 - port 9 - CTAG 27 - Description: SDX-LC Miami interface ens224 - Data Plane
ofport 4 - port 9 - CTAG 28 - Description: SDX-LC Miami interface ens256 - Data Plane

Chile:
ofport 1 - port 33 - ctag 1807 - Description: Miami
ofport 2 - port 33 - ctag 1806 - Description: Atlanta
ofport 3 - port 33 - ctag 1808 - Description: DTN
ofport 4 - port 37 - ctag 1809 - Description: DTN-2

jab1982 commented 3 years ago

We discovered an issue with our Dell switches in the path: they are resetting the VLAN ID fields, removing the inner tags. We are troubleshooting this issue now. If we don't find a solution, we will move the config from S-VLAN to a VLAN range environment, with 5-10 VLANs between each pair of Corsa switches.

jab1982 commented 3 years ago
(topology diagram image attached)
jab1982 commented 3 years ago

@mcevik0, I still don't have access to any device in Atlanta. Please create bridge br25 and the tunnels following the same idea, and let me know; I will update the diagram.

mcevik0 commented 3 years ago

br25 on SoX-Corsa

corsa-sdx-56m# show bridge br25
  dpid          : 0000de0a45e6734d
  subtype       : openflow
  resources     : 10%
  traffic-class : 0
  protocols     : OpenFlow13
  tunnels       : 3
  controllers   : 0

corsa-sdx-56m# show bridge br25 tunnel 
                                                                                             count : 3
  +--------+---------------------+-------+-------+-------+--------+--------+-------+-------+---------+
  | ofport |       ifdescr       | type  | port  | vlan  | tclass |  tpid  | inner | oper  | v-range |
  +--------+---------------------+-------+-------+-------+--------+--------+-------+-------+---------+
  |      1 | connection to Miami | ctag  |     1 |  1805 |      0 | 0x8100 |     - |  up   | -       |
  |      2 | connection to Chile | ctag  |     1 |  1806 |      0 | 0x8100 |     - |  up   | -       |
  |      3 | connection to DTN   | ctag  |     1 |  3621 |      0 | 0x8100 |     - |  up   | -       |
  +--------+---------------------+-------+-------+-------+--------+--------+-------+-------+---------+

@jab1982

jab1982 commented 3 years ago

Connectivity from the AMPATH Corsa and DTN to the SOX Corsa switch is working. In Chile, the S-VLAN approach isn't working because the LSST router is dropping the QinQ packets; a case was opened with Cisco TAC. Connectivity from the SOX Corsa switch to the SOX DTN has not been tested yet.

jab1982 commented 3 years ago

The SDX testbed was fixed, tested, and documented. Details shared via Slack.

mcevik0 commented 3 years ago

I think it is better to post technical info to this GitHub issue, so I'm copying it over from Slack. https://renci.slack.com/archives/CRS2KPHFV/p1599079144028000

Now, all you have to do is push your flows
I’ve left a flow for VLAN 3006 as an example
All nodes can talk to each other on that VLAN
via SDX
SOX Corsa:
port 1
 ifdescr "SoX Juniper Rtr - DAC"
 tunnel-mode ctag
!
bridge add br25 openflow resources 10
!
bridge br25
 dpid 0000de0a45e6734d
 tunnel attach ofport 1 port 1 vlan-id 1805
 tunnel ofport 1
 ifdescr "connection to Miami"
 tunnel attach ofport 2 port 1 vlan-id 1806
 tunnel ofport 2
 ifdescr "connection to Chile"
 tunnel attach ofport 5 port 1 vlan-id 3922
 tunnel ofport 5
 ifdescr "SOX_DTN"
!
AMPATH Corsa:
port 9
 ifdescr "Z9100-te1/1/1"
 mtu 9022
 tunnel-mode ctag
!
port 10
 ifdescr "Z9100-te1/1/2"
 mtu 9022
 tunnel-mode passthrough
!
port 12
 ifdescr "SDX-DTN via Dell te-1/1/4"
 mtu 9022
 tunnel-mode passthrough
!
bridge add br25 openflow resources 2
!
bridge br25
 bridge-descr "AWSDX-Corsa-Miami"
 dpid 0000ae996129b641
 tunnel attach ofport 1 port 9 vlan-id 1805
 tunnel ofport 1
 ifdescr "Atlanta"
 tunnel attach ofport 2 port 10
 tunnel ofport 2
 ifdescr "Chile"
 tunnel attach ofport 3 port 12
 tunnel ofport 3
 ifdescr "Miami-DTN"
!
Chile Corsa:
port 33
 ifdescr "LSST-Router-50"
 fec none
 mtu 9022
 tunnel-mode passthrough
 bandwidth set 100000M
!
port 37
 ifdescr "100G-DTN"
 fec none
 mtu 9022
 tunnel-mode passthrough
 bandwidth set 100000M
!
bridge add br25 openflow resources 2
!
bridge br25
 bridge-descr "AWSDX-Corsa-Chile"
 dpid 00004abba5f3ce47
 tunnel attach ofport 1 port 33
 tunnel attach ofport 2 port 37
!
mcevik0 commented 3 years ago

More copied from Slack https://renci.slack.com/archives/CRS2KPHFV/p1599080323031400

The hosts are still configured
I usually use
For VLANs 1-99:  10.0.VLAN.0/24
For VLANs 100+: break VLAN in two digits 10.FIRST_TWO.SECOND_TWO.0/24
For instance, VLAN 3006: 10.30.06.0/24
VLAN 990: 10.9.90.0/24
I left VLAN 3006 in place with IP addresses:
Miami: 10.30.06.1/24
Chile: 10.30.06.2/24
Atlanta: 10.30.06.3/24
No mac-learning of course
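
The addressing convention described above can be captured in a small helper. This is a sketch of the convention as stated (not project code); the function name is made up.

```python
# VLAN-to-subnet convention from the note above:
#   VLANs 1-99  -> 10.0.VLAN.0/24
#   VLANs 100+  -> split the VLAN into two 2-digit groups: 10.FIRST.SECOND.0/24
#   e.g. VLAN 3006 -> 10.30.6.0/24, VLAN 990 -> 10.9.90.0/24
def vlan_subnet(vlan: int) -> str:
    if vlan < 100:
        return f"10.0.{vlan}.0/24"
    first, second = divmod(vlan, 100)
    return f"10.{first}.{second}.0/24"

print(vlan_subnet(3006))  # -> 10.30.6.0/24
print(vlan_subnet(990))   # -> 10.9.90.0/24
```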
mcevik0 commented 3 years ago

Current status of the VFCs

MIAMI

miami-corsa# show bridge br25
  dpid          : 0000ae996129b641
  subtype       : openflow
  resources     : 2%
  traffic-class : 0
  bridge-descr  : AWSDX-Corsa-Miami
  protocols     : OpenFlow13
  tunnels       : 3
  controllers   : 0

miami-corsa# show bridge br25 controller 
Info: There are no controllers present.

miami-corsa# show bridge br25 tunnel 
                                                                                         count : 3
  +--------+-----------+-------------+-------+-------+--------+--------+-------+-------+---------+
  | ofport |  ifdescr  |    type     | port  | vlan  | tclass |  tpid  | inner | oper  | v-range |
  +--------+-----------+-------------+-------+-------+--------+--------+-------+-------+---------+
  |      1 | Atlanta   | ctag        |     9 |  1805 |      0 | 0x8100 |     - |  up   | -       |
  |      2 | Chile     | passthrough |    10 |     - |      0 |      - |     - |  up   | -       |
  |      3 | Miami-DTN | passthrough |    12 |     - |      0 |      - |     - |  up   | -       |
  +--------+-----------+-------------+-------+-------+--------+--------+-------+-------+---------+

miami-corsa# show openflow flow br25
                                              count : 6
  +-------+-------+------------------------+----------+
  | table | prio  |         match          | actions  |
  +-------+-------+------------------------+----------+
  |     0 |     - | in_port=1,dl_vlan=3006 | output:3 |
  |     0 |     - | in_port=2,dl_vlan=3001 | output:3 |
  |     0 |     - | in_port=2,dl_vlan=3003 | output:3 |
  |     0 |     - | in_port=3,dl_vlan=3001 | output:2 |
  |     0 |     - | in_port=3,dl_vlan=3003 | output:2 |
  |     0 |     - | in_port=3,dl_vlan=3006 | output:1 |
  +-------+-------+------------------------+----------+

SOX

corsa-sdx-56m# show bridge br25
  dpid          : 0000de0a45e6734d
  subtype       : openflow
  resources     : 10%
  traffic-class : 0
  protocols     : OpenFlow13
  tunnels       : 3
  controllers   : 0
corsa-sdx-56m# show bridge br25 controller 
Info: There are no controllers present.
corsa-sdx-56m# show bridge br25 tunnel 
                                                                                             count : 3
  +--------+---------------------+-------+-------+-------+--------+--------+-------+-------+---------+
  | ofport |       ifdescr       | type  | port  | vlan  | tclass |  tpid  | inner | oper  | v-range |
  +--------+---------------------+-------+-------+-------+--------+--------+-------+-------+---------+
  |      1 | connection to Miami | ctag  |     1 |  1805 |      0 | 0x8100 |     - |  up   | -       |
  |      2 | connection to Chile | ctag  |     1 |  1806 |      0 | 0x8100 |     - |  up   | -       |
  |      5 | SOX_DTN             | ctag  |     1 |  3922 |      0 | 0x8100 |     - |  up   | -       |
  +--------+---------------------+-------+-------+-------+--------+--------+-------+-------+---------+
corsa-sdx-56m# show openflow flow br25
                                                       count : 4
  +-------+-------+------------------------+-------------------+
  | table | prio  |         match          |      actions      |
  +-------+-------+------------------------+-------------------+
  |     0 |   100 | in_port=1,dl_vlan=3006 | output:5,output:2 |
  |     0 |   100 | in_port=2,dl_vlan=3006 | output:5,output:1 |
  |     0 |     - | in_port=5              | drop              |
  |     0 |     - | in_port=5,dl_vlan=3006 | output:1,output:2 |
  +-------+-------+------------------------+-------------------+

CHILE

lsst-corsa# show bridge br25
  dpid          : 00004abba5f3ce47
  subtype       : openflow
  resources     : 2%
  traffic-class : 0
  bridge-descr  : AWSDX-Corsa-Chile
  protocols     : OpenFlow13
  tunnels       : 2
  controllers   : 0
lsst-corsa# show bridge br25 controller 
Info: There are no controllers present.
lsst-corsa# show bridge br25 tunnel 
                                                                                      count : 2
  +--------+---------+-------------+-------+-------+--------+-------+-------+-------+---------+
  | ofport | ifdescr |    type     | port  | vlan  | tclass | tpid  | inner | oper  | v-range |
  +--------+---------+-------------+-------+-------+--------+-------+-------+-------+---------+
  |      1 |         | passthrough |    33 |     - |      0 |     - |     - |  up   | -       |
  |      2 |         | passthrough |    37 |     - |      0 |     - |     - |  up   | -       |
  +--------+---------+-------------+-------+-------+--------+-------+-------+-------+---------+
lsst-corsa# show openflow flow br25
                                              count : 6
  +-------+-------+------------------------+----------+
  | table | prio  |         match          | actions  |
  +-------+-------+------------------------+----------+
  |     0 |     - | in_port=1,dl_vlan=330  | output:2 |
  |     0 |     - | in_port=1,dl_vlan=3001 | output:2 |
  |     0 |     - | in_port=1,dl_vlan=3006 | output:2 |
  |     0 |     - | in_port=2,dl_vlan=330  | output:1 |
  |     0 |     - | in_port=2,dl_vlan=3001 | output:1 |
  |     0 |     - | in_port=2,dl_vlan=3006 | output:1 |
  +-------+-------+------------------------+----------+
mcevik0 commented 3 years ago

@jab1982 - I am having difficulty finding the IP addresses below on the servers (MIAMI and CHILE). https://renci.slack.com/archives/CRS2KPHFV/p1599080444034100

I left VLAN 3006 in place with IP addresses:
Miami: 10.30.06.1/24
Chile: 10.30.06.2/24
Atlanta: 10.30.06.3/24
No mac-learning of course

I am looking at the servers below.

# Miami
[root@sdxlc ~]# hostname
sdxlc.ampath.net

# Atlanta
[root@awsdx-app ~]# hostname
awsdx-app.cloud.rnoc.gatech.edu

[root@awsdx-ctrl ~]# hostname
awsdx-ctrl.cloud.rnoc.gatech.edu

# Chile
root@acanets-chile:~# hostname
acanets-chile

Can you let me know which servers are used?

mcevik0 commented 3 years ago

I can log in to the Miami-DTN, Chile-DTN, and Atlanta-DTN shown in the topology drawing, and I confirm that traffic can be exchanged across the sites. I'm recording the reference information here so it is available when we need it.

MIAMI

[root@s1 ~]# ip addr show dev enp1s0.3006
7: enp1s0.3006@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether 00:60:dd:45:47:ff brd ff:ff:ff:ff:ff:ff
    inet 10.30.6.1/24 brd 10.30.6.255 scope global enp1s0.3006
       valid_lft forever preferred_lft forever
    inet6 fe80::260:ddff:fe45:47ff/64 scope link 
       valid_lft forever preferred_lft forever

[root@s1 ~]# ping -c 3 10.30.6.2
PING 10.30.6.2 (10.30.6.2) 56(84) bytes of data.
64 bytes from 10.30.6.2: icmp_seq=1 ttl=64 time=190 ms
64 bytes from 10.30.6.2: icmp_seq=2 ttl=64 time=189 ms
64 bytes from 10.30.6.2: icmp_seq=3 ttl=64 time=189 ms

--- 10.30.6.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 189.774/189.856/190.003/0.104 ms

[root@s1 ~]# ping -c 3 10.30.6.3
PING 10.30.6.3 (10.30.6.3) 56(84) bytes of data.
64 bytes from 10.30.6.3: icmp_seq=1 ttl=64 time=19.0 ms
64 bytes from 10.30.6.3: icmp_seq=2 ttl=64 time=18.8 ms
64 bytes from 10.30.6.3: icmp_seq=3 ttl=64 time=18.8 ms

--- 10.30.6.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 18.854/18.932/19.047/0.139 ms

CHILE

[root@dtn01 ~]# ip addr show dev enp6s0.3006
19: enp6s0.3006@enp6s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether 50:6b:4b:cc:f4:12 brd ff:ff:ff:ff:ff:ff
    inet 10.30.6.2/24 brd 10.30.6.255 scope global enp6s0.3006
       valid_lft forever preferred_lft forever
    inet6 fe80::526b:4bff:fecc:f412/64 scope link 
       valid_lft forever preferred_lft forever

[root@dtn01 ~]# ping -c 3 10.30.6.1
PING 10.30.6.1 (10.30.6.1) 56(84) bytes of data.
64 bytes from 10.30.6.1: icmp_seq=1 ttl=64 time=190 ms
64 bytes from 10.30.6.1: icmp_seq=2 ttl=64 time=190 ms
64 bytes from 10.30.6.1: icmp_seq=3 ttl=64 time=189 ms

--- 10.30.6.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 189.865/189.978/190.046/0.080 ms

[root@dtn01 ~]# ping -c 3 10.30.6.3
PING 10.30.6.3 (10.30.6.3) 56(84) bytes of data.
64 bytes from 10.30.6.3: icmp_seq=1 ttl=64 time=171 ms
64 bytes from 10.30.6.3: icmp_seq=2 ttl=64 time=171 ms
64 bytes from 10.30.6.3: icmp_seq=3 ttl=64 time=171 ms

--- 10.30.6.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 171.817/171.835/171.862/0.019 ms

ATLANTA

[root@awsdx-app ~]# ip addr show dev eth2.3006
14: eth2.3006@eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:1a:4a:10:00:0f brd ff:ff:ff:ff:ff:ff
    inet 10.30.6.3/24 brd 10.30.6.255 scope global eth2.3006
       valid_lft forever preferred_lft forever
    inet6 fe80::21a:4aff:fe10:f/64 scope link 
       valid_lft forever preferred_lft forever

[root@awsdx-app ~]# ping -c 3 10.30.6.1
PING 10.30.6.1 (10.30.6.1) 56(84) bytes of data.
64 bytes from 10.30.6.1: icmp_seq=1 ttl=64 time=18.9 ms
64 bytes from 10.30.6.1: icmp_seq=2 ttl=64 time=18.9 ms
64 bytes from 10.30.6.1: icmp_seq=3 ttl=64 time=19.0 ms

--- 10.30.6.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 18.918/18.974/19.072/0.069 ms

[root@awsdx-app ~]# ping -c 3 10.30.6.2
PING 10.30.6.2 (10.30.6.2) 56(84) bytes of data.
64 bytes from 10.30.6.2: icmp_seq=1 ttl=64 time=171 ms
64 bytes from 10.30.6.2: icmp_seq=2 ttl=64 time=171 ms
64 bytes from 10.30.6.2: icmp_seq=3 ttl=64 time=171 ms

--- 10.30.6.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 171.771/171.882/171.955/0.079 ms
mcevik0 commented 3 years ago

Working setup with the Miami DTN hosting the SDX controller and LC-Miami, at commit https://github.com/atlanticwave-sdx/atlanticwave-proto/commit/7d41a16068df379925ea197a865092d9363a0d14

Switch config

miami-corsa# show bridge br25 tunnel
                                                                                         count : 7
  +--------+-----------+-------------+-------+-------+--------+--------+-------+-------+---------+
  | ofport |  ifdescr  |    type     | port  | vlan  | tclass |  tpid  | inner | oper  | v-range |
  +--------+-----------+-------------+-------+-------+--------+--------+-------+-------+---------+
  |      1 | Atlanta   | ctag        |     9 |  1805 |      0 | 0x8100 |     - |  up   | -       |
  |      2 | Chile     | passthrough |    10 |     - |      0 |      - |     - |  up   | -       |
  |      3 | Miami-DTN | passthrough |    12 |     - |      0 |      - |     - |  up   | -       |
  |      4 |           | untagged    |    13 |     - |      0 |      - |     - |  up   | -       |
  |      5 |           | untagged    |    15 |     - |      0 |      - |     - |  up   | -       |
  |     23 |           | passthrough |    23 |     - |      0 |      - |     - |  up   | -       |
  |     24 |           | passthrough |    24 |     - |      0 |      - |     - |  up   | -       |
  +--------+-----------+-------------+-------+-------+--------+--------+-------+-------+---------+
miami-corsa# show openflow flow br25 full
                                                                                                                               count : 19
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  | table | prio  |           match           |          actions          | cookie | packets | bytes | idle t.o. | hard t.o. | duration |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     0 |     0 | -                         | goto_table:1              |    0x0 |     879 | 56428 |         - |         - |  15.521s |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     0 |   100 | in_port=1,dl_vlan=3001    | output:2,output:3         |    0x0 |       0 |     0 |         - |         - |  15.520s |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     0 |   100 | in_port=2,dl_vlan=3001    | output:1,output:3         |    0x0 |       0 |     0 |         - |         - |  15.520s |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     0 |   100 | in_port=3,dl_vlan=3001    | output:1,output:2         |    0x0 |       0 |     0 |         - |         - |  15.520s |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     1 |     0 | -                         | goto_table:2              |    0x0 |     879 | 56428 |         - |         - |  15.521s |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     2 |     0 | -                         | goto_table:3              |    0x0 |     879 | 56428 |         - |         - |  15.521s |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     3 |     0 | -                         | goto_table:4              |    0x0 |     879 | 56428 |         - |         - |  15.521s |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     3 |     0 | in_port=3                 | CONTROLLER:65509,         |    0x3 |       0 |     0 |         - |         - |  14.044s |
  |       |       |                           | goto_table:4              |        |         |       |           |           |          |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     3 |     0 | in_port=4                 | CONTROLLER:65509,         |    0x4 |       0 |     0 |         - |         - |  14.012s |
  |       |       |                           | goto_table:4              |        |         |       |           |           |          |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     3 |     0 | in_port=5                 | CONTROLLER:65509,         |    0x2 |       0 |     0 |         - |         - |  14.080s |
  |       |       |                           | goto_table:4              |        |         |       |           |           |          |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     4 |     0 | -                         | clear_actions             |    0x0 |       0 |     0 |         - |         - |  15.521s |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     4 |     0 | in_port=1                 | clear_actions             |    0x0 |      32 |  2048 |         - |         - |  15.521s |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     4 |     1 | in_port=1,                | output:3,output:5,        |    0x1 |       0 |     0 |         - |         - |  14.372s |
  |       |       | dl_dst=ff:ff:ff:ff:ff:ff  | output:2,output:4         |        |         |       |           |           |          |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     4 |     0 | in_port=2                 | clear_actions             |    0x0 |       1 |    68 |         - |         - |  15.521s |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     4 |     1 | in_port=2,                | output:1,output:3,        |    0x1 |       0 |     0 |         - |         - |  14.372s |
  |       |       | dl_dst=ff:ff:ff:ff:ff:ff  | output:5,output:4         |        |         |       |           |           |          |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     4 |     0 | in_port=3                 | clear_actions             |    0x0 |       0 |     0 |         - |         - |  15.521s |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     4 |     1 | in_port=3,                | output:1,output:5,        |    0x1 |       0 |     0 |         - |         - |  14.372s |
  |       |       | dl_dst=ff:ff:ff:ff:ff:ff  | output:2,output:4         |        |         |       |           |           |          |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     4 |     1 | in_port=4,                | output:1,output:3,        |    0x1 |       0 |     0 |         - |         - |  14.372s |
  |       |       | dl_dst=ff:ff:ff:ff:ff:ff  | output:5,output:2         |        |         |       |           |           |          |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+
  |     4 |     1 | in_port=5,                | output:1,output:3,        |    0x1 |       0 |     0 |         - |         - |  14.372s |
  |       |       | dl_dst=ff:ff:ff:ff:ff:ff  | output:2,output:4         |        |         |       |           |           |          |
  +-------+-------+---------------------------+---------------------------+--------+---------+-------+-----------+-----------+----------+

Some changes will be applied to the switch tunnels to accommodate port separation for SDXController|LC|DTN with OpenFlow ports. Update: ports 4 and 5 were created for testing; their physical counterparts are not significant.

mcevik0 commented 3 years ago

This is the exception raised when this manifest was used: https://github.com/atlanticwave-sdx/atlanticwave-proto/blob/edc3ea4bc7fea1c4d0770a99a5f2e3a3fa2448f4/configuration/awave-production/awave-production.manifest

Received a INSTL message from 0x7f7ef590b5d0
2020-09-11 17:29:49,962 localcontroller: 140183363348224 DEBUG    install_rule_sdxmsg: 3:191973783352897:EdgePortLCRule: switch 191973783352897, 3:3
install_rule_sdxmsg: 3:191973783352897:EdgePortLCRule: switch 191973783352897, 3:3
Traceback (most recent call last):
  File "LocalController.py", line 889, in <module>
    lc._main_loop()
  File "LocalController.py", line 218, in _main_loop
    self.install_rule_sdxmsg(msg)
  File "LocalController.py", line 645, in install_rule_sdxmsg
    self.rm.add_rule(cookie, switch_id, rule, RULE_STATUS_INSTALLING)
  File "/atlanticwave-proto/localctlr/LCRuleManager.py", line 76, in add_rule
    (cookie, switch_id, str(lcrule)))
LCRuleManager.LCRuleManagerValidationError: Duplicate add_rule for 3:191973783352897:EdgePortLCRule: switch 191973783352897, 3:3
EXIT RECEIVED

Note the ports for sdx, lc, and dtn: https://github.com/atlanticwave-sdx/atlanticwave-proto/blob/edc3ea4bc7fea1c4d0770a99a5f2e3a3fa2448f4/configuration/awave-production/awave-production.manifest#L135
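The traceback shows `LCRuleManager.add_rule` raising when the same `EdgePortLCRule` arrives twice for one (cookie, switch) pair. A hedged, in-memory sketch of a duplicate-tolerant variant is below; the real `LCRuleManager` persists rules in SQLAlchemy/SQLite, and `RuleStore` plus the `strict` flag are illustrative names, not the project's API.

```python
# Illustrative sketch only: shows how a re-delivered rule could be treated as
# a no-op instead of a fatal error. The real LCRuleManager stores rules in a
# database table; this dict-backed class is a simplification.
class LCRuleManagerValidationError(Exception):
    pass

class RuleStore:
    def __init__(self):
        self._rules = {}  # (cookie, switch_id) -> (rule, status)

    def add_rule(self, cookie, switch_id, rule, status, strict=False):
        key = (cookie, switch_id)
        if key in self._rules:
            if strict:
                # Current behaviour seen in the traceback: refuse duplicates.
                raise LCRuleManagerValidationError(
                    "Duplicate add_rule for %s:%s:%s" % (cookie, switch_id, rule))
            return False  # already installed; treat re-delivery as a no-op
        self._rules[key] = (rule, status)
        return True
```

Whether a duplicate INSTL message should be fatal or idempotent is a design choice; the sketch just makes the two options explicit.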

mcevik0 commented 3 years ago

Build and run ATL Local-controller

Manifest: https://github.com/atlanticwave-sdx/atlanticwave-proto/blob/faca85fe20a3ab39401206494bce3d553fc494dc/configuration/awave-production/awave-production.manifest

Switch:

corsa-sdx-56m# show bridge br25 tunnel
                                                                                                   count : 5
  +--------+---------------------+-------------+-------+-------+--------+--------+-------+-------+---------+
  | ofport |       ifdescr       |    type     | port  | vlan  | tclass |  tpid  | inner | oper  | v-range |
  +--------+---------------------+-------------+-------+-------+--------+--------+-------+-------+---------+
  |      1 | connection to Miami | ctag        |     1 |  1805 |      0 | 0x8100 |     - |  up   | -       |
  |      2 | connection to Chile | ctag        |     1 |  1806 |      0 | 0x8100 |     - |  up   | -       |
  |      5 | SOX_DTN             | ctag        |     1 |  3922 |      0 | 0x8100 |     - |  up   | -       |
  |     29 |                     | passthrough |    29 |     - |      0 |      - |     - |  up   | -       |
  |     30 |                     | passthrough |    30 |     - |      0 |      - |     - |  up   | -       |
  +--------+---------------------+-------------+-------+-------+--------+--------+-------+-------+---------+

Run ATL Local-controller:
The Corsa VFC cannot connect to the OpenFlow controller on awsdx-app (128.61.149.224):

[root@awsdx-app ~]# ./run-lc.sh 

============================================================================== 
--- Run Docker Container for TYPE: lc - MODE: attached - SITE: atl - CONFIGURATION: awave-production - MANIFEST: awave-production.manifest
============================================================================== 
--- /root/aw.sh - SITE: atl 
--- /root/aw.sh - MODE: attached 
--- /root/aw.sh - CONFIG: 
--- /root/aw.sh - MANIFEST: 
--- /root/aw.sh - SITE: atl 
--- /root/aw.sh - MODE: attached 
--- /root/aw.sh - CONFIG: awave-production
--- /root/aw.sh - MANIFEST: awave-production.manifest
++ SITE=atl
++ export SITE
++ echo '--- ./start-lc-controller.sh - SITE: atl'
--- ./start-lc-controller.sh - SITE: atl
++ MODE=attached
++ echo '--- ./start-lc-controller.sh - MODE: attached'
--- ./start-lc-controller.sh - MODE: attached
++ CONFIG=awave-production
++ echo '--- ./start-lc-controller.sh - CONFIG: awave-production'
--- ./start-lc-controller.sh - CONFIG: awave-production
++ MANIFEST=awave-production.manifest
++ echo '--- ./start-lc-controller.sh - MANIFEST: awave-production.manifest'
--- ./start-lc-controller.sh - MANIFEST: awave-production.manifest
++ '[' attached == detached ']'
++ OPTS=it
++ SDXIPVAL=10.30.1.254
++ export SDXIPVAL
++ case ${SITE} in
++ RYU_PORT=6683
++ LC_SITE=atlctlr
++ echo '--- ./start-lc-controller.sh - LC_SITE: atlctlr'
--- ./start-lc-controller.sh - LC_SITE: atlctlr
++ echo '--- ./start-lc-controller.sh - RYU_PORT: 6683'
--- ./start-lc-controller.sh - RYU_PORT: 6683
++ cd atlanticwave-proto/localctlr/
+++ docker ps -a -f name=atlctlr -q
++ LC_CONTAINER=
++ [[ -n '' ]]
++ docker volume rm atlanticwave-proto
atlanticwave-proto
++ docker volume create atlanticwave-proto
atlanticwave-proto
++ docker run --rm --network host -v atlanticwave-proto:/atlanticwave-proto -e MANIFEST=/awave-production.manifest -e SITE=atlctlr -e SDXIP=10.30.1.254 -p 6683:6683 -it --name=atlctlr lc_container
WARNING: Published ports are discarded when using host network mode
Site for LC: atlctlr
Manifest file: /awave-production.manifest
SDXIP: 10.30.1.254
Already up-to-date.
Namespace(database=':memory:', host='10.30.1.254', manifest='/awave-production.manifest', name='atlctlr', sdxport=5555)
2020-09-11 22:12:59,766 localcontroller: 140233217324800 INFO     LocalController atlctlr starting
LocalController atlctlr starting
2020-09-11 22:12:59,767 localcontroller: 140233217324800 CRITICAL Connection to DB: :memory:
Connection to DB: :memory:
2020-09-11 22:12:59,775 localcontroller: 140233217324800 INFO     Failed to load config_table from DB, creating table
Failed to load config_table from DB, creating table
Creating table: atlctlr-config on Engine(sqlite:///:memory:)
2020-09-11 22:12:59,777 localcontroller: 140233217324800 INFO     Opening config file None
Opening config file None
2020-09-11 22:12:59,778 localcontroller: 140233217324800 WARNING  exception when opening config file: coercing to Unicode: need string or buffer, NoneType found
exception when opening config file: coercing to Unicode: need string or buffer, NoneType found
2020-09-11 22:12:59,778 localcontroller: 140233217324800 INFO     Opening manifest file /awave-production.manifest
Opening manifest file /awave-production.manifest
2020-09-11 22:12:59,778 localcontroller: 140233217324800 INFO     Successfully opened manifest file /awave-production.manifest
Successfully opened manifest file /awave-production.manifest
2020-09-11 22:12:59,778 localcontroller: 140233217324800 INFO     Adding new manifest filename /awave-production.manifest
Adding new manifest filename /awave-production.manifest
Creating column: value (<class 'sqlalchemy.sql.sqltypes.UnicodeText'>) on 'atlctlr-config'
Context impl SQLiteImpl.
Will assume non-transactional DDL.
Creating column: key (<class 'sqlalchemy.sql.sqltypes.UnicodeText'>) on 'atlctlr-config'
Context impl SQLiteImpl.
Will assume non-transactional DDL.
2020-09-11 22:12:59,787 localcontroller: 140233217324800 INFO     Adding new Ryu configuration {'ryucxninternalport': 55783, 'openflowport': 6683}
Adding new Ryu configuration {'ryucxninternalport': 55783, 'openflowport': 6683}
2020-09-11 22:12:59,788 localcontroller: 140233217324800 INFO     Adding new LC configuration {'lcip': u'10.30.1.3'}
Adding new LC configuration {'lcip': u'10.30.1.3'}
2020-09-11 22:12:59,789 localcontroller: 140233217324800 INFO     Adding new SDX configuration {'sdxip': '10.30.1.254', 'sdxport': 5555}
Adding new SDX configuration {'sdxip': '10.30.1.254', 'sdxport': 5555}
2020-09-11 22:12:59,790 localcontroller: 140233217324800 INFO     Adding new internal_config for DPID 244135703769933
Adding new internal_config for DPID 244135703769933
2020-09-11 22:12:59,790 localcontroller.lcrulemanager: 140233217324800 CRITICAL Connection to DB: :memory:
Connection to DB: :memory:
2020-09-11 22:12:59,793 localcontroller.lcrulemanager: 140233217324800 INFO     Failed to load rule_table from DB, creating table
Failed to load rule_table from DB, creating table
Creating table: lcrules on Engine(sqlite:///:memory:)
2020-09-11 22:12:59,794 localcontroller.lcrulemanager: 140233217324800 WARNING  LCRuleManager initialized: 0x7f8a9109dfd0
LCRuleManager initialized: 0x7f8a9109dfd0
2020-09-11 22:12:59,794 localcontroller.interryucontrollercxnmgr: 140233217324800 WARNING  InterRyuControllerConnectionManager initialized: 0x7f8a911fbf50
InterRyuControllerConnectionManager initialized: 0x7f8a911fbf50
2020-09-11 22:12:59,795 localcontroller.ryucontrollerinterface: 140233109550848 DEBUG    RyuControllerInterface: Starting inter_cm_thread: 10.30.1.3:55783
RyuControllerInterface: Starting inter_cm_thread: 10.30.1.3:55783
2020-09-11 22:12:59,795 localcontroller.interryucontrollercxnmgr: 140233109550848 CRITICAL Opening listening socket: 10.30.1.3:55783
Opening listening socket: 10.30.1.3:55783
2020-09-11 22:12:59,796 localcontroller.ryucontrollerinterface: 140233217324800 DEBUG    About to start ryu-manager.
About to start ryu-manager.
2020-09-11 22:12:59,796 localcontroller.ryucontrollerinterface: 140233217324800 DEBUG    ENV
ENV
2020-09-11 22:12:59,800 localcontroller.ryucontrollerinterface: 140233217324800 DEBUG    ENV - shell=True
HOSTNAME=awsdx-app.cloud.rnoc.gatech.edu
TERM=xterm
SITE=atlctlr
ENV - shell=True
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/atlanticwave-proto/localctlr
MANIFEST=/awave-production.manifest
SDXIP=10.30.1.254
SHLVL=1
HOME=/root
PYTHONPATH=:.:/atlanticwave-proto
OLDPWD=/atlanticwave-proto
_=/usr/bin/python
HOSTNAME=awsdx-app.cloud.rnoc.gatech.edu
SDXIP=10.30.1.254
SHLVL=1
HOME=/root
OLDPWD=/atlanticwave-proto
_=/usr/bin/python
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/atlanticwave-proto/localctlr
PYTHONPATH=:.:/atlanticwave-proto
MANIFEST=/awave-production.manifest
SITE=atlctlr
2020-09-11 22:12:59,807 localcontroller.ryucontrollerinterface: 140233217324800 DEBUG    Started ryu-manager.
Started ryu-manager.
lzma module is not available
Registered VCS backend: git
Registered VCS backend: hg
Registered VCS backend: svn
Registered VCS backend: bzr
loading app /atlanticwave-proto/localctlr/RyuTranslateInterface.py
loading app ryu.controller.ofp_handler
instantiating app /atlanticwave-proto/localctlr/RyuTranslateInterface.py of RyuTranslateInterface
2020-09-11 22:13:00,612 localcontroller.ryutranslateinterface: 139806127571664 WARNING  Starting up RyuTranslateInterface
Starting up RyuTranslateInterface
2020-09-11 22:13:00,612 localcontroller.ryutranslateinterface: 139806127571664 CRITICAL Connection to DB: :memory:
Connection to DB: :memory:
Failed to load atlctlr-config from DB, creating new table
Creating table: atlctlr-config on Engine(sqlite:///:memory:)
Creating table: atlctlr-rule on Engine(sqlite:///:memory:)
Opening config file /awave-production.manifest
Adding new lcip 10.30.1.3
Creating column: value (<class 'sqlalchemy.sql.sqltypes.UnicodeText'>) on 'atlctlr-config'
Context impl SQLiteImpl.
Will assume non-transactional DDL.
Creating column: key (<class 'sqlalchemy.sql.sqltypes.UnicodeText'>) on 'atlctlr-config'
Context impl SQLiteImpl.
Will assume non-transactional DDL.
Updating ryucxnport 55783
Adding new internal_config for DPID 244135703769933
InterRyuControllerConnectionManager initialized: 0x7f27216b1ed0
RyuTranslateInterface: Opening outbound connection to RyuConnectionInterface on 10.30.1.3:55783
Connecting to 10.30.1.3:55783
Connection established! 10.30.1.3:55783 <eventlet.greenio.base.GreenSocket object at 0x7f27216b1f10>
Looking for datapath
Waiting {}
2020-09-11 22:13:00,639 localcontroller.ryutranslateinterface: 139806127571664 WARNING  RyuTranslateInterface initialized: 0x7f27216f6c50
RyuTranslateInterface initialized: 0x7f27216f6c50
instantiating app ryu.controller.ofp_handler of OFPHandler
BRICK atlctlr
  CONSUMES EventOFPErrorMsg
  CONSUMES EventOFPPacketIn
  CONSUMES EventOFPSwitchFeatures
BRICK ofp_event
  PROVIDES EventOFPErrorMsg TO {'atlctlr': set(['main', 'config'])}
  PROVIDES EventOFPPacketIn TO {'atlctlr': set(['main'])}
  PROVIDES EventOFPSwitchFeatures TO {'atlctlr': set(['config'])}
  CONSUMES EventOFPEchoRequest
  CONSUMES EventOFPPortDescStatsReply
  CONSUMES EventOFPHello
  CONSUMES EventOFPErrorMsg
  CONSUMES EventOFPPortStatus
  CONSUMES EventOFPEchoReply
  CONSUMES EventOFPSwitchFeatures
Waiting {}
Waiting {}
Waiting {}
Waiting {}
mcevik0 commented 3 years ago

Regarding the problem on the ATL side: is there a firewall inside the campus that blocks traffic on port tcp/6683?

corsa-sdx-56m# show bridge br25 controller
                                                                                 count : 1
  +----------+----------------+-------+-------+-----------+----------------------+-------+
  |   name   |       ip       | port  |  tls  | connected |        status        | role  |
  +----------+----------------+-------+-------+-----------+----------------------+-------+
  | CONTbr25 | 128.61.149.224 |  6683 |  no   |    no     | Connection timed out | other |
  +----------+----------------+-------+-------+-----------+----------------------+-------+

I tested the connection from the other VM (awsdx-ctrl) to the current controller; it works.

[root@awsdx-ctrl ~]# nc -v 128.61.149.224 6683
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 128.61.149.224:6683.
1?

Traffic from the Corsa switch to the OpenFlow controller (128.61.149.224) is being interrupted somewhere along the path. @russclarkgt - we need your input.
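The nc test above checks TCP reachability from one host; the same probe can be scripted to compare reachability from several vantage points when isolating where port 6683 is filtered. This is a minimal sketch, not part of the project; `can_connect` is a hypothetical helper.

```python
# Minimal TCP reachability probe, equivalent to "nc -v <host> <port>".
# Returns True if a TCP connection can be established within the timeout.
import socket

def can_connect(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Example usage against the controller in question:
    print(can_connect("128.61.149.224", 6683))
```

A False result only tells you the path from that vantage point is blocked or the listener is down; running the probe from both the LC host and a host adjacent to the switch helps localize the filtering device.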

mcevik0 commented 3 years ago

@russclarkgt, @jab1982 - I am looking at the drawing located at https://renci.slack.com/archives/CRS2KPHFV/p1599079286029300. Currently the VM awsdx-app appears to be connected to the Corsa switch. However, according to the previous demo doc, the other VM, awsdx-ctrl, was the controller for the demo setup. I also see traces of firewall rules on the server for OpenFlow traffic, as well as an interface on your (probably protected) network:

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 143.215.216.21  netmask 255.255.255.0  broadcast 143.215.216.255
        inet6 fe80::21a:4aff:fe10:2a  prefixlen 64  scopeid 0x20<link>
        ether 00:1a:4a:10:00:2a  txqueuelen 1000  (Ethernet)
        RX packets 1062812  bytes 71758066 (68.4 MiB)
        RX errors 0  dropped 14  overruns 0  frame 0
        TX packets 196935  bytes 9986426 (9.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
mcevik0 commented 3 years ago

@russclarkgt @jab1982 - And please see below. I added a controller on awsdx-ctrl, and the VFC is connected to it through the 143.XXXX network. However, the drawing referenced above shows awsdx-app connected to the Corsa switch.

corsa-sdx-56m# show bridge br25 controller 
                                                                                  count : 2
  +-----------+----------------+-------+-------+-----------+----------------------+-------+
  |   name    |       ip       | port  |  tls  | connected |        status        | role  |
  +-----------+----------------+-------+-------+-----------+----------------------+-------+
  | CONTbr25  | 128.61.149.224 |  6683 |  no   |    no     | Connection timed out | other |
  | CONTbr251 | 143.215.216.21 |  6653 |  no   |    yes    |                      | other |
  +-----------+----------------+-------+-------+-----------+----------------------+-------+

Can you please let me know which server is the right one to use over which network and interface?