a10networks / acos-client

ACOS API Client
Apache License 2.0

STACK-1896 Feature Source NAT flavor support #320

Closed ytsai-a10 closed 3 years ago

ytsai-a10 commented 3 years ago

Description

Please also see the companion a10-octavia PR: https://github.com/a10networks/a10-octavia/pull/281

Feature Requirements

Design Document: https://teams.microsoft.com/l/file/27F4222F-7CDD-421B-8F0C-97A5C60B2B5D?tenantId=91d27ab9-8c5e-41d4-82e8-3d1bf81fcb2f&fileType=docx&objectUrl=https%3A%2F%2Fa10networks.sharepoint.com%2Fsites%2FOpenstack%2FShared%20Documents%2FDevelopment%2FResearch%20%26%20Design%2FSNAT%20Pools%2FSNAT%20Pools%20Design%20Document.docx&baseUrl=https%3A%2F%2Fa10networks.sharepoint.com%2Fsites%2FOpenstack&serviceName=teams&threadId=19:0f1acbb173d74758b05ee8bacb000d68@thread.tacv2&groupId=37bbf3ad-c05a-4e67-b0cc-12bcd1479beb

Story: https://a10networks.atlassian.net/browse/STACK-1896

Jira Ticket

https://a10networks.atlassian.net/browse/STACK-1896

Technical Approach

See the design document for details.


1. Add flavor schemas for nat-pool and nat-pool-list in api/drivers/flavor_schema.py
2. Show flavor support and validate flavor data in api/drivers/driver.py
3. In a10-octavia, add a new task class for NAT pool creation
  - create the pool according to the flavor
  - if the pool already exists with a different address range, fail
  - if the pool already exists with the same range, pass
4. The NAT pool creation task should also create the NAT pools listed in nat-pool-list
5. Add the NAT pool creation task to the loadbalancer create flow
6. For listener create/set, select the NAT pool in ascending priority: nat-pool flavor < virtual-port flavor < virtual-port flavor regex
7. Handle update cases:
   - Set command for loadbalancer: after the loadbalancer is updated, the NAT pool should still be configured properly on Thunder.
   - Set command for listener: after the listener is updated, Thunder should still use the proper source-nat value.
8. Delete the NAT pools from the nat-pool and nat-pool-list flavors when the loadbalancer is deleted.
    - If a NAT pool is referenced by others, the loadbalancer delete flow should still succeed (leaving the NAT pool for the other loadbalancer to delete), and we raise a warning in this case.
9. Refactor nat.py in acos-client (to match the other SLB objects in acos-client)
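The selection priority in step 6 can be sketched as below. `select_nat_pool` and the flavor-dict layout it assumes are hypothetical illustrations, not actual a10-octavia APIs; only the `name-expressions` shape is taken from the flavor-data examples in this PR.

```python
import re


def select_nat_pool(listener_name, flavor):
    """Pick the source-NAT pool for a listener, in ascending priority:
    nat-pool flavor < virtual-port flavor < virtual-port flavor regex.

    flavor: a parsed flavor-data dict (hypothetical sketch).
    """
    pool = None
    # Lowest priority: the pool-name from the nat-pool flavor section.
    nat_pool = flavor.get('nat-pool', {})
    if 'pool-name' in nat_pool:
        pool = nat_pool['pool-name']
    vport = flavor.get('virtual-port', {})
    # Higher priority: an explicit pool set directly in the virtual-port flavor.
    if 'pool' in vport:
        pool = vport['pool']
    # Highest priority: a name-expression whose regex matches the listener name.
    for expr in vport.get('name-expressions', []):
        if re.search(expr['regex'], listener_name):
            pool = expr['json'].get('pool', pool)
    return pool
```

With the fp_snat_vport flavor-data from the Manual Testing section, a listener named "vport2" would resolve to pool2 via the regex, while any other listener falls back to pool4 from the nat-pool section.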

Config Changes

This is only required if the config has been updated in this PR


Test Cases

  1. Reject flavor data with an invalid format or a missing mandatory key.
  2. A NAT pool should be created on Thunder when the loadbalancer is created.
  3. If the NAT pool name is already used on Thunder:
    • Return an error when the pool content differs.
    • Pass the creation when the pool content is the same.
  4. All listeners will use the NAT pool created by the flavor on this loadbalancer.
  5. For NAT pool selection:
    • virtual-port flavor has higher priority than nat-pool flavor.
    • virtual-port flavor regex has higher priority than virtual-port flavor.
  6. A listener can use a NAT pool that already exists on Thunder.
  7. Thunder will create the NAT pools specified in nat-pool-list.
  8. A listener can use pools in nat-pool-list by specifying the pool in the virtual-port flavor or virtual-port flavor regex.
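Test case 3 (create-or-verify semantics) can be sketched as below. `ensure_nat_pool`, `PoolConflictError`, and the in-memory dict standing in for the device state are all hypothetical names for illustration; the real task talks to Thunder through acos-client.

```python
class PoolConflictError(Exception):
    """Raised when a NAT pool name already exists with different content."""


def ensure_nat_pool(existing_pools, desired):
    """Create-or-verify a NAT pool, mirroring test case 3.

    existing_pools: dict mapping pool-name -> attribute dict (stand-in for
    the pools currently configured on Thunder).
    desired: attribute dict for the pool, including 'pool-name'.
    """
    name = desired['pool-name']
    current = existing_pools.get(name)
    if current is None:
        # Pool does not exist yet: create it.
        existing_pools[name] = dict(desired)
        return 'created'
    if current != desired:
        # Same name, different content: reject the creation.
        raise PoolConflictError(
            'NAT pool %s already exists with different content' % name)
    # Same name and same content: pass the creation as a no-op.
    return 'unchanged'
```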

Manual Testing

Please check the detailed test steps and logs in the RD unit-test document: https://teams.microsoft.com/l/file/DA35D246-0822-42C7-BFF6-1562535E3802?tenantId=91d27ab9-8c5e-41d4-82e8-3d1bf81fcb2f&fileType=docx&objectUrl=https%3A%2F%2Fa10networks.sharepoint.com%2Fsites%2FOpenstack%2FShared%20Documents%2FDevelopment%2FResearch%20%26%20Design%2FSNAT%20Pools%2FSNAT%20Flavor%20Support%20Unit%20Test%20Notes.docx&baseUrl=https%3A%2F%2Fa10networks.sharepoint.com%2Fsites%2FOpenstack&serviceName=teams&threadId=19:0f1acbb173d74758b05ee8bacb000d68@thread.tacv2&groupId=37bbf3ad-c05a-4e67-b0cc-12bcd1479beb

openstack loadbalancer flavorprofile create --name fp_snat_list --provider a10 --flavor-data '{"nat-pool":{"pool-name":"pool1", "start-address":"172.16.1.101", "end-address":"172.16.1.102", "netmask":"/24", "gateway":"172.16.1.1"}, "nat-pool-list":[{"pool-name":"pool2", "start-address":"172.16.2.101", "end-address":"172.16.2.102", "netmask":"/24", "gateway":"172.16.2.1"}, {"pool-name":"pool3", "start-address":"172.16.3.101", "end-address":"172.16.3.102", "netmask":"/24", "gateway":"172.16.3.1"}]}'
openstack loadbalancer flavor create --name f_snat_list --flavorprofile fp_snat_list --description "flavor all test1" --enable
openstack loadbalancer flavorprofile create --name fp_snat_list --provider a10 --flavor-data '{"nat-pool":{"pool-name":"pool4", "start-address":"172.17.1.101", "end-address":"172.17.1.102", "netmask":"/24", "gateway":"172.17.1.1"}}'
openstack loadbalancer flavor create --name f_snat --flavorprofile fp_snat --description "flavor all test2" --enable
openstack loadbalancer flavorprofile create --name fp_snat_vport --provider a10 --flavor-data '{"nat-pool":{"pool-name":"pool4", "start-address":"172.17.1.101", "end-address":"172.17.1.102", "netmask":"/24", "gateway":"172.17.1.1"}, "virtual-port": {"name-expressions": [{"regex": "vport2", "json": {"pool": "pool2"}}]}}'
openstack loadbalancer flavor create --name f_snat_vport --flavorprofile fp_snat_vport --description "flavor all test3" --enable
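The --flavor-data payloads above are plain JSON. As a quick sanity check, the fp_snat_list payload from the first command can be parsed to list the pools the loadbalancer create flow is expected to create on Thunder (pool1 from nat-pool, plus pool2 and pool3 from nat-pool-list):

```python
import json

# The fp_snat_list flavor-data from the first flavorprofile command above.
flavor_data = json.loads('''
{"nat-pool": {"pool-name": "pool1", "start-address": "172.16.1.101",
              "end-address": "172.16.1.102", "netmask": "/24",
              "gateway": "172.16.1.1"},
 "nat-pool-list": [
   {"pool-name": "pool2", "start-address": "172.16.2.101",
    "end-address": "172.16.2.102", "netmask": "/24", "gateway": "172.16.2.1"},
   {"pool-name": "pool3", "start-address": "172.16.3.101",
    "end-address": "172.16.3.102", "netmask": "/24", "gateway": "172.16.3.1"}]}
''')

# All pool names the NAT pool creation task should configure on the device.
pool_names = [flavor_data['nat-pool']['pool-name']] + [
    p['pool-name'] for p in flavor_data['nat-pool-list']]
```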
Case1:
Create lb1 with flavor f_snat_list
openstack loadbalancer create --flavor f_snat_list --vip-subnet-id 11dcb328-f947-4b5f-a789-5ae46118bb81 --name lb1
Result --> lb1 and pool1, pool2, pool3 created
Create lb2 with flavor f_snat
openstack loadbalancer create --flavor f_snat --vip-subnet-id 11dcb328-f947-4b5f-a789-5ae46118bb81 --name lb2
Result --> lb2 and pool4 created

Delete lb1
openstack loadbalancer delete lb1
Result --> lb1 and nat-pools pool1, pool2 and pool3 got deleted
Delete lb2
openstack loadbalancer delete lb2
Result --> lb2 and nat-pool pool4 got deleted

Case2:
Create lb1 with flavor f_snat_list
openstack loadbalancer create --flavor f_snat_list --vip-subnet-id 11dcb328-f947-4b5f-a789-5ae46118bb81 --name lb1
Result --> lb1 and pool1, pool2, pool3 created
Create lb2 with flavor f_snat_list
openstack loadbalancer create --flavor f_snat_list --vip-subnet-id 11dcb328-f947-4b5f-a789-5ae46118bb81 --name lb2
Result --> lb2 created
Delete lb1
openstack loadbalancer delete lb1
Result --> lb1 got deleted with the following warning:
WARNING a10_octavia.controller.worker.tasks.nat_pool_tasks [-] Cannot delete Nat-pool(s) in flavor 1d874007-1045-499f-8821-6b396ec79b0c as they are in use by another loadbalancer(s)
Delete lb2
openstack loadbalancer delete lb2
Result --> lb2 and nat-pools pool1, pool2 and pool3 got deleted

Case3:
Create lb1 with flavor f_snat_list
openstack loadbalancer create --flavor f_snat_list --vip-subnet-id 11dcb328-f947-4b5f-a789-5ae46118bb81 --name lb1
Result --> lb1 and pool1, pool2, pool3 created
Create lb2 with flavor f_snat_vport
openstack loadbalancer create --flavor f_snat_vport --vip-subnet-id 11dcb328-f947-4b5f-a789-5ae46118bb81 --name lb2
Result --> lb2 and pool4 created
Create a listener "vport2" for lb2
openstack loadbalancer listener create --protocol TCP --protocol-port 80 --name vport2 lb2
Result --> listener vport2 created with source-NAT pool pool2 configured on it.
Delete lb1
openstack loadbalancer delete lb1
Result --> lb1, pool1 and pool3 got deleted with the following error message:
ERROR a10_octavia.controller.worker.tasks.nat_pool_tasks [-] Failed to delete Nat-pool with name pool2 due to 654376991 This NAT pool is referenced by a Virtual Port. Please remove the binding first.: ACOSException: 654376991 This NAT pool is referenced by a Virtual Port. Please remove the binding first.
Delete vport2
Delete lb2
Result --> vport2, lb2, pool4 and pool2 got deleted