oomichi / try-kubernetes


Enable LBaaS on OpenStack IaaS #68

Closed oomichi closed 5 years ago

oomichi commented 5 years ago

To use the LB Service type in Kubernetes, first enable LBaaS on the OpenStack IaaS.

Related

How to enable Octavia

  1. Create Octavia service user
    $ openstack user create --domain default --password OCTAVIA_PASS octavia 
    $ openstack role add --project service --user octavia admin

    2.1. Create Octavia Neutron management network

    $ openstack network create lb-mgmt-net
    $ openstack subnet create --subnet-range 192.168.10.0/24 --allocation-pool start=192.168.10.2,end=192.168.10.200 --network lb-mgmt-net lb-mgmt-subnet
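As a quick sanity check on the command above, the allocation pool must lie inside the subnet CIDR; a small Python sketch with the values copied from the command:

```python
# Sanity-check that the lb-mgmt-subnet allocation pool fits inside its CIDR.
import ipaddress

subnet = ipaddress.ip_network("192.168.10.0/24")
start = ipaddress.ip_address("192.168.10.2")
end = ipaddress.ip_address("192.168.10.200")

# Both pool boundaries must be inside the subnet, in order.
assert start in subnet and end in subnet and start < end
print("allocation pool", start, "-", end, "fits inside", subnet)
```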

    2.2. Create Octavia security groups. The ID of the created lb-mgmt-sec-grp is set in controller_worker.amp_secgroup_list in the config file. The created lb-health-mgr-sec-grp is used for the port octavia-health-manager-$OCTAVIA_NODE-listen-port on lb-mgmt-net.

    
    $ openstack security group create lb-mgmt-sec-grp
    $ openstack security group rule create --protocol icmp lb-mgmt-sec-grp
    $ openstack security group rule create --protocol tcp --dst-port 22 lb-mgmt-sec-grp
    $ openstack security group rule create --protocol tcp --dst-port 9443 lb-mgmt-sec-grp

$ openstack security group create lb-health-mgr-sec-grp
$ openstack security group rule create --protocol udp --dst-port 5555 lb-health-mgr-sec-grp

3. Create an Amphora image

$ sudo apt-get install python-pip
$ sudo pip install diskimage-builder
$ sudo apt-get install qemu
$ git clone https://github.com/openstack/octavia
$ cd octavia/
$ ./diskimage-create/diskimage-create.sh
...
2019-03-02 02:56:01.510 | Converting image using qemu-img convert
2019-03-02 02:57:16.436 | Image file /home/ubuntu/octavia/amphora-x64-haproxy.qcow2 created...
2019-03-02 02:57:16.642 | Build completed successfully
$
$ ls amphora-x64-haproxy.qcow2
amphora-x64-haproxy.qcow2

4. Create a Nova flavor for the amphorae
The created flavor is specified in controller_worker.amp_flavor_id in the config file.

$ openstack flavor create --id auto --ram 1024 --disk 2 --vcpus 1 --private m1.amphora

5. Upload the Amphora image into Glance

$ openstack image create --disk-format qcow2 --container-format bare --file ./amphora-x64-haproxy.qcow2 amphora-x64-haproxy

6. Tag the Amphora image with 'amphora'
The tag 'amphora' is what controller_worker.amp_image_tag in the config file refers to.

$ openstack image set --tag amphora amphora-x64-haproxy

7. Install the Octavia software
Ubuntu 16.04 provides no Octavia package, so install it with pip here.
The other components are Pike, so following https://releases.openstack.org/teams/octavia.html, pin Octavia to the Pike release, version 1.0.4.

$ sudo apt-get install python-pip
$ sudo pip install octavia==1.0.4

8. Create TLS certification for communicating with the amphorae

$ sudo useradd octavia
$ sudo mkdir /etc/octavia
$ sudo chown octavia:octavia /etc/octavia
$ cd /tmp/
$ git clone http://github.com/openstack/octavia
$ cd octavia
$ sudo -u octavia bin/create_certificates.sh /etc/octavia/certs /tmp/octavia/etc/certificates/openssl.cnf

9. Create SSH keys for communicating with the amphorae

$ sudo -u octavia mkdir /etc/octavia/.ssh
$ sudo -u octavia ssh-keygen -b 2048 -t rsa -N "" -f /etc/octavia/.ssh/octavia_ssh_key

10. Add the SSH keypair to Nova
The created keypair name is specified in the controller_worker.amp_ssh_key_name setting.

$ openstack keypair create --public-key /etc/octavia/.ssh/octavia_ssh_key.pub octavia_ssh_key

11. Configure Octavia

$ sudo -u octavia cp /tmp/octavia/etc/octavia.conf /etc/octavia/
$ sudo -u octavia vi /etc/octavia/octavia.conf
--- /etc/octavia/octavia.conf.orig      2019-03-08 18:18:05.677071815 -0800
+++ /etc/octavia/octavia.conf   2019-03-11 15:28:57.201571958 -0700
@@ -1,6 +1,6 @@
 [DEFAULT]
 # Print debugging output (set logging level to DEBUG instead of default WARNING level).
-# debug = False
+debug = True
 # Plugin options are hot_plug_plugin (Hot-pluggable controller plugin)
 # octavia_plugins = hot_plug_plugin
@@ -14,10 +14,10 @@
 # transport_url = rabbit://<user>:<pass>@127.0.0.1:5672/<vhost>
 # For HA, specify queue nodes in cluster, comma delimited:
 #   transport_url = rabbit://<user>:<pass>@server01,<user>:<pass>@server02/<vhost>
-# transport_url =
+transport_url = rabbit://openstack:RABBIT_PASS@iaas-ctrl
 [api_settings]
-# bind_host = 127.0.0.1
+bind_host = 0.0.0.0
 # bind_port = 9876
 # api_handler = queue_producer
@@ -61,7 +61,7 @@
 # Replace 127.0.0.1 above with the IP address of the database used by the
 # main octavia server. (Leave it as is if the database runs on this host.)
-# connection = mysql+pymysql://
+connection = mysql+pymysql://octavia:OCTAVIA_DBPASS@iaas-ctrl/octavia
 # NOTE: In deployment the [database] section and its connection attribute may
 # be set in the corresponding core plugin '.ini' file. However, it is suggested
@@ -72,7 +72,7 @@
 # bind_ip = 127.0.0.1
 # bind_port = 5555
 # controller_ip_port_list example: 127.0.0.1:5555, 127.0.0.1:5555
-# controller_ip_port_list =
+controller_ip_port_list = iaas-ctrl:5555
 # failover_threads = 10
 # status_update_threads will default to the number of processors on the host.
 # This setting is deprecated and if you specify health_update_threads and
@@ -108,14 +108,16 @@
 # The www_authenticate_uri is the public endpoint and is returned in headers on a 401
 # www_authenticate_uri = https://localhost:5000/v3
 # The auth_url is the admin endpoint actually used for validating tokens
-# auth_url = https://localhost:5000/v3
-# username = octavia
-# password = password
-# project_name = service
+auth_url = http://iaas-ctrl:5000
+memcached_servers = iaas-ctrl:11211
+auth_type = password
+username = octavia
+password = OCTAVIA_PASS
+project_name = service
 # Domain names must be set, these are not default but work for most clouds
-# project_domain_name = Default
-# user_domain_name = Default
+project_domain_name = default
+user_domain_name = default
 # insecure = False
 # cafile =
@@ -221,15 +223,15 @@
 # Glance parameters to extract image ID to use for amphora. Only one of
 # parameters is needed. Using tags is the recommended way to refer to images.
 # amp_image_id =
-# amp_image_tag =
+amp_image_tag = amphora
 # Optional owner ID used to restrict glance images to one owner ID.
 # This is a recommended security setting.
 # amp_image_owner_id =
 # Nova parameters to use when booting amphora
-# amp_flavor_id =
+amp_flavor_id = 7ef1311f-4b1a-4414-afc1-d77a06d2eafe
 # Upload the ssh key as the service_auth user described elsewhere in this config.
 # Leaving this variable blank will install no ssh key on the amphora.
-# amp_ssh_key_name =
+amp_ssh_key_name = octavia_ssh_key
 # Networks to attach to the Amphorae examples:
 #  - One primary network
@@ -237,25 +239,25 @@
 #  - Multiple networks
 #    - amp_boot_network_list = 11111111-2222-33333-4444-555555555555, 22222222-3333-4444-5555-666666666666
 #  - All networks defined in the list will be attached to each amphora
-# amp_boot_network_list =
+amp_boot_network_list = a1859074-5431-4c72-9134-59c6f6a3470d
-# amp_secgroup_list =
+amp_secgroup_list = 892aafd0-cd72-4b65-98c7-3375e4d9d36b
 # client_ca = /etc/octavia/certs/ca_01.pem
 # Amphora driver options are amphora_noop_driver,
 #                            amphora_haproxy_rest_driver
 #
-# amphora_driver = amphora_noop_driver
+amphora_driver = amphora_haproxy_rest_driver
 #
 # Compute driver options are compute_noop_driver
 #                            compute_nova_driver
 #
-# compute_driver = compute_noop_driver
+compute_driver = compute_nova_driver
 #
 # Network driver options are network_noop_driver
 #                            allowed_address_pairs_driver
 #
-# network_driver = network_noop_driver
+network_driver = allowed_address_pairs_driver
 #
 # Distributor driver options are distributor_noop_driver
 #                                single_VIP_amphora
@@ -280,7 +282,7 @@
 # rpc_thread_pool_size = 2
 # Topic (i.e. Queue) Name
-# topic = octavia_prov
+topic = octavia_prov
 # Topic for octavia's events sent to a queue
 # event_stream_topic = neutron_lbaas_event

12. Create database for Octavia

$ sudo mysql

CREATE DATABASE octavia CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON octavia.* TO 'octavia'@'localhost' IDENTIFIED BY 'OCTAVIA_DBPASS';
GRANT ALL PRIVILEGES ON octavia.* TO 'octavia'@'%' IDENTIFIED BY 'OCTAVIA_DBPASS';
exit
$
$ sudo vi /etc/octavia/octavia.conf

-connection =
+connection = mysql+pymysql://octavia:OCTAVIA_DBPASS@iaas-ctrl/octavia
$
$ sudo -u octavia /usr/local/bin/octavia-db-manage upgrade head

13. Launch Octavia controller
TODO: Use service scripts

$ /usr/local/bin/octavia-api &
$ /usr/local/bin/octavia-health-manager &
$ /usr/local/bin/octavia-housekeeping &
$ /usr/local/bin/octavia-worker &
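The TODO above (run the controller processes as services instead of background jobs) could be handled with systemd units. This is an untested sketch for one of the four processes, assuming the pip-installed /usr/local/bin paths and the octavia user created in step 8; the --config-file flag is optional since oslo.config reads /etc/octavia/octavia.conf by default:

```ini
# /etc/systemd/system/octavia-worker.service (hypothetical sketch)
[Unit]
Description=Octavia Worker
After=network.target rabbitmq-server.service mysql.service

[Service]
User=octavia
ExecStart=/usr/local/bin/octavia-worker --config-file /etc/octavia/octavia.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable with `systemctl enable --now octavia-worker`, and repeat analogously for octavia-api, octavia-health-manager, and octavia-housekeeping.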

    14. Configure Neutron

    $ sudo vi /etc/neutron/neutron_lbaas.conf
    --- /etc/neutron/neutron_lbaas.conf.orig        2019-03-11 15:57:15.310733549 -0700
    +++ /etc/neutron/neutron_lbaas.conf     2019-03-11 15:59:17.712574620 -0700
    @@ -211,4 +211,4 @@
     # Defines providers for advanced services using the format:
     # <service_type>:<name>:<driver>[:default] (multi valued)
    -service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
    +service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default
    $
    $ sudo vi /etc/neutron/neutron.conf
    --- /etc/neutron/neutron.conf.orig      2019-03-11 16:00:29.585655677 -0700
    +++ /etc/neutron/neutron.conf   2019-03-11 16:01:22.390449925 -0700
    @@ -30,3 +30,5 @@
     [agent]
     root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
    +
    +[octavia]
    +base_url = http://iaas-ctrl:9876

15. Test Octavia

$ sudo apt-get install python-octaviaclient


0. Create Octavia service and endpoints

$ openstack service create --name octavia --description "Octavia Load Balancing Service" load-balancer
$ openstack endpoint create --region RegionOne load-balancer public http://iaas-ctrl:9876
$ openstack endpoint create --region RegionOne load-balancer internal http://iaas-ctrl:9876
$ openstack endpoint create --region RegionOne load-balancer admin http://iaas-ctrl:9876
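The three endpoint-create calls above differ only in the interface name; a small Python sketch that generates them from one loop to avoid typos (http://iaas-ctrl:9876 is this deployment's Octavia URL):

```python
# Generate the three endpoint-create commands, which share one URL.
BASE_URL = "http://iaas-ctrl:9876"

cmds = [
    f"openstack endpoint create --region RegionOne load-balancer {iface} {BASE_URL}"
    for iface in ("public", "internal", "admin")
]
print("\n".join(cmds))
```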

oomichi commented 5 years ago

Judging from https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/openstack-kubernetes-integration-options.md#external-openstack-provider, is Neutron LBaaS v2 more appropriate here than Octavia?

Scenarios tested:

    External LBaaS with Neutron LBaaSv2
    Internal LBaaS with Neutron LBaaSv2
    LVM / iSCSI with Cinder
    Ceph / RBD with Cinder

TODO:

    Test LBaaS scenarios with Octavia
oomichi commented 5 years ago

Hmm, according to https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-octavia-ingress-controller.md, Catalyst Cloud apparently uses Octavia.

oomichi commented 5 years ago

Analyze octavia-ingress-controller. It creates a single LB on OpenStack and shares it among multiple Services on the Kubernetes side (Ingress).

oomichi commented 5 years ago

k8s.io/cloud-provider-openstack/pkg/ingress/cmd is the implementation entry point; it calls controller.NewController().Start():

// rootCmd represents the base command when called without any subcommands
var rootCmd = &cobra.Command{
        Use:   "ingress-openstack",
        Short: "Ingress controller for OpenStack",
        Long:  `Ingress controller for OpenStack`,

        Run: func(cmd *cobra.Command, args []string) {
                osIngress := controller.NewController(conf)
                osIngress.Start()

                sigterm := make(chan os.Signal, 1)
                signal.Notify(sigterm, syscall.SIGTERM)
                signal.Notify(sigterm, syscall.SIGINT)
                <-sigterm
        },
}

k8s.io/cloud-provider-openstack/pkg/ingress/controller

oomichi commented 5 years ago

How to enable Octavia

In a production environment the LB is typically an appliance product, so the procedure would differ. In DevStack, a VM running the software LB HAProxy is launched per Octavia LB object and configured as needed. openstack/octavia/devstack/settings:

OCTAVIA_AMP_FLAVOR_ID=${OCTAVIA_AMP_FLAVOR_ID:-"10"}
OCTAVIA_AMP_IMAGE_NAME=${OCTAVIA_AMP_IMAGE_NAME:-"amphora-x64-haproxy"}
OCTAVIA_AMP_IMAGE_FILE=${OCTAVIA_AMP_IMAGE_FILE:-${OCTAVIA_DIR}/diskimage-create/${OCTAVIA_AMP_IMAGE_NAME}.qcow2}
OCTAVIA_AMP_IMAGE_TAG="amphora"
oomichi commented 5 years ago

Creating the HAProxy amphora image: https://github.com/oomichi/try-kubernetes/issues/76

oomichi commented 5 years ago

After the image is built, upload_image registers it in Glance with openstack image create.

$ ls amphora-x64-haproxy.qcow2
amphora-x64-haproxy.qcow2
$ openstack image create --disk-format qcow2 --container-format bare --file ./amphora-x64-haproxy.qcow2 amphora-x64-haproxy
$
$ openstack image list
+--------------------------------------+---------------------+--------+
| ID                                   | Name                | Status |
+--------------------------------------+---------------------+--------+
| 60408625-0466-4a31-9246-31dcac191cc9 | CentOS-7-x86_64     | active |
| 73f70800-1d0c-4569-a3c5-29c70775c334 | Ubuntu-16.04-x86_64 | active |
| 5477210b-a550-4512-a1b1-387cd9e5b715 | amphora-x64-haproxy | active |
+--------------------------------------+---------------------+--------+
oomichi commented 5 years ago

create_octavia_accounts

function create_octavia_accounts {
    create_service_user $OCTAVIA

    # Increase the service account secgroups quota
    # This is imporant for concurrent tempest testing
    openstack quota set --secgroups 100 $SERVICE_PROJECT_NAME

    local octavia_service=$(get_or_create_service "octavia" \
        $OCTAVIA_SERVICE_TYPE "Octavia Load Balancing Service")

    if [[ "$WSGI_MODE" == "uwsgi" ]] && [[ "$OCTAVIA_NODE" == "main" ]] ; then
        get_or_create_endpoint $octavia_service \
            "$REGION_NAME" \
            "$OCTAVIA_PROTOCOL://$SERVICE_HOST:$OCTAVIA_PORT/$OCTAVIA_SERVICE_TYPE" \
            "$OCTAVIA_PROTOCOL://$SERVICE_HOST:$OCTAVIA_PORT/$OCTAVIA_SERVICE_TYPE" \
            "$OCTAVIA_PROTOCOL://$SERVICE_HOST:$OCTAVIA_PORT/$OCTAVIA_SERVICE_TYPE"
    elif [[ "$WSGI_MODE" == "uwsgi" ]]; then
        get_or_create_endpoint $octavia_service \
            "$REGION_NAME" \
            "$OCTAVIA_PROTOCOL://$SERVICE_HOST/$OCTAVIA_SERVICE_TYPE" \
            "$OCTAVIA_PROTOCOL://$SERVICE_HOST/$OCTAVIA_SERVICE_TYPE" \
            "$OCTAVIA_PROTOCOL://$SERVICE_HOST/$OCTAVIA_SERVICE_TYPE"
    else
        get_or_create_endpoint $octavia_service \
            "$REGION_NAME" \
            "$OCTAVIA_PROTOCOL://$SERVICE_HOST:$OCTAVIA_PORT/" \
            "$OCTAVIA_PROTOCOL://$SERVICE_HOST:$OCTAVIA_PORT/" \
            "$OCTAVIA_PROTOCOL://$SERVICE_HOST:$OCTAVIA_PORT/"
    fi
}
  1. Create the octavia user
  2. Create the octavia service
  3. Create the octavia endpoints (Configure Keystone for Octavia)
    $ openstack user create --domain default --password OCTAVIA_PASS octavia 
    $ openstack role add --project service --user octavia admin
    $ openstack service create --name octavia --description "Octavia Load Balancing Service" load-balancer
    $ openstack endpoint create --region RegionOne load-balancer public http://iaas-ctrl:9876
    $ openstack endpoint create --region RegionOne load-balancer internal http://iaas-ctrl:9876
    $ openstack endpoint create --region RegionOne load-balancer admin http://iaas-ctrl:9876
oomichi commented 5 years ago

The current OpenStack IaaS is Queens on Ubuntu 16.04. There is no Octavia package for Ubuntu 16.04!? According to https://releases.openstack.org/teams/octavia.html#queens, Octavia for Queens is version 2.0.x. As of June 2018 I tried to run OpenStack IaaS on Ubuntu 18.04, but the upgrade failed because packages were missing: https://github.com/oomichi/try-kubernetes/issues/27. They might exist by now.

oomichi commented 5 years ago

Before a clean install on Ubuntu 18.04, try building with pip. Reference:

oomichi commented 5 years ago

openstack network create lb-mgmt-net fails; this is handled in https://github.com/oomichi/try-kubernetes/issues/77.

$ openstack network create lb-mgmt-net
Error while executing command: HttpException: Unknown error, {"NeutronError": {"message": "Unable to create the network. No tenant network is available for allocation.", "type": "NoNetworkAvailable", "detail": ""}}

Following https://ask.openstack.org/en/question/85338/neutron-net-create-unable-to-create-the-network-no-tenant-network-is-available-for-allocation/, check /var/log/neutron/neutron-server.log:

2019-03-05 19:04:22.055 2332 INFO neutron.wsgi [-] 127.0.0.1 "GET / HTTP/1.1" status: 200  len: 251 time: 0.0005620
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation [req-0c891103-2cdc-4785-912a-797dbe9df1e7 e5e99065fd524f328c2f81e28a6fbc42 682e74f275fe427abd9eb6759f3b68c5 - default default] POST failed.: NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation Traceback (most recent call last):
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/pecan/core.py", line 683, in __call__
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     self.invoke_controller(controller, args, kwargs, state)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/pecan/core.py", line 574, in invoke_controller
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     result = controller(*args, **kwargs)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 91, in wrapped
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     setattr(e, '_RETRY_EXCEEDED', True)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     self.force_reraise()
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     six.reraise(self.type_, self.value, self.tb)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 87, in wrapped
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     return f(*args, **kwargs)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 147, in wrapper
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     ectxt.value = e.inner_exc
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     self.force_reraise()
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     six.reraise(self.type_, self.value, self.tb)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 135, in wrapper
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     return f(*args, **kwargs)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 126, in wrapped
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     LOG.debug("Retry wrapper got retriable exception: %s", e)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     self.force_reraise()
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     six.reraise(self.type_, self.value, self.tb)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 122, in wrapped
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     return f(*dup_args, **dup_kwargs)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/neutron/pecan_wsgi/controllers/utils.py", line 76, in wrapped
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     return f(*args, **kwargs)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/neutron/pecan_wsgi/controllers/resource.py", line 159, in post
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     return self.create(resources)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/neutron/pecan_wsgi/controllers/resource.py", line 177, in create
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     return {key: creator(*creator_args, **creator_kwargs)}
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/neutron/common/utils.py", line 627, in inner
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     return f(self, context, *args, **kwargs)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 161, in wrapped
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     return method(*args, **kwargs)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 91, in wrapped
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     setattr(e, '_RETRY_EXCEEDED', True)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     self.force_reraise()
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     six.reraise(self.type_, self.value, self.tb)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 87, in wrapped
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     return f(*args, **kwargs)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 147, in wrapper
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     ectxt.value = e.inner_exc
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     self.force_reraise()
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     six.reraise(self.type_, self.value, self.tb)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 135, in wrapper
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     return f(*args, **kwargs)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 126, in wrapped
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     LOG.debug("Retry wrapper got retriable exception: %s", e)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     self.force_reraise()
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     six.reraise(self.type_, self.value, self.tb)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/neutron/db/api.py", line 122, in wrapped
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     return f(*dup_args, **dup_kwargs)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 837, in create_network
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     result, mech_context = self._create_network_db(context, network)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 796, in _create_network_db
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     tenant_id)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 209, in create_network_segments
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     segment = self._allocate_tenant_net_segment(context)
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 272, in _allocate_tenant_net_segment
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation     raise exc.NoNetworkAvailable()
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation NoNetworkAvailable: Unable to create the network. No tenant network is available for allocation.
2019-03-05 19:04:22.508 2332 ERROR neutron.pecan_wsgi.hooks.translation
2019-03-05 19:04:22.809 2332 INFO neutron.wsgi [req-0c891103-2cdc-4785-912a-797dbe9df1e7 e5e99065fd524f328c2f81e28a6fbc42 682e74f275fe427abd9eb6759f3b68c5 - default default] 127.0.0.1 "POST /v2.0/networks HTTP/1.1" status: 503  len: 369 time: 0.7519429

ml2_conf.ini must be reviewed.
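NoNetworkAvailable is raised when ML2 has no tenant-network segment pool to allocate from (see the _allocate_tenant_net_segment frame in the traceback above). A hedged example of the kind of ml2_conf.ini settings to check; the VXLAN ranges here are illustrative, not this deployment's actual values:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative values)
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan

[ml2_type_vxlan]
# Without a VNI range, tenant network creation fails with NoNetworkAvailable
vni_ranges = 1:1000
```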

oomichi commented 5 years ago

List of settings configured by DevStack

--- DONE list --------------------------------------------------
api_settings api_handler queue_producer  << this is the default, so no need to set
api_settings bind_host 0.0.0.0  << default is 127.0.0.1, so set this just in case
api_settings bind_port ${OCTAVIA_HA_PORT}
controller_worker amp_active_retries 100  << default 30, use the default for now
controller_worker amp_active_wait_sec 2  << default 10, use the default for now
controller_worker amp_boot_network_list ${OCTAVIA_AMP_NETWORK_ID}  << set the ID of the lb-mgmt-net network
controller_worker amp_flavor_id $amp_flavor_id  << set the flavor for amphorae
controller_worker amphora_driver ${OCTAVIA_AMPHORA_DRIVER}  << set amphora_haproxy_rest_driver
controller_worker amp_image_owner_id ${owner_id}  << works without it, so use the default for now
controller_worker amp_image_tag ${OCTAVIA_AMP_IMAGE_TAG}  << specify amphora
controller_worker amp_secgroup_list ${OCTAVIA_MGMT_SEC_GRP_ID}  << specify the created secgroup
controller_worker amp_ssh_key_name ${OCTAVIA_AMP_SSH_KEY_NAME}  << specify the created keypair
controller_worker compute_driver ${OCTAVIA_COMPUTE_DRIVER}  << specify compute_nova_driver
controller_worker loadbalancer_topology ${OCTAVIA_LB_TOPOLOGY}  << keep the default SINGLE
controller_worker network_driver ${OCTAVIA_NETWORK_DRIVER}  << specify allowed_address_pairs_driver
controller_worker workers 2   << keep the default 1
database connection "mysql+pymysql://${DATABASE_USER}:${DATABASE_PASSWORD}@${DATABASE_HOST}:3306/octavia" << connection = mysql+pymysql://octavia:OCTAVIA_DBPASS@iaas-ctrl/octavia
DEFAULT debug $ENABLE_DEBUG_LOG_LEVEL  << set True (DevStack itself sets this)
DEFAULT transport_url $(get_transport_url)  << set rabbit://openstack:RABBIT_PASS@iaas-ctrl
haproxy_amphora client_cert ${OCTAVIA_CERTS_DIR}/client.pem  << use the default value /etc/octavia/certs/client.pem
haproxy_amphora connection_max_retries 1500  << use the default value 120
haproxy_amphora connection_retry_interval 1  << use the default value 5
haproxy_amphora rest_request_conn_timeout ${OCTAVIA_AMP_CONN_TIMEOUT}   << same as the default value 10
haproxy_amphora rest_request_read_timeout ${OCTAVIA_AMP_READ_TIMEOUT}  << 120 versus the default 60; use the default
oslo_messaging rpc_thread_pool_size 2  << default 60; use the default
oslo_messaging topic octavia_prov   << must be set
haproxy_amphora server_ca ${OCTAVIA_CERTS_DIR}/ca_01.pem  << set /etc/octavia/certs/ca_01.pem
health_manager bind_ip $MGMT_PORT_IP   << change to 0.0.0.0
health_manager bind_port $OCTAVIA_HM_LISTEN_PORT  << same as the default 5555; use the default
health_manager controller_ip_port_list $MGMT_PORT_IP:$OCTAVIA_HM_LISTEN_PORT   << set iaas-ctrl:5555
health_manager heartbeat_key ${OCTAVIA_HEALTH_KEY}  << try leaving it unset (the default)
house_keeping amphora_expiry_age ${OCTAVIA_AMP_EXPIRY_AGE}  << use the default 604800
house_keeping load_balancer_expiry_age ${OCTAVIA_LB_EXPIRY_AGE} << use the default 604800
service_auth auth_type password
service_auth auth_url $OS_AUTH_URL                << http://iaas-ctrl:5000
service_auth cafile $SSL_BUNDLE_FILE                << unnecessary?
keystone_authtoken auth_url http://iaas-ctrl:5000
keystone_authtoken memcached_servers iaas-ctrl:11211
keystone_authtoken auth_type password
keystone_authtoken project_domain_name default
keystone_authtoken user_domain_name default
keystone_authtoken project_name service
keystone_authtoken username octavia
keystone_authtoken password OCTAVIA_PASS

--- NOT DONE list --------------------------------------------------
certificates ca_certificate ${OCTAVIA_CERTS_DIR}/ca_01.pem
certificates ca_private_key ${OCTAVIA_CERTS_DIR}/private/cakey.pem
certificates ca_private_key_passphrase foobar
certificates server_certs_key_passphrase insecure-key-do-not-use-this-key

On hold: most projects set only keystone_authtoken and do not set service_auth. DevStack sets both for Octavia. Is service_auth really necessary? It was referenced in neutron-lbaas but does not appear to be used in Octavia. Example: http://git.openstack.org/cgit/openstack/neutron-lbaas/tree/neutron_lbaas/common/keystone.py#n180 It is used in octavia/certificates/common/auth/barbican_acl.py, but since Barbican integration is not used this time, it is unnecessary.

service_auth memcached_servers $SERVICE_HOST:11211                             <<iaas-ctrl:11211
service_auth password $OCTAVIA_PASSWORD                                             <<OCTAVIA_PASS
service_auth project_domain_name $OCTAVIA_PROJECT_DOMAIN_NAME  <<default
service_auth project_name $OCTAVIA_PROJECT_NAME                                <<service
service_auth user_domain_name $OCTAVIA_USER_DOMAIN_NAME             <<default
service_auth username $OCTAVIA_USERNAME                                               <<octavia
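The keystone_authtoken values in the DONE list above can be rendered into an octavia.conf fragment mechanically; a Python sketch using configparser, with the values copied from the list (OCTAVIA_PASS is the same placeholder used throughout this issue):

```python
# Render the [keystone_authtoken] settings from the list above as INI text.
import configparser
import io

cfg = configparser.ConfigParser()
cfg["keystone_authtoken"] = {
    "auth_url": "http://iaas-ctrl:5000",
    "memcached_servers": "iaas-ctrl:11211",
    "auth_type": "password",
    "project_domain_name": "default",
    "user_domain_name": "default",
    "project_name": "service",
    "username": "octavia",
    "password": "OCTAVIA_PASS",  # placeholder, as elsewhere in this issue
}

buf = io.StringIO()
cfg.write(buf)
print(buf.getvalue())
```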
oomichi commented 5 years ago
After applying the Neutron settings and restarting the neutron-lbaasv2-agent service, it failed to start as follows:
    2019-03-11 16:04:26.055 25716 INFO neutron.common.config [-] Logging enabled!
    2019-03-11 16:04:26.055 25716 INFO neutron.common.config [-] /usr/bin/neutron-lbaasv2-agent version 12.0.2
    2019-03-11 16:04:26.055 25716 WARNING neutron_lbaas.agent.agent [-] neutron-lbaas is now deprecated. See: https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation
    2019-03-11 16:04:26.056 25716 WARNING stevedore.named [req-634d5fa9-782c-408c-a330-ca01d8ea6a42 - - - - -] Could not load neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver

    Why does loading HaproxyNSDriver fail when it is not even specified in /etc/neutron/neutron_lbaas.conf? According to https://serverascode.com/2017/08/11/install-openstack-octavia-loadbalancer.html:

    Ensure neutron-lbaasv2-agent is Stopped and Disabled
    It’s not used with Octavia and must not be running.

    So the agent not running is actually the correct state.

oomichi commented 5 years ago

Test: to run openstack loadbalancer create, install python-octaviaclient, the load-balancer plugin for the openstack CLI.

$ sudo apt-get install python-octaviaclient

Run load balancer creation:

$ openstack loadbalancer create --name lb1 --vip-subnet-id provider
An auth plugin is required to determine endpoint URL (HTTP 500) (Request-ID: req-2f197b33-bc13-4c6b-9d50-6bae262c1e90)
oomichi commented 5 years ago
2019-03-11 16:48:23.480 25014 ERROR wsme.api [req-842713ee-7bd7-432f-9fe0-a0c396f3201e - 682e74f275fe427abd9eb6759f3b68c5 - default default] Server-side error: "An auth plugin is required to determine endpoint URL". Detail:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/wsmeext/pecan.py", line 84, in callfunction
    result = f(self, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/octavia/api/v2/controllers/load_balancer.py", line 231, in post
    self._validate_vip_request_object(load_balancer)
  File "/usr/local/lib/python2.7/dist-packages/octavia/api/v2/controllers/load_balancer.py", line 194, in _validate_vip_request_object
    subnet_id=load_balancer.vip_subnet_id)
  File "/usr/local/lib/python2.7/dist-packages/octavia/common/validate.py", line 240, in subnet_exists
    network_driver = utils.get_network_driver()
  File "/usr/local/lib/python2.7/dist-packages/octavia/common/utils.py", line 51, in get_network_driver
    invoke_on_load=True
  File "/usr/lib/python2.7/dist-packages/stevedore/driver.py", line 61, in __init__
    warn_on_missing_entrypoint=warn_on_missing_entrypoint
  File "/usr/lib/python2.7/dist-packages/stevedore/named.py", line 81, in __init__
    verify_requirements)
  File "/usr/lib/python2.7/dist-packages/stevedore/extension.py", line 203, in _load_plugins
    self._on_load_failure_callback(self, ep, err)
  File "/usr/lib/python2.7/dist-packages/stevedore/extension.py", line 195, in _load_plugins
    verify_requirements,
  File "/usr/lib/python2.7/dist-packages/stevedore/named.py", line 158, in _load_one_plugin
    verify_requirements,
  File "/usr/lib/python2.7/dist-packages/stevedore/extension.py", line 227, in _load_one_plugin
    obj = plugin(*invoke_args, **invoke_kwds)
  File "/usr/local/lib/python2.7/dist-packages/octavia/network/drivers/neutron/allowed_address_pairs.py", line 45, in __init__
    super(AllowedAddressPairsDriver, self).__init__()
  File "/usr/local/lib/python2.7/dist-packages/octavia/network/drivers/neutron/base.py", line 45, in __init__
    self.sec_grp_enabled = self._check_extension_enabled(SEC_GRP_EXT_ALIAS)
  File "/usr/local/lib/python2.7/dist-packages/octavia/network/drivers/neutron/base.py", line 59, in _check_extension_enabled
    self.neutron_client.show_extension(extension_alias)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 780, in show_extension
    return self.get(self.extension_path % ext_alias, params=_params)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 354, in get
    headers=headers, params=params)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 331, in retry_request
    headers=headers, params=params)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 282, in do_request
    headers=headers)
  File "/usr/lib/python2.7/dist-packages/neutronclient/client.py", line 342, in do_request
    self._check_uri_length(url)
  File "/usr/lib/python2.7/dist-packages/neutronclient/client.py", line 335, in _check_uri_length
    uri_len = len(self.endpoint_url) + len(url)
  File "/usr/lib/python2.7/dist-packages/neutronclient/client.py", line 349, in endpoint_url
    return self.get_endpoint()
  File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 223, in get_endpoint
    return self.session.get_endpoint(auth or self.auth, **kwargs)
  File "/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 940, in get_endpoint
    auth = self._auth_required(auth, 'determine endpoint URL')
  File "/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 880, in _auth_required
    raise exceptions.MissingAuthPlugin(msg_fmt % msg)

MissingAuthPlugin: An auth plugin is required to determine endpoint URL

Reading the code where the failure occurs, it looks like it needs information including the Neutron endpoint. Does this have to be held in Octavia's own config? /usr/local/lib/python2.7/dist-packages/octavia/network/drivers/neutron/base.py:

 33 class BaseNeutronDriver(base.AbstractNetworkDriver):
 34
 35     def __init__(self):
 36         self.neutron_client = clients.NeutronAuth.get_neutron_client(
 37             endpoint=CONF.neutron.endpoint,
 38             region=CONF.neutron.region_name,
 39             endpoint_type=CONF.neutron.endpoint_type,
 40             service_name=CONF.neutron.service_name,
 41             insecure=CONF.neutron.insecure,
 42             ca_cert=CONF.neutron.ca_certificates_file
 43         )
 44         self._check_extension_cache = {}
 45         self.sec_grp_enabled = self._check_extension_enabled(SEC_GRP_EXT_ALIAS)
 46         self.dns_integration_enabled = self._check_extension_enabled(
 47             DNS_INT_EXT_ALIAS)

nova.conf carries the same kind of information (below), so this is probably the right direction. However, looking at the Octavia code above, it appears to support only CA-certificate options for TLS...

[neutron]
+ url = http://iaas-ctrl:9696
+ auth_url = http://iaas-ctrl:5000
+ auth_type = password
+ project_domain_name = default
+ user_domain_name = default
+ region_name = RegionOne
+ project_name = service
+ username = neutron
+ password = NEUTRON_PASS
+ service_metadata_proxy = true
+ metadata_proxy_shared_secret = METADATA_SECRET

For now, add the settings above and retry:

$ /usr/local/bin/octavia-api > /tmp/octavia-api.log &
$ /usr/local/bin/octavia-health-manager > /tmp/octavia-health-manager.log &
$ /usr/local/bin/octavia-housekeeping > /tmp/octavia-housekeeping.log &
$ /usr/local/bin/octavia-worker > /tmp/octavia-worker.log &
oomichi commented 5 years ago

No good, still failing. Check Octavia's Neutron client setup code:

http://git.openstack.org/cgit/openstack/octavia/tree/octavia/common/clients.py#n101

        if not cls.neutron_client:
            kwargs = {'region_name': region,
                      'session': ksession.get_session(),
                      'endpoint_type': endpoint_type,
                      'insecure': insecure}
            if service_name:
                kwargs['service_name'] = service_name
            if endpoint:
                kwargs['endpoint_override'] = endpoint
            if ca_cert:
                kwargs['ca_cert'] = ca_cert
            try:
                cls.neutron_client = neutron_client.Client(
                    NEUTRON_VERSION, **kwargs)

And on the neutronclient side, we can see that password-based authentication is also supported:

 27 def make_client(instance):
 28     """Returns an neutron client."""
 29     neutron_client = utils.get_client_class(
 30         API_NAME,
 31         instance._api_version[API_NAME],
 32         API_VERSIONS,
 33     )
 34     instance.initialize()
 35     url = instance._url
 36     url = url.rstrip("/")
 37     client = neutron_client(username=instance._username,
 38                             project_name=instance._project_name,
 39                             password=instance._password,
 40                             region_name=instance._region_name,
 41                             auth_url=instance._auth_url,
 42                             endpoint_url=url,
 43                             endpoint_type=instance._endpoint_type,
 44                             token=instance._token,
 45                             auth_strategy=instance._auth_strategy,
 46                             insecure=instance._insecure,
 47                             ca_cert=instance._ca_cert,
 48                             retries=instance._retries,
 49                             raise_errors=instance._raise_errors,
 50                             session=instance._session,
 51                             auth=instance._auth)
 52     return client
 53
 54
 55 def Client(api_version, *args, **kwargs):
 56     """Return an neutron client.
 57
 58     @param api_version: only 2.0 is supported now
 59     """
 60     neutron_client = utils.get_client_class(
 61         API_NAME,
 62         api_version,
 63         API_VERSIONS,
 64     )
 65     return neutron_client(*args, **kwargs)

Next, check Nova's Neutron client setup code:

 177 def get_client(context, admin=False):
 178     auth_plugin = _get_auth_plugin(context, admin=admin)
 179     session = _get_session()
 180     client_args = dict(session=session,
 181                        auth=auth_plugin,
 182                        global_request_id=context.global_id)
 183
 184     if CONF.neutron.url:
 185         # TODO(efried): Remove in Rocky
 186         client_args = dict(client_args,
 187                            endpoint_override=CONF.neutron.url,
 188                            # NOTE(efried): The legacy behavior was to default
 189                            # region_name in the conf.
 190                            region_name=CONF.neutron.region_name or 'RegionOne')
 191     else:
 192         # The new way
 193         # NOTE(efried): We build an adapter
 194         #               to pull conf options
 195         #               to pass to neutronclient
 196         #               which uses them to build an Adapter.
 197         # This should be unwound at some point.
 198         adap = utils.get_ksa_adapter(
 199             'network', ksa_auth=auth_plugin, ksa_session=session)
 200         client_args = dict(client_args,
 201                            service_type=adap.service_type,
 202                            service_name=adap.service_name,
 203                            interface=adap.interface,
 204                            region_name=adap.region_name,
 205                            endpoint_override=adap.endpoint_override)
 206
 207     return ClientWrapper(clientv20.Client(**client_args),
 208                          admin=admin or context.is_admin)
...
 169 def _get_session():
 170     global _SESSION
 171     if not _SESSION:
 172         _SESSION = ks_loading.load_session_from_conf_options(
 173             CONF, nova.conf.neutron.NEUTRON_GROUP)
 174     return _SESSION

Compare the two.

oomichi commented 5 years ago

To begin with, the configuration option names in Nova differ from those in Octavia. Nova:

[neutron]
url = http://iaas-ctrl:9696
auth_url = http://iaas-ctrl:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET

Octavia

[neutron]
# The name of the neutron service in the keystone catalog
# service_name =
# Custom neutron endpoint if override is necessary
# endpoint =

# Region in Identity service catalog to use for communication with the
# OpenStack services.
# region_name =

# Endpoint type in Identity service catalog to use for communication with
# the OpenStack services.
# endpoint_type = publicURL

# CA certificates file to verify neutron connections when TLS is enabled
# insecure = False
# ca_certificates_file =
oomichi commented 5 years ago

In principle this should work without any of these settings, since the endpoint should be pulled from the Keystone catalog. Check the DevStack code.
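The catalog assumption can also be checked directly from the CLI; a sketch (run with admin credentials loaded):

```shell
# If the network service is registered correctly, Octavia should be able to
# discover Neutron's endpoint from here without an explicit [neutron] endpoint.
openstack endpoint list --service network
openstack catalog show network
```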

The octavia.conf that DevStack generated:

[DEFAULT]
transport_url = rabbit://stackrabbit:secretrabbit@162.242.235.132:5672/
debug = True
logging_exception_prefix = ERROR %(name)s %(instance)s
logging_default_format_string = %(color)s%(levelname)s %(name)s [-%(color)s] %(instance)s%(color)s%(message)s
logging_context_format_string = %(color)s%(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(project_name)s %(user_name)s%(color)s] %(instance)s%(color)s%(message)s
logging_debug_format_suffix = {{(pid=%(process)d) %(funcName)s %(pathname)s:%(lineno)d}}

[api_settings]
api_handler = queue_producer
bind_host = 162.242.235.132

[database]
connection = mysql+pymysql://root:secretmysql@127.0.0.1:3306/octavia

[health_manager]
bind_port = 5555
bind_ip = 192.168.0.77
controller_ip_port_list = 192.168.0.77:5555
heartbeat_key =insecure

[keystone_authtoken]
memcached_servers = localhost:11211
signing_dir =
cafile = /opt/stack/data/ca-bundle.pem
project_domain_name = Default
project_name = service
user_domain_name = Default
password = secretservice
username = octavia
auth_url = http://162.242.235.132/identity
auth_type = password

[certificates]
server_certs_key_passphrase = insecure-key-do-not-use-this-key
ca_private_key_passphrase = foobar
ca_private_key = /etc/octavia/certs/private/cakey.pem
ca_certificate = /etc/octavia/certs/ca_01.pem

[haproxy_amphora]
server_ca = /etc/octavia/certs/ca_01.pem
client_cert = /etc/octavia/certs/client.pem
base_path = /var/lib/octavia
base_cert_dir = /var/lib/octavia/certs
connection_max_retries = 1500
connection_retry_interval = 1
rest_request_conn_timeout = 10
rest_request_read_timeout = 120

[controller_worker]
amp_image_owner_id = 8a2e9308389744ea8c96b250a214bb74
amp_secgroup_list = 0909a555-7658-47f3-b2e8-5b13a819834c
amp_flavor_id = ac0f6d79-7142-4937-9e79-385ec1326b43
amp_boot_network_list = 2bea94e8-6b1e-45f5-95ae-5188c10996f2
amp_ssh_key_name = octavia_ssh_key
amp_image_tag = amphora
network_driver = allowed_address_pairs_driver
compute_driver = compute_nova_driver
amphora_driver = amphora_haproxy_rest_driver
workers = 2
amp_active_retries = 100
amp_active_wait_sec = 2
loadbalancer_topology = SINGLE

[oslo_messaging]
topic = octavia_prov
rpc_thread_pool_size = 2

[house_keeping]
load_balancer_expiry_age = 3600
amphora_expiry_age = 3600

[service_auth]
memcached_servers = 162.242.235.132:11211
cafile = /opt/stack/data/ca-bundle.pem
project_domain_name = Default
project_name = admin
user_domain_name = Default
password = secretadmin
username = admin
auth_type = password
auth_url = http://162.242.235.132/identity
oomichi commented 5 years ago

On the authentication side there is also service_auth, which has not been configured here:

245     # Ensure config is set up properly for authentication as admin
246     iniset $OCTAVIA_CONF service_auth auth_url $OS_AUTH_URL
247     iniset $OCTAVIA_CONF service_auth auth_type password
248     iniset $OCTAVIA_CONF service_auth username $OCTAVIA_USERNAME
249     iniset $OCTAVIA_CONF service_auth password $OCTAVIA_PASSWORD
250     iniset $OCTAVIA_CONF service_auth user_domain_name $OCTAVIA_USER_DOMAIN_NAME
251     iniset $OCTAVIA_CONF service_auth project_name $OCTAVIA_PROJECT_NAME
252     iniset $OCTAVIA_CONF service_auth project_domain_name $OCTAVIA_PROJECT_DOMAIN_NAME
253     iniset $OCTAVIA_CONF service_auth cafile $SSL_BUNDLE_FILE
254     iniset $OCTAVIA_CONF service_auth memcached_servers $SERVICE_HOST:11211

Judging from the comment, this is used for authenticating as admin; if it is set, will authentication to all services succeed? Configure it as follows, based on the values in adminrc:

[service_auth]
memcached_servers = iaas-ctrl:11211
username = admin
password = ADMIN_PASS
project_name = admin
auth_type = password
user_domain_name = Default
project_domain_name = Default
auth_url = http://iaas-ctrl:5000/v3

Now it works:

$ openstack loadbalancer create --name lb1 --vip-subnet-id provider
127.0.0.1 - - [21/Mar/2019 15:12:25] "POST /v2.0/lbaas/loadbalancers HTTP/1.1" 201 573
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| created_at          | 2019-03-21T22:12:22                  |
| description         |                                      |
| flavor              |                                      |
| id                  | 15986585-5c56-43d9-80e5-2a67c0c5810d |
| listeners           |                                      |
| name                | lb1                                  |
| operating_status    | OFFLINE                              |
| pools               |                                      |
| project_id          | 682e74f275fe427abd9eb6759f3b68c5     |
| provider            | octavia                              |
| provisioning_status | PENDING_CREATE                       |
| updated_at          | None                                 |
| vip_address         | 192.168.1.111                        |
| vip_network_id      | bfd9fd43-c9b4-43ad-bb67-930c674f2605 |
| vip_port_id         | 62589cb8-5c3f-41b0-bf0d-6884aa38fd0b |
| vip_qos_policy_id   |                                      |
| vip_subnet_id       | 43ed897b-3c10-4d5c-8f6d-263edcd817c7 |
+---------------------+--------------------------------------+
oomichi commented 5 years ago

In reality it has failed...

$ openstack loadbalancer list
127.0.0.1 - - [21/Mar/2019 15:14:54] "GET /v2.0/lbaas/loadbalancers HTTP/1.1" 200 611
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
| id                                   | name | project_id                       | vip_address   | provisioning_status | provider |
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
| 15986585-5c56-43d9-80e5-2a67c0c5810d | lb1  | 682e74f275fe427abd9eb6759f3b68c5 | 192.168.1.111 | ERROR               | octavia  |
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+

octavia-worker log:

2019-03-21 15:12:28.055 17818 WARNING octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.lifecycle_tasks.LoadBalancerIDToE
rrorOnRevertTask' (3f13e4b1-31e5-4f46-b94f-8eb1f25f5524) transitioned into state 'REVERTED' from state 'REVERTING' with result 'None'
2019-03-21 15:12:28.066 17818 WARNING octavia.controller.worker.controller_worker [-] Flow 'octavia-create-loadbalancer-flow' (c9e9c6dd-9dad-43cd-a60e-f3b75b
cd8a04) transitioned into state 'REVERTED' from state 'RUNNING'
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server [-] Exception during message handling: CertificateGenerationException: Could not sign the certi
ficate request: Failed to load CA Private Key /etc/ssl/private/ssl-cert-snakeoil.key.
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/octavia/controller/queue/endpoint.py", line 44, in create_load_balancer
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server     self.worker.create_load_balancer(load_balancer_id)
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/octavia/controller/worker/controller_worker.py", line 289, in create_load_balancer
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server     create_lb_tf.run()
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py", line 247, in run
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server     for _state in self.run_iter(timeout=timeout):
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/engine.py", line 340, in run_iter
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server     failure.Failure.reraise_if_any(er_failures)
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/taskflow/types/failure.py", line 336, in reraise_if_any
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server     failures[0].reraise()
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/taskflow/types/failure.py", line 343, in reraise
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server     six.reraise(*self._exc_info)
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server     result = task.execute(**arguments)
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/octavia/controller/worker/tasks/cert_task.py", line 46, in execute
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server     validity=CERT_VALIDITY)
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/octavia/certificates/generator/local.py", line 234, in generate_cert_key_pair
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server     cert = cls.sign_cert(csr, validity, **kwargs)
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/octavia/certificates/generator/local.py", line 91, in sign_cert
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server     cls._validate_cert(ca_cert, ca_key, ca_key_pass)
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/octavia/certificates/generator/local.py", line 62, in _validate_cert
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server     .format(CONF.certificates.ca_private_key)
2019-03-21 15:12:28.066 17818 ERROR oslo_messaging.rpc.server CertificateGenerationException: Could not sign the certificate request: Failed to load CA Private Key /etc/ssl/private/ssl-cert-snakeoil.key.
oomichi commented 5 years ago

/etc/ssl/private/ssl-cert-snakeoil.key comes from the defaults set below:

 25 TLS_CERT_DEFAULT = os.environ.get(
 26     'OS_OCTAVIA_TLS_CA_CERT', '/etc/ssl/certs/ssl-cert-snakeoil.pem'
 27 )
 28 TLS_KEY_DEFAULT = os.environ.get(
 29     'OS_OCTAVIA_TLS_CA_KEY', '/etc/ssl/private/ssl-cert-snakeoil.key'
 30 )

These are the default values of the ca_certificate and ca_private_key options, so, following DevStack's example, set the following:

[certificates]
ca_certificate = /etc/octavia/certs/ca_01.pem
ca_private_key = /etc/octavia/certs/private/cakey.pem
oomichi commented 5 years ago

Still failing:

2019-03-21 16:45:32.239 21194 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
2019-03-21 16:45:32.239 21194 ERROR oslo_messaging.rpc.server     result = task.execute(**arguments)
2019-03-21 16:45:32.239 21194 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/octavia/controller/worker/tasks/cert_task.py", line 46, in execute
2019-03-21 16:45:32.239 21194 ERROR oslo_messaging.rpc.server     validity=CERT_VALIDITY)
2019-03-21 16:45:32.239 21194 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/octavia/certificates/generator/local.py", line 234, in generate_cert_key_pair
2019-03-21 16:45:32.239 21194 ERROR oslo_messaging.rpc.server     cert = cls.sign_cert(csr, validity, **kwargs)
2019-03-21 16:45:32.239 21194 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/octavia/certificates/generator/local.py", line 168, in sign_cert
2019-03-21 16:45:32.239 21194 ERROR oslo_messaging.rpc.server     raise exceptions.CertificateGenerationException(msg=e)
2019-03-21 16:45:32.239 21194 ERROR oslo_messaging.rpc.server CertificateGenerationException: Could not sign the certificate request: Password was not given but private key is encrypted

Cause identified: octavia/bin/create_certificates.sh creates the CA key with the passphrase hard-coded to foobar:

 52 echo "Create the CA's private and public keypair (2k long)"
 53 openssl genrsa -passout pass:foobar -des3 -out private/cakey.pem 2048

This passphrase must be passed in via the config option, so the [certificates] section becomes:

[certificates]
ca_certificate = /etc/octavia/certs/ca_01.pem
ca_private_key = /etc/octavia/certs/private/cakey.pem
ca_private_key_passphrase = foobar
oomichi commented 5 years ago

It now gets as far as creating the VM:

$ openstack loadbalancer create --name lb1 --vip-subnet-id provider
$ openstack loadbalancer list
127.0.0.1 - - [21/Mar/2019 16:55:09] "GET /v2.0/lbaas/loadbalancers HTTP/1.1" 200 603
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
| id                                   | name | project_id                       | vip_address   | provisioning_status | provider |
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
| ed4a84fe-fe09-4b8d-a1b0-7157682f8bbb | lb1  | 682e74f275fe427abd9eb6759f3b68c5 | 192.168.1.110 | PENDING_CREATE      | octavia  |
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
$ nova list
+--------------------------------------+----------------------------------------------+--------+------------+-------------+----------------------------+
| ID                                   | Name                                         | Status | Task State | Power State | Networks                   |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+----------------------------+
| c25bb00f-0cf6-47d4-8e69-b4a41975fd31 | amphora-71d9f6e3-edc6-49ff-988a-6b644c78e43e | ACTIVE | -          | Running     | lb-mgmt-net=192.168.10.108 |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+----------------------------+

However, it still ended up in ERROR...

$ openstack loadbalancer list
127.0.0.1 - - [21/Mar/2019 16:55:59] "GET /v2.0/lbaas/loadbalancers HTTP/1.1" 200 611
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
| id                                   | name | project_id                       | vip_address   | provisioning_status | provider |
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
| ed4a84fe-fe09-4b8d-a1b0-7157682f8bbb | lb1  | 682e74f275fe427abd9eb6759f3b68c5 | 192.168.1.110 | ERROR               | octavia  |
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
oomichi commented 5 years ago

octavia-worker log:

2019-03-21 16:55:36.420 21644 ERROR oslo_messaging.rpc.server     self).cert_verify(conn, url, verify, cert)
2019-03-21 16:55:36.420 21644 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 226, in cert_verify
2019-03-21 16:55:36.420 21644 ERROR oslo_messaging.rpc.server     "invalid path: {0}".format(cert_loc))
2019-03-21 16:55:36.420 21644 ERROR oslo_messaging.rpc.server IOError: Could not find a suitable TLS CA certificate bundle, invalid path: /etc/octavia/certs/server_ca.pem

This path is the default value of the following option:

307     cfg.StrOpt('server_ca', default='/etc/octavia/certs/server_ca.pem',
308                help=_("The ca which signed the server certificates")),

Set it as follows:

[haproxy_amphora]
server_ca = /etc/octavia/certs/ca_01.pem
oomichi commented 5 years ago

LB creation continues to fail:

$ openstack loadbalancer list
127.0.0.1 - - [21/Mar/2019 17:29:03] "GET /v2.0/lbaas/loadbalancers HTTP/1.1" 200 611
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
| id                                   | name | project_id                       | vip_address   | provisioning_status | provider |
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
| 48543fc7-ee30-4a32-b0dc-1dd37bc34244 | lb1  | 682e74f275fe427abd9eb6759f3b68c5 | 192.168.1.111 | ERROR               | octavia  |
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+

Worse, this time Octavia writes no log at all... what is changing the status to ERROR?

Eventually, connection-error entries did appear in the octavia-worker log:

2019-03-21 18:01:48.597 24609 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.: ConnectTimeout: HTTPSConnectionPool(host='192.168.10.103', port=9443): Max retries exceeded with url: /0.5/plug/vip/192.168.1.107 (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f2c83109750>, 'Connection to 192.168.10.103 timed out. (connect timeout=10.0)'))
2019-03-21 18:02:03.612 24609 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.: ConnectTimeout: HTTPSConnectionPool(host='192.168.10.103', port=9443): Max retries exceeded with url: /0.5/plug/vip/192.168.1.107 (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f2c830f1090>, 'Connection to 192.168.10.103 timed out. (connect timeout=10.0)'))
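Before digging into Neutron, basic reachability from the controller can be probed by hand; a sketch (the address 192.168.10.103 is taken from the log above, and 9443 is the amphora agent's REST API port):

```shell
# ICMP first, then the amphora agent's HTTPS port; -k because the agent uses
# the Octavia-generated certificates, -m to bound the connect wait.
ping -c 3 192.168.10.103
curl -k -m 5 https://192.168.10.103:9443/ || echo "amphora API unreachable"
```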
oomichi commented 5 years ago

octavia-worker must be able to reach the amphora VM on the lb-mgmt-net network. The 192.168.10.1 and 192.168.1.109 ports of router lb-mgmt-router are DOWN:

$ openstack port list
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------+--------+
| ID                                   | Name | MAC Address       | Fixed IP Addresses                                                            | Status |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------+--------+
| 2f8abc44-9f5b-45f6-a7ee-ce3221aa4347 |      | fa:16:3e:ee:6c:45 | ip_address='192.168.1.109', subnet_id='43ed897b-3c10-4d5c-8f6d-263edcd817c7'  | DOWN   |
| f2fa5c09-75d4-488c-b1b6-29a5f54da765 |      | fa:16:3e:87:f3:0b | ip_address='192.168.10.1', subnet_id='9b9f57fc-d967-4376-afd2-c581798ec1ab'   | DOWN   |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------+--------+

Perhaps neutron-l3-agent needed to be running?

$ sudo apt-get install neutron-l3-agent
oomichi commented 5 years ago

Create a router connecting lb-mgmt-net and provider:

$ openstack router create router01
$ openstack router set router01 --external-gateway provider
$ openstack router add subnet router01 lb-mgmt-subnet

Having done that, the ports are DOWN:

$ openstack port list
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------+--------+
| ID                                   | Name | MAC Address       | Fixed IP Addresses                                                            | Status |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------+--------+
| 10ffe24e-1e49-4735-83f5-59a4fb121b46 |      | fa:16:3e:fc:7e:e3 | ip_address='192.168.1.115', subnet_id='43ed897b-3c10-4d5c-8f6d-263edcd817c7'  | DOWN   |
| a5a6a80f-cae2-468e-9945-3ae5c096670c |      | fa:16:3e:cb:83:0b | ip_address='192.168.10.1', subnet_id='9b9f57fc-d967-4376-afd2-c581798ec1ab'   | DOWN   |
| c69f8bef-900c-4753-a99a-960d44bee60b |      | fa:16:3e:cf:01:be | ip_address='192.168.1.107', subnet_id='43ed897b-3c10-4d5c-8f6d-263edcd817c7'  | ACTIVE |
| f233ccef-a53e-46d6-8479-c80b25b26887 |      | fa:16:3e:3a:57:13 | ip_address='192.168.1.100', subnet_id='43ed897b-3c10-4d5c-8f6d-263edcd817c7'  | ACTIVE |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------+--------+
oomichi commented 5 years ago

In https://review.opendev.org/#/c/672842/, no router is created; instead, an ip link (veth pair) is used. This issue is getting long, so the remaining work is tracked in https://github.com/oomichi/try-kubernetes/issues/91.

      $ openstack network create lb-mgmt-net
      $ openstack subnet create --subnet-range $OCTAVIA_MGMT_SUBNET \
        --allocation-pool start=$OCTAVIA_MGMT_SUBNET_START,\
        end=$OCTAVIA_MGMT_SUBNET_END --network lb-mgmt-net lb-mgmt-subnet

      $ SUBNET_ID=$(openstack subnet show lb-mgmt-subnet -f value -c id)
      $ PORT_FIXED_IP="--fixed-ip subnet=$SUBNET_ID,ip-address=$OCTAVIA_MGMT_PORT_IP"

      $ MGMT_PORT_ID=$(openstack port create --security-group \
        lb-health-mgr-sec-grp --device-owner Octavia:health-mgr \
        --host=$(hostname) -c id -f value --network lb-mgmt-net \
        $PORT_FIXED_IP octavia-health-manager-listen-port)

      $ MGMT_PORT_MAC=$(openstack port show -c mac_address -f value \
        $MGMT_PORT_ID)
      $ MGMT_PORT_IP=$(openstack port show -f value -c fixed_ips \
        $MGMT_PORT_ID | awk '{FS=",| "; gsub(",",""); gsub("'\''",""); \
        for(i = 1; i <= NF; ++i) {if ($i ~ /^ip_address/) {n=index($i, "="); \
        if (substr($i, n+1) ~ "\\.") print substr($i, n+1)}}}')

      $ sudo ip link add o-hm0 type veth peer name o-bhm0
      $ NETID=$(openstack network show lb-mgmt-net -c id -f value)
      $ BRNAME=brq$(echo $NETID|cut -c 1-11)
      $ sudo brctl addif $BRNAME o-bhm0
      $ sudo ip link set o-bhm0 up

      $ sudo ip link set dev o-hm0 address $MGMT_PORT_MAC
      $ sudo iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT
      $ sudo dhclient -v o-hm0
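Two helper steps from the script above can be reproduced locally with sample values (the UUID and fixed_ips string below are illustrative, not taken from a live cloud):

```shell
# 1. Linux bridge name: neutron-linuxbridge names the bridge "brq" plus
#    the first 11 characters of the network UUID.
NETID="9b9f57fc-d967-4376-afd2-c581798ec1ab"   # sample network UUID
BRNAME=brq$(echo "$NETID" | cut -c 1-11)
echo "$BRNAME"    # brq9b9f57fc-d9

# 2. Extracting the IPv4 ip_address from the port's fixed_ips field,
#    as the awk pipeline in the script does.
FIXED_IPS="ip_address='192.168.10.5', subnet_id='9b9f57fc-d967-4376-afd2-c581798ec1ab'"
echo "$FIXED_IPS" | awk '{
  gsub(",", ""); gsub("'\''", "")              # drop commas and quotes
  for (i = 1; i <= NF; ++i)                    # scan whitespace-separated fields
    if ($i ~ /^ip_address/) {
      n = index($i, "=")
      if (substr($i, n+1) ~ "\\.")             # keep IPv4 only (contains a dot)
        print substr($i, n+1)
    }
}'
```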
oomichi commented 5 years ago

It has been so long that I have forgotten the details...

$ sudo su - octavia
$ /usr/local/bin/octavia-api > /tmp/octavia-api.log &
$ /usr/local/bin/octavia-health-manager > /tmp/octavia-health-manager.log &
$ /usr/local/bin/octavia-housekeeping > /tmp/octavia-housekeeping.log &
$ /usr/local/bin/octavia-worker > /tmp/octavia-worker.log &

As before, it got as far as the VM being created:

$ openstack loadbalancer list
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
| id                                   | name | project_id                       | vip_address   | provisioning_status | provider |
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
| 735a0736-78ea-45e5-bfcf-407a471bcf2d | lb1  | 682e74f275fe427abd9eb6759f3b68c5 | 192.168.0.112 | PENDING_CREATE      | octavia  |
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
$ nova list
+--------------------------------------+----------------------------------------------+--------+------------+-------------+------------------------+
| ID                                   | Name                                         | Status | Task State | Power State | Networks               |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+------------------------+
| b91eb38f-5fcd-40fa-afbf-b0ccea8620bf | amphora-fda48790-2147-45e0-81ee-e2c7b54a43e6 | BUILD  | spawning   | NOSTATE     |                        |
| 7212e083-b18e-424f-b96d-54c1c773f067 | e2e                                          | ACTIVE | -          | Running     | provider=192.168.1.106 |
| 96a3e787-5a55-4f3f-818a-ba18a1faffd0 | k8s-master                                   | ACTIVE | -          | Running     | provider=192.168.1.118 |
| 0d104a77-c6ef-4ec0-8ebc-fea96cc16aea | k8s-node01                                   | ACTIVE | -          | Running     | provider=192.168.1.104 |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+------------------------+

However, although the VM itself is ACTIVE, the load balancer is still stuck in PENDING_CREATE:

$ nova list
+--------------------------------------+----------------------------------------------+--------+------------+-------------+---------------------------+
| ID                                   | Name                                         | Status | Task State | Power State | Networks                  |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+---------------------------+
| b91eb38f-5fcd-40fa-afbf-b0ccea8620bf | amphora-fda48790-2147-45e0-81ee-e2c7b54a43e6 | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.105 |
| 7212e083-b18e-424f-b96d-54c1c773f067 | e2e                                          | ACTIVE | -          | Running     | provider=192.168.1.106    |
| 96a3e787-5a55-4f3f-818a-ba18a1faffd0 | k8s-master                                   | ACTIVE | -          | Running     | provider=192.168.1.118    |
| 0d104a77-c6ef-4ec0-8ebc-fea96cc16aea | k8s-node01                                   | ACTIVE | -          | Running     | provider=192.168.1.104    |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+---------------------------+
$ openstack loadbalancer list
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
| id                                   | name | project_id                       | vip_address   | provisioning_status | provider |
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
| 735a0736-78ea-45e5-bfcf-407a471bcf2d | lb1  | 682e74f275fe427abd9eb6759f3b68c5 | 192.168.0.112 | PENDING_CREATE      | octavia  |
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+

Ping does reach the VM on lb-mgmt-net:

$ ping 192.168.0.105
PING 192.168.0.105 (192.168.0.105) 56(84) bytes of data.
64 bytes from 192.168.0.105: icmp_seq=1 ttl=64 time=0.647 ms
64 bytes from 192.168.0.105: icmp_seq=2 ttl=64 time=0.467 ms
64 bytes from 192.168.0.105: icmp_seq=3 ttl=64 time=0.749 ms
64 bytes from 192.168.0.105: icmp_seq=4 ttl=64 time=0.600 ms
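ICMP alone is not enough, though: the controller also has to reach TCP 9443, the amphora REST API port that the worker talks to next. A quick reachability sketch (the `/dev/tcp` check is my own, not part of Octavia; AMP_IP defaults to the amphora address from the listing above):

```shell
# Check that the amphora REST API port (TCP 9443) answers from here.
AMP_IP=${AMP_IP:-192.168.0.105}
if timeout 3 bash -c "exec 3<>/dev/tcp/$AMP_IP/9443" 2>/dev/null; then
    echo "9443 reachable"
else
    echo "9443 unreachable"
fi
```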
oomichi commented 5 years ago

It took a while, but the load balancer became ACTIVE. Judging from the logs below, connecting to the instance is what took the time:

2019-09-06 16:22:38.368 15972 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.: ConnectTimeout: HTTPSConnectionPool(host='192.168.0.105', port=9443): Max retries exceeded with url: /0.5/plug/vip/192.168.0.112 (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7ff2df235350>, 'Connection to 192.168.0.105 timed out. (connect timeout=10.0)'))
2019-09-06 16:22:53.375 15972 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.: ConnectTimeout: HTTPSConnectionPool(host='192.168.0.105', port=9443): Max retries exceeded with url: /0.5/plug/vip/192.168.0.112 (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7ff2df2354d0>, 'Connection to 192.168.0.105 timed out. (connect timeout=10.0)'))
2019-09-06 16:22:58.781 15972 DEBUG octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connected to amphora. Response: <Response [202]> request /usr/local/lib/python2.7/dist-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py:280
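As the log shows, the driver simply retries the amphora's REST endpoint until it answers. The pattern can be sketched generically (the `retry_connect` helper, host, and retry counts here are illustrative, not Octavia's actual parameters):

```shell
# Retry a TCP connection until it succeeds or the attempts run out,
# mimicking the "Could not connect to instance. Retrying." loop above.
retry_connect() {
    local host=$1 port=$2 tries=$3 i
    for i in $(seq 1 "$tries"); do
        if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
            echo "Connected to amphora."
            return 0
        fi
        echo "Could not connect to instance. Retrying. ($i/$tries)"
        sleep 1
    done
    return 1
}
retry_connect 192.168.0.105 9443 3 || true
```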
oomichi commented 5 years ago

Created a load balancer with its VIP on the provider subnet → as expected, creation started on 192.168.1.0/24 (the provider network) → Nova created a VM with interfaces on both the management network (lb-mgmt-net) and the external network (provider). All as expected.

$ openstack loadbalancer create --name lb1 --vip-subnet-id 43ed897b-3c10-4d5c-8f6d-263edcd817c7
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| created_at          | 2019-09-06T23:36:05                  |
| description         |                                      |
| flavor              |                                      |
| id                  | 3598102c-f4ee-4a0d-971a-e0f30a5c3108 |
| listeners           |                                      |
| name                | lb1                                  |
| operating_status    | OFFLINE                              |
| pools               |                                      |
| project_id          | 682e74f275fe427abd9eb6759f3b68c5     |
| provider            | octavia                              |
| provisioning_status | PENDING_CREATE                       |
| updated_at          | None                                 |
| vip_address         | 192.168.1.102                        |
| vip_network_id      | bfd9fd43-c9b4-43ad-bb67-930c674f2605 |
| vip_port_id         | b43eecf3-e174-442d-8ed5-5e0cc199c994 |
| vip_qos_policy_id   |                                      |
| vip_subnet_id       | 43ed897b-3c10-4d5c-8f6d-263edcd817c7 |
+---------------------+--------------------------------------+
$ nova list
+--------------------------------------+----------------------------------------------+--------+------------+-------------+---------------------------------------------------+
| ID                                   | Name                                         | Status | Task State | Power State | Networks                                          |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+---------------------------------------------------+
| 68438a58-e4e4-4c3b-9dcb-6afa26f522da | amphora-400519ec-4b39-401c-bcb0-a544589ac568 | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.111; provider=192.168.1.109 |
...
$ openstack loadbalancer list
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
| id                                   | name | project_id                       | vip_address   | provisioning_status | provider |
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
| 3598102c-f4ee-4a0d-971a-e0f30a5c3108 | lb1  | 682e74f275fe427abd9eb6759f3b68c5 | 192.168.1.102 | ACTIVE              | octavia  |
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
oomichi commented 5 years ago

The setup itself is complete, so closing this issue.

/close