oomichi / try-kubernetes


Trying out Octavia's LB feature #93

Closed oomichi closed 4 years ago

oomichi commented 4 years ago

The environment was built in https://github.com/oomichi/try-kubernetes/issues/68, so now try out the LB feature provided by Octavia.

oomichi commented 4 years ago

The LB instance could be created:

$ openstack loadbalancer create --name lb1 --vip-subnet-id 43ed897b-3c10-4d5c-8f6d-263edcd817c7
$ openstack loadbalancer list
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
| id                                   | name | project_id                       | vip_address   | provisioning_status | provider |
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
| 3598102c-f4ee-4a0d-971a-e0f30a5c3108 | lb1  | 682e74f275fe427abd9eb6759f3b68c5 | 192.168.1.102 | ACTIVE              | octavia  |
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
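
LB provisioning is asynchronous, so before attaching anything it is worth waiting for provisioning_status to reach ACTIVE. A minimal polling sketch with the same CLI (the -c/-f output flags are standard openstackclient options):

$ while [ "$(openstack loadbalancer show lb1 -c provisioning_status -f value)" != "ACTIVE" ]; do sleep 5; done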

Next, connect the VMs or containers that will serve as the backend. Octavia CLI usage is documented at https://docs.openstack.org/python-octaviaclient/pike/usage/osc/v2/load-balancer.html. First, try listing the backends → the bare openstack loadbalancer command doesn't seem to do that.
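
For the record, each sub-resource has its own list sub-command under openstack loadbalancer (these are used further down in this thread):

$ openstack loadbalancer listener list
$ openstack loadbalancer pool list
$ openstack loadbalancer member list pool1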

It looks like the intended usage is to register the backends as Members. As expected, a senpai has already written this up: http://www.fraction.jp/log/archives/2016/12/16/openstack-kubernetes According to that blog, once the OpenStack LB integration is enabled in Kubernetes, LB creation, Member creation, and so on in Octavia are handled by the integration side (cloud-provider-openstack). Here, just in case, verify that Octavia works on its own first (to make troubleshooting easier later).
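
For context, once cloud-provider-openstack is wired up, simply exposing a Kubernetes Service of type LoadBalancer is what would drive those Octavia calls automatically. A hypothetical example, not executed in this thread (assumes a deployment named nginx exists):

$ kubectl expose deployment nginx --type=LoadBalancer --port=80
$ kubectl get service nginx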

oomichi commented 4 years ago

Hitting an error:

$ openstack loadbalancer pool list
Unable to establish connection to http://iaas-ctrl:9876/v2.0/lbaas/pools: HTTPConnectionPool(host='iaas-ctrl', port=9876): Max retries exceeded with url: /v2.0/lbaas/pools (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f99bb48e550>: Failed to establish a new connection: [Errno 111] Connection refused',))
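
Connection refused means nothing is listening on the Octavia API port anymore; a quick way to confirm (same netstat style as used later in this thread):

$ sudo netstat -tlnp | grep 9876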

It looks like octavia-api crashed with the error below.

2019-09-06 16:36:06.385 15938 DEBUG octavia.network.drivers.neutron.base [req-cdca6e0b-88cd-4bef-b399-3ca0f22e350e - 682e74f275fe427abd9eb6759f3b68c5 - default default] Neutron extension dns-integration is not enabled _check_extension_enabled /usr/local/lib/python2.7/dist-packages/octavia/network/drivers/neutron/base.py:68
2019-09-06 16:36:06.505 15938 DEBUG octavia.network.drivers.neutron.base [req-cdca6e0b-88cd-4bef-b399-3ca0f22e350e - 682e74f275fe427abd9eb6759f3b68c5 - default default] Neutron extension allowed-address-pairs found enabled _check_extension_enabled /usr/local/lib/python2.7/dist-packages/octavia/network/drivers/neutron/base.py:64
2019-09-06 16:36:07.734 15938 INFO octavia.api.v2.controllers.load_balancer [req-cdca6e0b-88cd-4bef-b399-3ca0f22e350e - 682e74f275fe427abd9eb6759f3b68c5 - default default] Sending created Load Balancer 3598102c-f4ee-4a0d-971a-e0f30a5c3108 to the handler
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 59584)
2019-09-09 10:57:12.040 15938 CRITICAL octavia [req-d218c695-b297-43ed-9824-e332fff1bd0a - 682e74f275fe427abd9eb6759f3b68c5 - default default] Unhandled error: IOError: [Errno 5] Input/output error
2019-09-09 10:57:12.040 15938 ERROR octavia Traceback (most recent call last):
2019-09-09 10:57:12.040 15938 ERROR octavia   File "/usr/local/bin/octavia-api", line 10, in <module>
2019-09-09 10:57:12.040 15938 ERROR octavia     sys.exit(main())
2019-09-09 10:57:12.040 15938 ERROR octavia   File "/usr/local/lib/python2.7/dist-packages/octavia/cmd/api.py", line 40, in main
2019-09-09 10:57:12.040 15938 ERROR octavia     srv.serve_forever()
2019-09-09 10:57:12.040 15938 ERROR octavia   File "/usr/lib/python2.7/SocketServer.py", line 233, in serve_forever
2019-09-09 10:57:12.040 15938 ERROR octavia     self._handle_request_noblock()
2019-09-09 10:57:12.040 15938 ERROR octavia   File "/usr/lib/python2.7/SocketServer.py", line 292, in _handle_request_noblock
2019-09-09 10:57:12.040 15938 ERROR octavia     self.handle_error(request, client_address)
2019-09-09 10:57:12.040 15938 ERROR octavia   File "/usr/lib/python2.7/SocketServer.py", line 351, in handle_error
2019-09-09 10:57:12.040 15938 ERROR octavia     traceback.print_exc() # XXX But this goes to stderr!
2019-09-09 10:57:12.040 15938 ERROR octavia   File "/usr/lib/python2.7/traceback.py", line 233, in print_exc
2019-09-09 10:57:12.040 15938 ERROR octavia     print_exception(etype, value, tb, limit, file)
2019-09-09 10:57:12.040 15938 ERROR octavia   File "/usr/lib/python2.7/traceback.py", line 124, in print_exception
2019-09-09 10:57:12.040 15938 ERROR octavia     _print(file, 'Traceback (most recent call last):')
2019-09-09 10:57:12.040 15938 ERROR octavia   File "/usr/lib/python2.7/traceback.py", line 13, in _print
2019-09-09 10:57:12.040 15938 ERROR octavia     file.write(str+terminator)
2019-09-09 10:57:12.040 15938 ERROR octavia IOError: [Errno 5] Input/output error
2019-09-09 10:57:12.040 15938 ERROR octavia

The log doesn't show where the original exception occurred; the IOError is raised while handle_error tries to write the traceback to stderr, which suggests stderr itself went away (e.g. the launching terminal closed) rather than a bug in request handling. For now, restart the API process as a workaround.
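
How octavia-api gets restarted depends on how it is supervised; it runs from /usr/local/bin here (a pip install), so assuming a systemd unit named octavia-api exists, a sketch would be:

$ sudo systemctl restart octavia-api
$ sudo systemctl status octavia-api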

oomichi commented 4 years ago

Checking the details of the already-created LB shows that pools is empty -> in other words, without cloud-provider-openstack (i.e. plain Octavia usage) we apparently need to create the Pool and register the Members ourselves.

$ openstack loadbalancer show 3598102c-f4ee-4a0d-971a-e0f30a5c3108
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| created_at          | 2019-09-06T23:36:05                  |
| description         |                                      |
| flavor              |                                      |
| id                  | 3598102c-f4ee-4a0d-971a-e0f30a5c3108 |
| listeners           |                                      |
| name                | lb1                                  |
| operating_status    | OFFLINE                              |
| pools               |                                      |
| project_id          | 682e74f275fe427abd9eb6759f3b68c5     |
| provider            | octavia                              |
| provisioning_status | ACTIVE                               |
| updated_at          | 2019-09-06T23:38:13                  |
| vip_address         | 192.168.1.102                        |
| vip_network_id      | bfd9fd43-c9b4-43ad-bb67-930c674f2605 |
| vip_port_id         | b43eecf3-e174-442d-8ed5-5e0cc199c994 |
| vip_qos_policy_id   |                                      |
| vip_subnet_id       | 43ed897b-3c10-4d5c-8f6d-263edcd817c7 |
+---------------------+--------------------------------------+

oomichi commented 4 years ago

VM scheduling onto cpu02 and cpu03 fails. From the nova-conductor log:

2019-09-09 16:12:29.470 3250 ERROR nova.scheduler.utils [req-16637ca4-a94a-412a-9e88-6053392020f5 e5e99065fd524f328c2f81e28a6fbc42 682e74f275fe427abd9eb6759f3b68c5 - default default] [instance: 602277ba-9dd7-43db-b348-5539374cd655] Error from last host: iaas-cpu02 (node iaas-cpu02): [
u'Traceback (most recent call last):\n
', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1840, in _do_build_and_run_instance\n
    filter_properties, request_spec)\n
', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2117, in _build_and_run_instance\n
    instance_uuid=instance.uuid, reason=six.text_type(e))\n
', u'RescheduledException: Build of instance 602277ba-9dd7-43db-b348-5539374cd655 was re-scheduled: Binding failed for port 47d00d9e-1b92-41a0-98f9-6639fda17538, please check neutron logs for more information.\n']

The message says to check the Neutron logs, so look at neutron-server.log:

2019-09-09 16:12:26.533 3352 ERROR neutron.plugins.ml2.managers [req-a64456b7-8713-43f0-b201-3d061f42f984 f4cbbc267cc641b7ada951cc0e68b427 aa401f37ccab4190b7f5448189a7344b - default default]
 Failed to bind port 47d00d9e-1b92-41a0-98f9-6639fda17538 on host iaas-cpu02 for vnic_type normal using segments [{'network_id': 'bfd9fd43-c9b4-43ad-bb67-930c674f2605', 'segmentation_id': None, 'physical_network': u'provider', 'id': 'd2b5e2a8-fd30-4589-a5da-a34f446ae84a', 'network_type': u'flat'}]

Log on iaas-cpu02 → is the local_ip setting wrong?

2019-05-02 14:57:11.752 1012 ERROR neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Tunneling cannot be enabled without the local_ip bound to an interface on the host. Please configure local_ip 192.168.1.61 on the host interface to be used for tunneling and restart the agent.

The configuration itself looks correct, though:

$ cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eno1

[vxlan]
enable_vxlan = true
local_ip = 192.168.1.61
l2_population = true
vxlan_group =

[agent]
prevent_arp_spoofing = true

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
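
The agent's complaint is literally that 192.168.1.61 is not bound to any interface when it starts; this can be checked directly (interface name taken from physical_interface_mappings above):

$ ip -4 addr show eno1 | grep 192.168.1.61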

Try rebooting both cpu02 and cpu03. → No good; the Linux bridge agent is still in the XXX (not alive) state.

$ openstack network agent list
+--------------------------------------+----------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type           | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+----------------------+------------+-------------------+-------+-------+---------------------------+
| 1a7faecf-bd6b-44a7-b456-9d56506dcbf8 | Metadata agent       | iaas-ctrl  | None              | :-)   | UP    | neutron-metadata-agent    |
| 2cb40e67-c41c-4172-b742-699dc85451fb | Linux bridge agent   | iaas-cpu02 | None              | XXX   | UP    | neutron-linuxbridge-agent |
| 2ff0a087-636f-413d-9394-d015a5a4f032 | Linux bridge agent   | iaas-cpu03 | None              | XXX   | UP    | neutron-linuxbridge-agent |
| 3c658599-86f3-4fc1-bc2e-0f06cc14d29e | DHCP agent           | iaas-ctrl  | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 3c66d18c-5670-42ab-9fa7-4c4582469b0b | Linux bridge agent   | iaas-ctrl  | None              | :-)   | UP    | neutron-linuxbridge-agent |

According to https://ask.openstack.org/en/question/103199/neutron-linux-bridge-cleanup-fails-on-host-startup/ the host should use a static IP instead of DHCP (a static-IP sketch follows the listing below). But iaas-cpu01 also gets its address via DHCP and works fine, so why? Running $ sudo service neutron-linuxbridge-agent restart on iaas-cpu02/03 brought the agents back up.

$ openstack network agent list
+--------------------------------------+----------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type           | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+----------------------+------------+-------------------+-------+-------+---------------------------+
| 1a7faecf-bd6b-44a7-b456-9d56506dcbf8 | Metadata agent       | iaas-ctrl  | None              | :-)   | UP    | neutron-metadata-agent    |
| 2cb40e67-c41c-4172-b742-699dc85451fb | Linux bridge agent   | iaas-cpu02 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 2ff0a087-636f-413d-9394-d015a5a4f032 | Linux bridge agent   | iaas-cpu03 | None              | :-)   | UP    | neutron-linuxbridge-agent |
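
If the linked advice were followed, the compute hosts would get a static address so the IP is already bound when the agent starts at boot. A sketch for a netplan-based Ubuntu host (file name, prefix length, and gateway are assumptions):

$ cat /etc/netplan/01-eno1.yaml
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: no
      addresses: [192.168.1.61/24]
      gateway4: 192.168.1.1
$ sudo netplan apply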

VMs can now be scheduled onto iaas-cpu02 as well:

$ nova show backend01
+--------------------------------------+------------------------------------------------------------+
| Property                             | Value                                                      |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                     |
| OS-EXT-AZ:availability_zone          | nova                                                       |
| OS-EXT-SRV-ATTR:host                 | iaas-cpu02                                                 |

oomichi commented 4 years ago

Set up nginx on the backend server:

$ sudo apt-get update
$ sudo apt-get install nginx
$ sudo service nginx start
$ sudo netstat -anp | grep nginx
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      12892/nginx: master
tcp6       0      0 :::80                   :::*                    LISTEN      12892/nginx: master
unix  3      [ ]         STREAM     CONNECTED     34290    12892/nginx: master
unix  3      [ ]         STREAM     CONNECTED     34291    12892/nginx: master
$
$ curl http://localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

oomichi commented 4 years ago

Reading the Octavia documentation, a standard HTTP LB is generally built with the following steps.

source/user/guides/basic-cookbook.rst

  1. Create an LB on a public subnet
  2. Create a listener
  3. Create pool for the above listener
  4. Add members of private network to the pool

Add backend01 (192.168.1.117) as a Member.

$ openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1
$ openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
$ openstack loadbalancer member create --subnet-id provider --address 192.168.1.117 --protocol-port 80 pool1
$ openstack loadbalancer member list pool1
+--------------------------------------+------+----------------------------------+---------------------+---------------+---------------+------------------+--------+
| id                                   | name | project_id                       | provisioning_status | address       | protocol_port | operating_status | weight |
+--------------------------------------+------+----------------------------------+---------------------+---------------+---------------+------------------+--------+
| 866fd85a-3574-46f6-b667-a1487f28e1d3 |      | 682e74f275fe427abd9eb6759f3b68c5 | ACTIVE              | 192.168.1.117 |            80 | NO_MONITOR       |      1 |
+--------------------------------------+------+----------------------------------+---------------------+---------------+---------------+------------------+--------+
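
The NO_MONITOR operating_status just means no health monitor is attached to pool1. The basic cookbook adds one as an optional step; a sketch with illustrative parameters:

$ openstack loadbalancer healthmonitor create --name hm1 --delay 5 --timeout 4 --max-retries 3 --type HTTP pool1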

oomichi commented 4 years ago

curl from iaas-ctrl to the LB does not get through. Check the Security Groups for Octavia and backend01. -> Only TCP/443 (https) was allowed. Create a dedicated Security Group and try again.

$ openstack security group create allow-http-https-icmp
$ openstack security group rule create allow-http-https-icmp --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0
$ openstack security group rule create allow-http-https-icmp --protocol icmp
$ openstack security group rule create allow-http-https-icmp --protocol tcp --dst-port 80:80 --remote-ip 0.0.0.0/0
$ openstack security group rule create allow-http-https-icmp --protocol tcp --dst-port 443:443 --remote-ip 0.0.0.0/0

Create a VM with the new security group → curl from iaas-ctrl now works.

$ nova boot --key-name mykey --flavor m1.small --image fc29755b-4468-4951-b7e3-0278b0fb3682 --nic net-name=provider --security-groups allow-http-https-icmp backend01
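
Alternatively, the new group could have been attached to the running VM without recreating it (an untried alternative here):

$ openstack server add security group backend01 allow-http-https-icmp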

Redo the steps from Member addition onward:

$ openstack loadbalancer member create --subnet-id provider --address 192.168.1.101 --protocol-port 80 pool1
$ openstack loadbalancer member list pool1
+--------------------------------------+------+----------------------------------+---------------------+---------------+---------------+------------------+--------+
| id                                   | name | project_id                       | provisioning_status | address       | protocol_port | operating_status | weight |
+--------------------------------------+------+----------------------------------+---------------------+---------------+---------------+------------------+--------+
| 3a22ae37-3917-4957-b544-6c427acf214a |      | 682e74f275fe427abd9eb6759f3b68c5 | ACTIVE              | 192.168.1.101 |            80 | NO_MONITOR       |      1 |
+--------------------------------------+------+----------------------------------+---------------------+---------------+---------------+------------------+--------+

Check the LB address and verify with curl → finally, it works!!

$ openstack loadbalancer list
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
| id                                   | name | project_id                       | vip_address   | provisioning_status | provider |
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
| 3598102c-f4ee-4a0d-971a-e0f30a5c3108 | lb1  | 682e74f275fe427abd9eb6759f3b68c5 | 192.168.1.102 | ACTIVE              | octavia  |
+--------------------------------------+------+----------------------------------+---------------+---------------------+----------+
$ curl http://192.168.1.102
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
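
As a final spot check, repeated requests through the VIP confirm the listener keeps forwarding to the pool; with a second member added, the same loop would also show the ROUND_ROBIN distribution (a sketch):

$ for i in 1 2 3 4; do curl -s -o /dev/null -w '%{http_code}\n' http://192.168.1.102; done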