Closed: oomichi closed this issue 2 years ago
I am following the steps in https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-octavia-ingress-controller.md, but at the point of https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-octavia-ingress-controller.md#create-an-ingress-resource no Octavia instance has been created, so the integration does not appear to be working. To begin with, the octavia-ingress-controller Pod does not come up properly:
octavia-ingress-controller-0 0/1 CrashLoopBackOff 12 37m
The logs show that the name iaas-ctrl cannot be resolved:
$ kubectl logs octavia-ingress-controller-0 -n kube-system
time="2019-09-11T02:39:41Z" level=info msg="Using config file" file=/etc/config/octavia-ingress-controller-config.yaml
W0911 02:39:41.179058 1 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
time="2019-09-11T02:39:41Z" level=fatal msg="failed to initialize openstack client" error="Post http://iaas-ctrl:5000/v3/auth/tokens: dial tcp: lookup iaas-ctrl on 8.8.4.4:53: no such host"
Change auth-url in the ConfigMap to an IP address:
- auth-url: http://iaas-ctrl:5000/v3
+ auth-url: http://192.168.1.1:5000/v3
Apply and run:
$ kubectl apply -f configmap.yaml
configmap/octavia-ingress-controller-config configured
$ kubectl apply -f deployment.yaml
statefulset.apps/octavia-ingress-controller created
It now fails to start for a different reason:
$ kubectl logs -n kube-system octavia-ingress-controller-0
time="2019-09-11T02:45:27Z" level=info msg="Using config file" file=/etc/config/octavia-ingress-controller-config.yaml
W0911 02:45:27.419657 1 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
E0911 02:45:27.769762 1 runtime.go:69] Observed a panic: &runtime.TypeAssertionError{_interface:(*runtime._type)(0x13a7360), concrete:(*runtime._type)(0x154c0a0), asserted:(*runtime._type)(0x154d720), missingMethod:""} (interface conversion: interface {} is *v1beta1.Ingress, not *v1beta1.Ingress (types from different packages))
goroutine 53 [running]:
k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x13deec0, 0xc00025e9f0)
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65 +0x7b
k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:47 +0x82
panic(0x13deec0, 0xc00025e9f0)
/usr/local/go/src/runtime/panic.go:522 +0x1b5
k8s.io/cloud-provider-openstack/pkg/ingress/controller.NewController.func1(0x154c0a0, 0xc000276000)
/home/zuul/src/k8s.io/cloud-provider-openstack/pkg/ingress/controller/controller.go:242 +0x3bc
k8s.io/cloud-provider-openstack/vendor/k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/client-go/tools/cache/controller.go:196
k8s.io/cloud-provider-openstack/vendor/k8s.io/client-go/tools/cache.(*processorListener).run.func1.1(0xc0002f6a80, 0xc000189180, 0x40b8a2)
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/client-go/tools/cache/shared_informer.go:642 +0x26d
k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff(0x989680, 0x3ff0000000000000, 0x3fb999999999999a, 0x5, 0x0, 0xc0004bae38, 0x42ac1f, 0xc0001891b0)
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:284 +0x51
k8s.io/cloud-provider-openstack/vendor/k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/client-go/tools/cache/shared_informer.go:636 +0x79
k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00005df68)
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x54
k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0004baf68, 0xdf8475800, 0x0, 0x175e701, 0xc0003a26c0)
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
k8s.io/cloud-provider-openstack/vendor/k8s.io/client-go/tools/cache.(*processorListener).run(0xc00040e300)
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/client-go/tools/cache/shared_informer.go:634 +0x9c
k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000388840, 0xc000466c70)
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x4f
created by k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:69 +0x62
panic: interface conversion: interface {} is *v1beta1.Ingress, not *v1beta1.Ingress (types from different packages) [recovered]
panic: interface conversion: interface {} is *v1beta1.Ingress, not *v1beta1.Ingress (types from different packages)
goroutine 53 [running]:
k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:54 +0x105
panic(0x13deec0, 0xc00025e9f0)
/usr/local/go/src/runtime/panic.go:522 +0x1b5
k8s.io/cloud-provider-openstack/pkg/ingress/controller.NewController.func1(0x154c0a0, 0xc000276000)
/home/zuul/src/k8s.io/cloud-provider-openstack/pkg/ingress/controller/controller.go:242 +0x3bc
k8s.io/cloud-provider-openstack/vendor/k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/client-go/tools/cache/controller.go:196
k8s.io/cloud-provider-openstack/vendor/k8s.io/client-go/tools/cache.(*processorListener).run.func1.1(0xc0002f6a80, 0xc000189180, 0x40b8a2)
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/client-go/tools/cache/shared_informer.go:642 +0x26d
k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff(0x989680, 0x3ff0000000000000, 0x3fb999999999999a, 0x5, 0x0, 0xc00019be38, 0x42ac1f, 0xc0001891b0)
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:284 +0x51
k8s.io/cloud-provider-openstack/vendor/k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/client-go/tools/cache/shared_informer.go:636 +0x79
k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00005df68)
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x54
k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00019bf68, 0xdf8475800, 0x0, 0x175e701, 0xc0003a26c0)
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
k8s.io/cloud-provider-openstack/vendor/k8s.io/client-go/tools/cache.(*processorListener).run(0xc00040e300)
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/client-go/tools/cache/shared_informer.go:634 +0x9c
k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000388840, 0xc000466c70)
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x4f
created by k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/home/zuul/src/k8s.io/cloud-provider-openstack/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:69 +0x62
This has already been filed as cloud-provider-openstack/issues/754.
panic: interface conversion: interface {} is *v1beta1.Ingress, not *v1beta1.Ingress (types from different packages)
From this message, the definition of v1beta1.Ingress apparently differs between the two packages.
First, track down where v1beta1.Ingress is originally defined. It appears to be the Ingress type in
k8s.io/api/networking/v1beta1/types.go
The latest code:
28 // Ingress is a collection of rules that allow inbound connections to reach the
29 // endpoints defined by a backend. An Ingress can be configured to give services
30 // externally-reachable urls, load balance traffic, terminate SSL, offer name
31 // based virtual hosting etc.
32 type Ingress struct {
33 metav1.TypeMeta `json:",inline"`
34 // Standard object's metadata.
35 // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
36 // +optional
37 metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
38
39 // Spec is the desired state of the Ingress.
40 // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
41 // +optional
42 Spec IngressSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
43
44 // Status is the current state of the Ingress.
45 // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
46 // +optional
47 Status IngressStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
48 }
Since this file was added in February 2019, the only change to it has been the comment-only commit below from the end of August.
commit 3ba189080b94f7c2179db28484fa92584dd7a596
Author: misakazhou <misakazhou@tencent.com>
Date: Thu Aug 29 08:35:16 2019 +0800
Fix broken link to api-conventions doc.
Signed-off-by: misakazhou <misakazhou@tencent.com>
Kubernetes-commit: f0323a2030c7adae0e0965a7d3b455dd416472a0
diff --git a/networking/v1beta1/types.go b/networking/v1beta1/types.go
index 63bf2d52..37277bf8 100644
--- a/networking/v1beta1/types.go
+++ b/networking/v1beta1/types.go
@@ -32,17 +32,17 @@ import (
type Ingress struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Spec is the desired state of the Ingress.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec IngressSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
// Status is the current state of the Ingress.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Status IngressStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
}
@@ -53,7 +53,7 @@ type Ingress struct {
type IngressList struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
Can a mere comment change really cause a type mismatch!?
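It cannot. In Go, a defined type's identity includes its package import path, so two structs that are both named v1beta1.Ingress but live under different (e.g. vendored) import paths are distinct types even if their fields are identical, and asserting one package's *v1beta1.Ingress on an object decoded as the other's panics exactly like this. A rough Python analogy (hypothetical names) of the same-name/different-type situation:

```python
# Hypothetical stand-ins for two packages that each define a class named "Ingress".
ExtIngress = type("Ingress", (), {})  # like one vendored v1beta1 package
NetIngress = type("Ingress", (), {})  # like the other v1beta1 package

obj = ExtIngress()
print(obj.__class__.__name__ == NetIngress.__name__)  # True: the names match
print(isinstance(obj, NetIngress))                    # False: not the same type
```

So the comment-only commit is a red herring; what matters is which import path the controller and its vendored client libraries each compiled the Ingress type against.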
Changing the container image from latest to v1.15.0 reportedly fixes it. -> It did indeed work:
$ kubectl -n kube-system get statefulset
NAME READY AGE
octavia-ingress-controller 1/1 24s
The following error still occurs. → This is probably because the Neutron endpoint is retrieved from Keystone and is registered there by hostname; name resolution has to work on the octavia-ingress-controller side as well:
$ kubectl -n kube-system logs octavia-ingress-controller-0
time="2019-09-11T22:29:42Z" level=info msg="Using config file" file=/etc/config/octavia-ingress-controller-config.yaml
W0911 22:29:42.024586 1 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0911 22:29:42.377592 1 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"test-octavia-ingress", UID:"4f9862b5-c5db-458c-bb00-7aeef28238ae", APIVersion:"extensions/v1beta1", ResourceVersion:"5767913", FieldPath:""}): type: 'Normal' reason: 'Creating' Ingress default/test-octavia-ingress
time="2019-09-11T22:29:42Z" level=info msg="ingress controller synced and ready"
time="2019-09-11T22:29:42Z" level=error msg="Failed to retrieve the subnet 43ed897b-3c10-4d5c-8f6d-263edcd817c7: Get http://iaas-ctrl:9696/v2.0/subnets/43ed897b-3c10-4d5c-8f6d-263edcd817c7: dial tcp: lookup iaas-ctrl on 8.8.4.4:53: no such host"
Add name resolution via hostAliases:
- effect: NoExecute
operator: Exists
+ hostAliases:
+ - ip: "192.168.1.1"
+ hostnames:
+ - "iaas-ctrl"
containers:
- name: octavia-ingress-controller
image: docker.io/k8scloudprovider/octavia-ingress-controller:v1.15.0
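After the change, the pod template's spec looks roughly like this (a sketch; other fields of the upstream deployment.yaml omitted, IP and hostname are this environment's):

```yaml
spec:
  hostAliases:
  - ip: "192.168.1.1"
    hostnames:
    - "iaas-ctrl"
  containers:
  - name: octavia-ingress-controller
    image: docker.io/k8scloudprovider/octavia-ingress-controller:v1.15.0
```

Note that hostAliases is a pod-level field (a sibling of containers, not nested under one); it writes the given entries into the pod's /etc/hosts.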
Name resolution now works, but communication fails with connection refused:
test-octavia-ingress: Get http://iaas-ctrl:9876/v2.0/lbaas/loadbalancers?name=kube_ingress_k8s-20190910_default_test-octavia-ingress: dial tcp 192.168.1.1:9876: connect: connection refused
I0911 22:53:45.567816 1 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"test-octavia-ingress", UID:"1869006e-0049-42f3-8ec5-fd744056e102", APIVersion:"extensions/v1beta1", ResourceVersion:"5918470", FieldPath:""}): type: 'Warning' reason: 'Failed' Failed to create openstack resources for ingress default/test-octavia-ingress: error getting loadbalancer kube_ingress_k8s-20190910_default_test-octavia-ingress: Get http://iaas-ctrl:9876/v2.0/lbaas/loadbalancers?name=kube_ingress_k8s-20190910_default_test-octavia-ingress: dial tcp 192.168.1.1:9876: connect: connection refused
ping from the pod succeeds:
$ kubectl -n=kube-system exec -it octavia-ingress-controller-0 -- /bin/sh
/ # ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1): 56 data bytes
64 bytes from 192.168.1.1: seq=0 ttl=64 time=0.354 ms
64 bytes from 192.168.1.1: seq=1 ttl=64 time=0.843 ms
The above is a failing access to Octavia (Get http://iaas-ctrl:9876/v2.0/lbaas/loadbalancers). -> The cause is that octavia-api is down; restart it for now, though a root fix will be needed before long. With that, the LB is now created on the Octavia side:
$ openstack loadbalancer list
127.0.0.1 - - [11/Sep/2019 17:11:07] "GET /v2.0/lbaas/loadbalancers HTTP/1.1" 200 841
+--------------------------------------+--------------------------------------------------------+----------------------------------+---------------+---------------------+----------+
| id | name | project_id | vip_address | provisioning_status | provider |
+--------------------------------------+--------------------------------------------------------+----------------------------------+---------------+---------------------+----------+
| 0ba04bae-96c8-4c23-8b4a-57d0287288d8 | kube_ingress_k8s-20190910_default_test-octavia-ingress | 682e74f275fe427abd9eb6759f3b68c5 | 192.168.1.101 | ACTIVE | octavia |
+--------------------------------------+--------------------------------------------------------+----------------------------------+---------------+---------------------+----------+
However, no ADDRESS has been assigned to the Ingress yet:
$ kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
test-octavia-ingress api.sample.com 80 92m
Check the controller log:
time="2019-09-12T00:07:58Z" level=info msg="Using config file" file=/etc/config/octavia-ingress-controller-config.yaml
W0912 00:07:58.262400 1 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0912 00:07:58.605151 1 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"test-octavia-ingress", UID:"1869006e-0049-42f3-8ec5-fd744056e102", APIVersion:"extensions/v1beta1", ResourceVersion:"5918470", FieldPath:""}): type: 'Normal' reason: 'Creating' Ingress default/test-octavia-ingress
time="2019-09-12T00:07:58Z" level=info msg="ingress controller synced and ready"
time="2019-09-12T00:07:59Z" level=info msg="ingress created, will create openstack resources" ingress=default/test-octavia-ingress
time="2019-09-12T00:08:04Z" level=info msg="loadbalancer created" ID=0ba04bae-96c8-4c23-8b4a-57d0287288d8 name=kube_ingress_k8s-20190910_default_test-octavia-ingress
time="2019-09-12T00:09:00Z" level=info msg="listener created" lb=0ba04bae-96c8-4c23-8b4a-57d0287288d8 listenerName=kube_ingress_k8s-20190910_default_test-octavia-ingress
time="2019-09-12T00:09:04Z" level=info msg="pool created" lb=0ba04bae-96c8-4c23-8b4a-57d0287288d8 listenerID= pooID=665f8b4d-1bb7-431d-ab55-d0762d892bb7 poolName=38df6e716b70f28ecfa4508c7e5c21dc18b1b8cc97c610ab4111c3237507fbfb
E0912 00:09:05.380749 1 controller.go:449] failed to create openstack resources for ingress default/test-octavia-ingress: error batch updating members for pool 665f8b4d-1bb7-431d-ab55-d0762d892bb7: Bad request with: [PUT http://iaas-ctrl:9876/v2.0/lbaas/pools/665f8b4d-1bb7-431d-ab55-d0762d892bb7/members], error message: {"debuginfo": null, "faultcode": "Client", "faultstring": "Unknown attribute for argument member_: members"}
I0912 00:09:05.380826 1 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"test-octavia-ingress", UID:"1869006e-0049-42f3-8ec5-fd744056e102", APIVersion:"extensions/v1beta1", ResourceVersion:"5918470", FieldPath:""}): type: 'Warning' reason: 'Failed' Failed to create openstack resources for ingress default/test-octavia-ingress: error batch updating members for pool 665f8b4d-1bb7-431d-ab55-d0762d892bb7: Bad request with: [PUT http://iaas-ctrl:9876/v2.0/lbaas/pools/665f8b4d-1bb7-431d-ab55-d0762d892bb7/members], error message: {"debuginfo": null, "faultcode": "Client", "faultstring": "Unknown attribute for argument member_: members"}
The error occurs when adding a Member via the Octavia pools API:
"Unknown attribute for argument member_: members"
https://docs.openstack.org/api-ref/load-balancer/v2/#batch-update-members
It looks like tcpdump is needed to see what is actually on the wire.
For HTTP 400 errors, it would be easier to debug if octavia-ingress-controller included the request body in the error message, so I looked into that. API calls to OpenStack go through the gophercloud library, and emitting such an error would require changing that library. The current code has no struct field that retains the request body, so the change is not straightforward. e.Body is the response body and holds the error message returned by the OpenStack API; as the struct layout suggests, there is no counterpart for the request body. gophercloud/errors.go:
145 func (e ErrDefault400) Error() string {
146 e.DefaultErrString = fmt.Sprintf(
147 "Bad request with: [%s %s], error message: %s",
148 e.Method, e.URL, e.Body,
149 )
150 return e.choseErrString()
151 }
Wireshark had stopped working on my Win7 machine, so I inspected the traffic with tcpdump instead.
→ From the Octavia API's perspective, the parameters being sent do not look wrong: {"members":[{"address":"192.168.1.104","protocol_port":31818}]}
$ sudo tcpdump -i brqbfd9fd43-c9 port 9876 -w tcp.dump
...
$ tcpdump -r tcp.dump -A
...
PUT /v2.0/lbaas/pools/13754680-37fc-44e4-84d2-3f997d1e9759/members
HTTP/1.1
Host: iaas-ctrl:9876
User-Agent: gophercloud/2.0.0
Connection: close
Content-Length: 63
Accept: application/json
Content-Type: application/json
X-Auth-Token: gAAAAABderTmEn8kSl5_yuTP4089vH4VOzrMWuk0NU9L4qXjHgIVoopLPQvdtqeC7byAj8LDBGxe3WkV3LfAiaGUGhLNmwlmGfnSYMInxX9yKUvbq94EKpMkx90eNYoHKA3S7h4phsmmN3LUSMu_CHfU72sN5MtsNonYe2cS0JUllpUSQVx9ITA
Accept-Encoding: gzip
{"members":[{"address":"192.168.1.104","protocol_port":31818}]}
...
Date: Thu, 12 Sep 2019 21:13:21 GMT
Server: WSGIServer/0.1 Python/2.7.12
Content-Length: 108
Content-Type: application/json
x-openstack-request-id: req-9dd5fc4a-2c08-4ce2-ab2a-c282f39f3f89
{"debuginfo": null, "faultcode": "Client", "faultstring": "Unknown attribute for argument member_: members"}
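The captured request can be reconstructed from the controller's logic. A minimal stdlib-only sketch (the function name is mine) that builds the same batch-update payload:

```python
import json

def batch_update_members_body(members):
    # Build the payload that is PUT to /v2.0/lbaas/pools/{pool_id}/members,
    # following the Octavia batch-update-members API format.
    return json.dumps(
        {"members": [
            {"address": m["address"], "protocol_port": m["protocol_port"]}
            for m in members
        ]},
        separators=(",", ":"),
    )

body = batch_update_members_body(
    [{"address": "192.168.1.104", "protocol_port": 31818}]
)
print(body)       # {"members":[{"address":"192.168.1.104","protocol_port":31818}]}
print(len(body))  # 63 -- matches the Content-Length in the capture
```

The body matches the capture byte for byte, so the request itself follows the batch-update-members API reference; the 400 therefore points at the server side, consistent with the suspicion later in these notes that the installed Octavia (Pike) is simply too old for this call.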
Try the equivalent with the openstack CLI.
Reference: https://github.com/oomichi/try-kubernetes/issues/93#issuecomment-529726869
Based on the request contents, drop --subnet-id provider from the command.
→ The command succeeds, but note it performs a POST operation, not a PUT:
$ openstack loadbalancer member create --address 192.168.1.104 --protocol-port 31818 13754680-37fc-44e4-84d2-3f997d1e9759
127.0.0.1 - - [12/Sep/2019 15:18:45] "GET /v2.0/lbaas/pools HTTP/1.1" 200 564
127.0.0.1 - - [12/Sep/2019 15:18:46] "POST /v2.0/lbaas/pools/13754680-37fc-44e4-84d2-3f997d1e9759/members HTTP/1.1" 201 407
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| address | 192.168.1.104 |
| admin_state_up | True |
| created_at | 2019-09-12T22:18:45 |
| id | 8af576af-8281-45ba-98ed-78275594c1ab |
| name | |
| operating_status | NO_MONITOR |
| project_id | 682e74f275fe427abd9eb6759f3b68c5 |
| protocol_port | 31818 |
| provisioning_status | PENDING_CREATE |
| subnet_id | None |
| updated_at | None |
| weight | 1 |
| monitor_port | None |
| monitor_address | None |
+---------------------+--------------------------------------+
Since I got stuck here, I filed it as cloud-provider-openstack/issues/759.
Judging from the octavia-ingress-controller code, PUT is used so that multiple Members can be added at once; POST can only add a single Member. https://docs.openstack.org/api-ref/load-balancer/v2/?expanded=create-member-detail#create-member
pkg/ingress/controller/openstack/octavia.go
456 // Batch update pool members
457 var members []pools.BatchUpdateMemberOpts
458 for _, node := range nodes {
459 addr, err := getNodeAddressForLB(node)
460 if err != nil {
461 // Node failure, do not create member
462 log.WithFields(log.Fields{"node": node.Name, "poolName": poolName, "pooID": pool.ID, "error": err}).Warn("failed to create LB pool member for node")
463 continue
464 }
465
466 member := pools.BatchUpdateMemberOpts{
467 Address: addr,
468 ProtocolPort: *nodePort,
469 }
470 members = append(members, member)
471 }
472 // only allow >= 1 members or it will lead to openstack octavia issue
473 if len(members) == 0 {
474 return nil, fmt.Errorf("error because no members in pool: %s", pool.ID)
475 }
476
477 if err := pools.BatchUpdateMembers(os.octavia, pool.ID, members).ExtractErr(); err != nil {
478 return nil, fmt.Errorf("error batch updating members for pool %s: %v", pool.ID, err)
479 }
It turns out that those who succeeded with Octavia integration were using Service Type: LoadBalancer, not Ingress. That will be tracked in https://github.com/oomichi/try-kubernetes/issues/95.
There is also the problem that Octavia itself is old (the current version is Pike). Try upgrading it.
$ sudo pip show octavia
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7.
Name: octavia
Version: 1.0.4
Summary: OpenStack Octavia Scalable Load Balancer as a Service
Home-page: https://docs.openstack.org/octavia/latest/
Author: OpenStack
Author-email: openstack-dev@lists.openstack.org
License: UNKNOWN
Location: /usr/local/lib/python2.7/dist-packages
Requires: pyroute2, ipaddress, cotyledon, oslo.messaging, python-novaclient, gunicorn, cryptography, oslo.middleware, oslo.log, pyOpenSSL, requests, keystoneauth1, pyasn1-modules, python-neutronclient, WSME, six, diskimage-builder, Jinja2, python-glanceclient, alembic, oslo.policy, oslo.context, Babel, oslo.config, WebOb, python-barbicanclient, oslo.utils, oslo.reports, keystonemiddleware, netifaces, pecan, oslo.db, pbr, Flask, SQLAlchemy, pyasn1, stevedore, PyMySQL, taskflow, oslo.i18n, rfc3986
Required-by:
That v1.0.4 corresponds to Pike can be confirmed at https://github.com/openstack/octavia/releases/tag/1.0.4.
Note that Ubuntu 18.04 still does not ship an Octavia package, so the update is done with pip.
$ sudo pip list -o
Package Version Latest Type
---------------------- ------------ --------- -----
...
octavia 1.0.4 4.0.1 wheel
...
So an upgrade to 4.0.1 (Stein) is possible. Run the upgrade:
$ sudo pip install -U octavia
After the upgrade, octavia-api no longer starts, failing with the following error:
2019-09-17 16:20:53.123 7153 ERROR octavia.api.drivers.driver_factory [-] Unable to load provider driver amphora due to: __init__() got an unexpected keyword argument 'call_monitor_timeout'
2019-09-17 16:20:53.123 7153 CRITICAL octavia [-] Unhandled error: ProviderNotFound: Provider 'amphora' was not found.
2019-09-17 16:20:53.123 7153 ERROR octavia Traceback (most recent call last):
2019-09-17 16:20:53.123 7153 ERROR octavia File "/usr/local/bin/octavia-api", line 10, in <module>
2019-09-17 16:20:53.123 7153 ERROR octavia sys.exit(main())
2019-09-17 16:20:53.123 7153 ERROR octavia File "/usr/local/lib/python2.7/dist-packages/octavia/cmd/api.py", line 32, in main
2019-09-17 16:20:53.123 7153 ERROR octavia app = api_app.setup_app(argv=sys.argv)
2019-09-17 16:20:53.123 7153 ERROR octavia File "/usr/local/lib/python2.7/dist-packages/octavia/api/app.py", line 50, in setup_app
2019-09-17 16:20:53.123 7153 ERROR octavia _init_drivers()
2019-09-17 16:20:53.123 7153 ERROR octavia File "/usr/local/lib/python2.7/dist-packages/octavia/api/app.py", line 42, in _init_drivers
2019-09-17 16:20:53.123 7153 ERROR octavia driver_factory.get_driver(provider)
2019-09-17 16:20:53.123 7153 ERROR octavia File "/usr/local/lib/python2.7/dist-packages/octavia/api/drivers/driver_factory.py", line 49, in get_driver
2019-09-17 16:20:53.123 7153 ERROR octavia raise exceptions.ProviderNotFound(prov=provider)
2019-09-17 16:20:53.123 7153 ERROR octavia ProviderNotFound: Provider 'amphora' was not found.
2019-09-17 16:20:53.123 7153 ERROR octavia
The relevant implementation:
40 try:
41 driver = stevedore_driver.DriverManager(
42 namespace='octavia.api.drivers',
43 name=provider,
44 invoke_on_load=True).driver
45 driver.name = provider
46 except Exception as e:
47 LOG.error('Unable to load provider driver %s due to: %s',
48 provider, e)
49 raise exceptions.ProviderNotFound(prov=provider)
The root exception is __init__() got an unexpected keyword argument 'call_monitor_timeout'.
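That TypeError pattern is the classic symptom of a dependency version mismatch: newer calling code passes a keyword argument that an older installed library's __init__ does not accept, and the blanket except in driver_factory then masks the real error as ProviderNotFound. A minimal sketch of the mechanism (class and function names are hypothetical):

```python
class OldRpcClient:
    # Stand-in for an older installed dependency whose __init__
    # predates the call_monitor_timeout parameter.
    def __init__(self, topic):
        self.topic = topic

def get_driver():
    try:
        # Newer calling code passes the new keyword argument.
        return OldRpcClient(topic="octavia", call_monitor_timeout=60)
    except Exception as e:
        # Like driver_factory.get_driver: the real error is logged, then masked.
        raise LookupError("Provider 'amphora' was not found.") from e

try:
    get_driver()
except LookupError as e:
    print(e)            # Provider 'amphora' was not found.
    print(e.__cause__)  # ...unexpected keyword argument 'call_monitor_timeout'
```

In the real case this suggests one of Octavia's dependencies (likely oslo.messaging, which gained a call_monitor_timeout parameter around this era) was not upgraded together with octavia itself, so upgrading that dependency would be the corresponding fix.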
https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#load-balancer https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/using-octavia-ingress-controller.md