@ctoomey thanks for reporting this issue! Did you try to simply re-execute `loft start`? My guess is that somehow the loft pod has restarted or the port forwarding got interrupted. Rerunning `loft start` should fix that.
If that doesn't help, you can check the following things:
- Is the virtual cluster pod up and running? Could you post the logs of the syncer container here?
- What is the exact error message you get during `helm install`?
- Could you post the logs of the loft container here?
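For reference, here's a minimal sketch of how those checks could be run, using the namespace and container names that appear later in this thread (`vcluster-chris1`, `syncer`); the vcluster pod name and the `loft` namespace/deployment are assumptions based on default naming:

```bash
# Is the virtual cluster pod up and running?
kubectl get pods -n vcluster-chris1

# Syncer container logs -- "chris1-0" assumes the usual StatefulSet naming.
kubectl logs -n vcluster-chris1 chris1-0 -c syncer

# Loft container logs -- assumes loft's default install namespace and
# deployment name.
kubectl logs -n loft deployment/loft
```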
> @ctoomey thanks for reporting this issue! Did you try to simply re-execute `loft start`? My guess is that somehow the loft pod has restarted or the port forwarding got interrupted. Rerunning `loft start` should fix that.

Yep, I did that, and it re-enabled the port forwarding but didn't fix the underlying chart install problem.
> Is the virtual cluster pod up and running? Could you post the logs of the syncer container here?
Yes, syncer logs below.
I0201 19:17:34.871874 1 main.go:200] Using physical cluster at https://10.96.0.1:443
I0201 19:17:35.166109 1 loghelper.go:28] Start services sync controller
I0201 19:17:35.169736 1 loghelper.go:28] Start secrets sync controller
I0201 19:17:35.211158 1 loghelper.go:28] Start pods sync controller
I0201 19:17:35.213354 1 loghelper.go:28] Start events sync controller
I0201 19:17:35.213450 1 loghelper.go:28] Start nodes sync controller
I0201 19:17:35.213840 1 loghelper.go:28] Start persistentvolumes sync controller
I0201 19:17:35.214337 1 loghelper.go:28] Start storageclasses sync controller
I0201 19:17:35.214740 1 loghelper.go:28] Start configmaps sync controller
I0201 19:17:35.216830 1 loghelper.go:28] Start endpoints sync controller
I0201 19:17:35.218318 1 loghelper.go:28] Start persistentvolumeclaims sync controller
I0201 19:17:35.219674 1 loghelper.go:28] Start ingresses sync controller
I0201 19:17:35.221547 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-backward: Starting EventSource source &source.Kind{Type:(*v1.Service)(0xc00023c6c0), cache:(*cache.informerCache)(0xc0001268c8)}
I0201 19:17:35.222269 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-backward: Starting EventSource source &source.Kind{Type:(*v1.Secret)(0xc000657540), cache:(*cache.informerCache)(0xc0001268c8)}
I0201 19:17:35.222840 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-backward: Starting EventSource source &source.Kind{Type:(*v1.Pod)(0xc000270c00), cache:(*cache.informerCache)(0xc0001268c8)}
I0201 19:17:35.223201 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Event: controller: event-backward: Starting EventSource source &source.Kind{Type:(*v1.Event)(0xc0004ed680), cache:(*cache.informerCache)(0xc0001268c8)}
I0201 19:17:35.223455 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-backward: Starting EventSource source &source.Kind{Type:(*v1.ConfigMap)(0xc000657b80), cache:(*cache.informerCache)(0xc0001268c8)}
I0201 19:17:35.223792 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: Starting EventSource source &source.Kind{Type:(*v1.Endpoints)(0xc00004e140), cache:(*cache.informerCache)(0xc0001268c8)}
I0201 19:17:35.224042 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-backward: Starting EventSource source &source.Kind{Type:(*v1.PersistentVolumeClaim)(0xc0008e0540), cache:(*cache.informerCache)(0xc0001268c8)}
I0201 19:17:35.224363 1 controller.go:158] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-backward: Starting EventSource source &source.Kind{Type:(*v1beta1.Ingress)(0xc00062d380), cache:(*cache.informerCache)(0xc0001268c8)}
W0201 19:17:35.259882 1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0201 19:17:35.263478 1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0201 19:17:35.268337 1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0201 19:17:35.278861 1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
I0201 19:17:35.322882 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-backward: Starting EventSource source &garbagecollect.Source{Period:60000000000, log:(*loghelper.logger)(0xc0005165e0), run:(*generic.backwardController)(0xc000266de0), stopChan:(<-chan struct {})(0xc0008a4fc0)}
I0201 19:17:35.325058 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-backward: Starting Controller
I0201 19:17:35.325510 1 controller.go:192] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-backward: Starting workers worker count 1
I0201 19:17:35.325872 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-forward: Starting EventSource source &source.Kind{Type:(*v1.ConfigMap)(0xc000657a40), cache:(*cache.informerCache)(0xc0003ca0f0)}
I0201 19:17:35.326124 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-forward: Starting EventSource source &garbagecollect.Source{Period:60000000000, log:(*loghelper.logger)(0xc000058c50), run:(*generic.forwardController)(0xc000961ce0), stopChan:(<-chan struct {})(0xc0008a4fc0)}
I0201 19:17:35.326232 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-forward: Starting EventSource source &source.Kind{Type:(*v1.Pod)(0xc00006d000), cache:(*cache.informerCache)(0xc0003ca0f0)}
I0201 19:17:35.323411 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-backward: Starting EventSource source &garbagecollect.Source{Period:60000000000, log:(*loghelper.logger)(0xc000a1dae0), run:(*generic.backwardController)(0xc000089a40), stopChan:(<-chan struct {})(0xc0008a4fc0)}
I0201 19:17:35.326557 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-backward: Starting Controller
I0201 19:17:35.326745 1 controller.go:192] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-backward: Starting workers worker count 1
I0201 19:17:35.323545 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Event: controller: event-backward: Starting Controller
I0201 19:17:35.327233 1 controller.go:192] controller-runtime: manager: reconciler group reconciler kind Event: controller: event-backward: Starting workers worker count 1
I0201 19:17:35.327450 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-forward: Starting Controller
I0201 19:17:35.327754 1 controller.go:192] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-forward: Starting workers worker count 1
I0201 19:17:35.328086 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-forward: Starting EventSource source &source.Kind{Type:(*v1.Endpoints)(0xc000657e00), cache:(*cache.informerCache)(0xc0003ca0f0)}
I0201 19:17:35.328728 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-forward: Starting EventSource source &garbagecollect.Source{Period:60000000000, log:(*loghelper.logger)(0xc00054e5d0), run:(*generic.forwardController)(0xc0009fc120), stopChan:(<-chan struct {})(0xc0008a4fc0)}
I0201 19:17:35.328772 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-forward: Starting Controller
I0201 19:17:35.329094 1 controller.go:192] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-forward: Starting workers worker count 1
I0201 19:17:35.324010 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-backward: Starting EventSource source &garbagecollect.Source{Period:60000000000, log:(*loghelper.logger)(0xc000058db0), run:(*generic.backwardController)(0xc000961da0), stopChan:(<-chan struct {})(0xc0008a4fc0)}
I0201 19:17:35.329686 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-backward: Starting Controller
I0201 19:17:35.329847 1 controller.go:192] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-backward: Starting workers worker count 1
I0201 19:17:35.324037 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-forward: Starting EventSource source &source.Kind{Type:(*v1.Service)(0xc000543d40), cache:(*cache.informerCache)(0xc0003ca0f0)}
I0201 19:17:35.330336 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-forward: Starting EventSource source &garbagecollect.Source{Period:60000000000, log:(*loghelper.logger)(0xc000a1da30), run:(*generic.forwardController)(0xc000089980), stopChan:(<-chan struct {})(0xc0008a4fc0)}
I0201 19:17:35.330685 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-forward: Starting Controller
I0201 19:17:35.329867 1 controller.go:158] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-forward: Starting EventSource source &source.Kind{Type:(*v1beta1.Ingress)(0xc00062d200), cache:(*cache.informerCache)(0xc0003ca0f0)}
I0201 19:17:35.324169 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-backward: Starting EventSource source &garbagecollect.Source{Period:60000000000, log:(*loghelper.logger)(0xc0008acc10), run:(*generic.backwardController)(0xc0008beae0), stopChan:(<-chan struct {})(0xc0008a4fc0)}
I0201 19:17:35.324198 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-forward: Starting EventSource source &source.Kind{Type:(*v1.Secret)(0xc0006572c0), cache:(*cache.informerCache)(0xc0003ca0f0)}
I0201 19:17:35.324514 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-forward: Starting EventSource source &source.Kind{Type:(*v1.Pod)(0xc000270000), cache:(*cache.informerCache)(0xc0003ca0f0)}
I0201 19:17:35.324535 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-backward: Starting EventSource source &garbagecollect.Source{Period:60000000000, log:(*loghelper.logger)(0xc000372140), run:(*generic.backwardController)(0xc000ab84e0), stopChan:(<-chan struct {})(0xc0008a4fc0)}
I0201 19:17:35.324804 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Node: controller: fake-node-syncer: Starting EventSource source &source.Kind{Type:(*v1.Node)(0xc0004e6900), cache:(*cache.informerCache)(0xc0003ca0f0)}
I0201 19:17:35.324913 1 controller.go:158] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-backward: Starting EventSource source &garbagecollect.Source{Period:60000000000, log:(*loghelper.logger)(0xc0003419f0), run:(*generic.backwardController)(0xc000bea780), stopChan:(<-chan struct {})(0xc0008a4fc0)}
I0201 19:17:35.324975 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind PersistentVolume: controller: fake-persistent-volumes-syncer: Starting EventSource source &source.Kind{Type:(*v1.PersistentVolume)(0xc0004ed900), cache:(*cache.informerCache)(0xc0003ca0f0)}
I0201 19:17:35.329482 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-forward: Starting EventSource source &source.Kind{Type:(*v1.PersistentVolumeClaim)(0xc0008e0380), cache:(*cache.informerCache)(0xc0003ca0f0)}
I0201 19:17:35.324128 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: Starting EventSource source &garbagecollect.Source{Period:60000000000, log:(*loghelper.logger)(0xc00054e690), run:(*generic.backwardController)(0xc0009fc1e0), stopChan:(<-chan struct {})(0xc0008a4fc0)}
I0201 19:17:35.332374 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: Starting Controller
I0201 19:17:35.332410 1 controller.go:192] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: Starting workers worker count 1
I0201 19:17:35.332549 1 controller.go:192] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-forward: Starting workers worker count 1
I0201 19:17:35.332837 1 controller.go:158] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-forward: Starting EventSource source &garbagecollect.Source{Period:60000000000, log:(*loghelper.logger)(0xc000341570), run:(*generic.forwardController)(0xc000bea6c0), stopChan:(<-chan struct {})(0xc0008a4fc0)}
I0201 19:17:35.332995 1 controller.go:165] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-forward: Starting Controller
I0201 19:17:35.333180 1 controller.go:192] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-forward: Starting workers worker count 1
I0201 19:17:35.333308 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-backward: Starting Controller
I0201 19:17:35.333416 1 controller.go:192] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-backward: Starting workers worker count 1
I0201 19:17:35.333710 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-forward: Starting EventSource source &garbagecollect.Source{Period:60000000000, log:(*loghelper.logger)(0xc000516470), run:(*generic.forwardController)(0xc000266c00), stopChan:(<-chan struct {})(0xc0008a4fc0)}
I0201 19:17:35.333837 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-forward: Starting EventSource source &source.Kind{Type:(*v1beta1.Ingress)(0xc00062cf00), cache:(*cache.informerCache)(0xc0003ca0f0)}
I0201 19:17:35.333956 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-forward: Starting EventSource source &source.Kind{Type:(*v1.Pod)(0xc000869800), cache:(*cache.informerCache)(0xc0003ca0f0)}
I0201 19:17:35.334097 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-forward: Starting Controller
I0201 19:17:35.334257 1 controller.go:192] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-forward: Starting workers worker count 1
I0201 19:17:35.334462 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-forward: Starting EventSource source &garbagecollect.Source{Period:60000000000, log:(*loghelper.logger)(0xc0008acb60), run:(*generic.forwardController)(0xc0008bea20), stopChan:(<-chan struct {})(0xc0008a4fc0)}
I0201 19:17:35.334530 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-forward: Starting Controller
I0201 19:17:35.334678 1 controller.go:192] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-forward: Starting workers worker count 1
I0201 19:17:35.334907 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-backward: Starting Controller
I0201 19:17:35.334973 1 controller.go:192] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-backward: Starting workers worker count 1
I0201 19:17:35.335513 1 controller.go:165] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-backward: Starting Controller
I0201 19:17:35.335641 1 controller.go:192] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-backward: Starting workers worker count 1
I0201 19:17:35.335903 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-forward: Starting EventSource source &garbagecollect.Source{Period:60000000000, log:(*loghelper.logger)(0xc00054ff80), run:(*generic.forwardController)(0xc000ab8420), stopChan:(<-chan struct {})(0xc0008a4fc0)}
I0201 19:17:35.335965 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-forward: Starting Controller
I0201 19:17:35.370741 1 main.go:333] Generating serving cert for service ip: 10.96.227.28
I0201 19:17:35.436073 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind Node: controller: fake-node-syncer: Starting EventSource source &garbagecollect.Source{Period:60000000000, log:(*loghelper.logger)(0xc0008ace10), run:(*generic.fakeSyncer)(0xc0009dc440), stopChan:(<-chan struct {})(0xc0008a4fc0)}
I0201 19:17:35.436272 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Node: controller: fake-node-syncer: Starting Controller
I0201 19:17:35.436427 1 controller.go:192] controller-runtime: manager: reconciler group reconciler kind Node: controller: fake-node-syncer: Starting workers worker count 1
I0201 19:17:35.436724 1 controller.go:192] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-forward: Starting workers worker count 1
I0201 19:17:35.437451 1 controller.go:158] controller-runtime: manager: reconciler group reconciler kind PersistentVolume: controller: fake-persistent-volumes-syncer: Starting EventSource source &garbagecollect.Source{Period:60000000000, log:(*loghelper.logger)(0xc0008acf90), run:(*generic.fakeSyncer)(0xc0009dcdc0), stopChan:(<-chan struct {})(0xc0008a4fc0)}
I0201 19:17:35.437728 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind PersistentVolume: controller: fake-persistent-volumes-syncer: Starting Controller
I0201 19:17:35.438086 1 controller.go:192] controller-runtime: manager: reconciler group reconciler kind PersistentVolume: controller: fake-persistent-volumes-syncer: Starting workers worker count 1
I0201 19:17:35.721158 1 server.go:156] Starting tls proxy server at 0.0.0.0:8443
I0201 19:17:35.724261 1 secure_serving.go:197] Serving securely on [::]:8443
I0201 19:17:35.725105 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0201 19:17:35.725501 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/virtualcluster/tls/serving-tls.crt::/var/lib/virtualcluster/tls/serving-tls.key
I0201 19:17:35.726094 1 dynamic_cafile_content.go:167] Starting request-header::/data/server/tls/request-header-ca.crt
I0201 19:17:35.726743 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/data/server/tls/client-ca.crt
E0201 19:17:40.864829 1 controller.go:267] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-backward: name coredns-66c464876b-8jpwq-x-kube-system-x-chris1 namespace vcluster-chris1: Reconciler error Operation cannot be fulfilled on pods "coredns-66c464876b-8jpwq": the object has been modified; please apply your changes to the latest version and try again
E0201 19:19:20.350182 1 controller.go:267] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-backward: name rmq-bitnami-rabbitmq-0-x-default-x-chris1 namespace vcluster-chris1: Reconciler error Operation cannot be fulfilled on pods "rmq-bitnami-rabbitmq-0": the object has been modified; please apply your changes to the latest version and try again
W0201 19:24:23.773774 1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0201 19:26:38.643693 1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0201 19:32:35.223837 1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0201 19:34:13.072742 1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0201 19:34:14.029487 1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0201 19:34:14.032837 1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0201 19:40:34.671752 1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0201 19:40:35.818764 1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0201 19:40:35.821341 1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0201 19:42:19.581557 1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
E0201 19:44:52.336811 1 controller.go:267] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-backward: name rmq-bitnami-rabbitmq-0-x-default-x-chris1 namespace vcluster-chris1: Reconciler error Operation cannot be fulfilled on pods "rmq-bitnami-rabbitmq-0": the object has been modified; please apply your changes to the latest version and try again
W0201 19:50:01.160192 1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0201 19:50:02.655328 1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0201 19:50:02.657846 1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0201 19:51:43.919022 1 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
> What is the exact error message you get during `helm install`?
If I don't give the `--wait` option, helm returns without error. Otherwise: `Error: timed out waiting for the condition`. Either way, the RMQ pod never successfully starts after repeated restarts.
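For context, the failing install corresponds to something like the following; the release name is inferred from the pod names below, and the `--timeout` value is only illustrative:

```bash
helm repo add bitnami https://charts.bitnami.com/bitnami
# Without --wait, helm returns as soon as the resources are created, which
# is why no error is reported; with it, helm blocks until the pods are Ready.
helm install rmq-bitnami bitnami/rabbitmq --wait --timeout 10m
```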
Here's the output of `kubectl logs rmq-bitnami-rabbitmq-0`:
rabbitmq 20:02:16.91
rabbitmq 20:02:16.91 Welcome to the Bitnami rabbitmq container
rabbitmq 20:02:16.91 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-rabbitmq
rabbitmq 20:02:16.91 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-rabbitmq/issues
rabbitmq 20:02:16.91
rabbitmq 20:02:16.92 INFO ==> ** Starting RabbitMQ setup **
rabbitmq 20:02:16.96 INFO ==> Validating settings in RABBITMQ_* env vars..
rabbitmq 20:02:17.00 INFO ==> Initializing RabbitMQ...
rabbitmq 20:02:17.04 INFO ==> Starting RabbitMQ in background...
rabbitmq 20:04:11.00 ERROR ==> Couldn't start RabbitMQ in background.
Here's the output of `kubectl describe pod rmq-bitnami-rabbitmq-0`:
Name: rmq-bitnami-rabbitmq-0
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.3
Start Time: Mon, 01 Feb 2021 11:19:20 -0800
Labels: app.kubernetes.io/instance=rmq-bitnami
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=rabbitmq
controller-revision-hash=rmq-bitnami-rabbitmq-5f74f65f45
helm.sh/chart=rabbitmq-8.9.0
statefulset.kubernetes.io/pod-name=rmq-bitnami-rabbitmq-0
Annotations: checksum/config: fb5cccca714d1a2c25691412930f765c696c440cc9751cd73a93449ae4122b67
checksum/secret: 32c3ad32eb1715bac5810aa019155e865bc83d2dcb4901547952c90e9c24e5c8
Status: Running
IP: 10.1.0.32
IPs:
IP: 10.1.0.32
Controlled By: StatefulSet/rmq-bitnami-rabbitmq
Containers:
rabbitmq:
Container ID: docker://0a88e9f83c4ac4007bef58f76b355427149a6f241d04c5c8444d35a186a819ec
Image: docker.io/bitnami/rabbitmq:3.8.11-debian-10-r0
Image ID: docker-pullable://bitnami/rabbitmq@sha256:ae4e7ab2049ed38b5165820a6235453b09faf633956c7ff5cfeffe330efaacab
Ports: 5672/TCP, 25672/TCP, 15672/TCP, 4369/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
State: Running
Started: Mon, 01 Feb 2021 11:28:36 -0800
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 01 Feb 2021 11:25:51 -0800
Finished: Mon, 01 Feb 2021 11:27:46 -0800
Ready: False
Restart Count: 4
Liveness: exec [/bin/bash -ec rabbitmq-diagnostics -q ping] delay=120s timeout=20s period=30s #success=1 #failure=6
Readiness: exec [/bin/bash -ec rabbitmq-diagnostics -q check_running && rabbitmq-diagnostics -q check_local_alarms] delay=10s timeout=20s period=30s #success=1 #failure=3
Environment:
BITNAMI_DEBUG: false
MY_POD_IP: (v1:status.podIP)
MY_POD_NAME: rmq-bitnami-rabbitmq-0 (v1:metadata.name)
MY_POD_NAMESPACE: default (v1:metadata.namespace)
K8S_SERVICE_NAME: rmq-bitnami-rabbitmq-headless
K8S_ADDRESS_TYPE: hostname
RABBITMQ_FORCE_BOOT: no
RABBITMQ_NODE_NAME: rabbit@$(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
K8S_HOSTNAME_SUFFIX: .$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
RABBITMQ_MNESIA_DIR: /bitnami/rabbitmq/mnesia/$(RABBITMQ_NODE_NAME)
RABBITMQ_LDAP_ENABLE: no
RABBITMQ_LOGS: -
RABBITMQ_ULIMIT_NOFILES: 65536
RABBITMQ_USE_LONGNAME: true
RABBITMQ_ERL_COOKIE: <set to the key 'rabbitmq-erlang-cookie' in secret 'rmq-bitnami-rabbitmq'> Optional: false
RABBITMQ_USERNAME: user
RABBITMQ_PASSWORD: <set to the key 'rabbitmq-password' in secret 'rmq-bitnami-rabbitmq'> Optional: false
RABBITMQ_PLUGINS: rabbitmq_management, rabbitmq_peer_discovery_k8s, rabbitmq_auth_backend_ldap
Mounts:
/bitnami/rabbitmq/conf from configuration (rw)
/bitnami/rabbitmq/mnesia from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from rmq-bitnami-rabbitmq-token-wx5rf (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-rmq-bitnami-rabbitmq-0
ReadOnly: false
configuration:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: rmq-bitnami-rabbitmq-config
Optional: false
rmq-bitnami-rabbitmq-token-wx5rf:
Type: Secret (a volume populated by a Secret)
SecretName: rmq-bitnami-rabbitmq-token-wx5rf
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling <unknown> 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled <unknown> Successfully assigned default/rmq-bitnami-rabbitmq-0 to docker-desktop
Warning Unhealthy 9m37s kubelet, docker-desktop Readiness probe failed: Error: unable to perform an operation on node 'rabbit@rmq-bitnami-rabbitmq-0.rmq-bitnami-rabbitmq-headless.default.svc.cluster.local'. Please see diagnostics information and suggestions below.
Most common reasons for this are:
* Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
* CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
* Target node is not running
In addition to the diagnostics info below:
* See the CLI, clustering and networking guides on https://rabbitmq.com/documentation.html to learn more
* Consult server logs on node rabbit@rmq-bitnami-rabbitmq-0.rmq-bitnami-rabbitmq-headless.default.svc.cluster.local
* If target node is configured to use long node names, don't forget to use --longnames with CLI tools
DIAGNOSTICS
===========
attempted to contact: ['rabbit@rmq-bitnami-rabbitmq-0.rmq-bitnami-rabbitmq-headless.default.svc.cluster.local']
rabbit@rmq-bitnami-rabbitmq-0.rmq-bitnami-rabbitmq-headless.default.svc.cluster.local:
* unable to connect to epmd (port 4369) on rmq-bitnami-rabbitmq-0.rmq-bitnami-rabbitmq-headless.default.svc.cluster.local: nxdomain (non-existing domain)
Current node details:
* node name: 'rabbitmqcli-473-rabbit@rmq-bitnami-rabbitmq-0.rmq-bitnami-rabbitmq-headless.default.svc.cluster.local'
* effective user's home directory: /opt/bitnami/rabbitmq/.rabbitmq
* Erlang cookie hash: 8I5zYbDjG+3DShrIAVg7mw==
Warning Unhealthy 9m7s kubelet, docker-desktop Readiness probe failed: Error: unable to perform an operation on node 'rabbit@rmq-bitnami-rabbitmq-0.rmq-bitnami-rabbitmq-headless.default.svc.cluster.local'. Please see diagnostics information and suggestions below.
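The `nxdomain` in the diagnostics above suggests the headless-service DNS record never resolves from inside the pod. A quick way to confirm that, using the pod and service names from the describe output (if `nslookup` isn't available in the image, `getent hosts` works too):

```bash
# Does the per-pod headless-service record resolve inside the pod?
kubectl exec rmq-bitnami-rabbitmq-0 -- nslookup \
  rmq-bitnami-rabbitmq-0.rmq-bitnami-rabbitmq-headless.default.svc.cluster.local

# What FQDN does the container itself report?
kubectl exec rmq-bitnami-rabbitmq-0 -- hostname -f
```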
> Could you post the logs of the loft container here?
W0201 19:44:20.845818 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0201 19:44:20.855405 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0201 19:44:20.866205 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0201 19:44:20.898389 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0201 19:44:20.914791 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
I0201 19:44:30.744715 1 loghelper.go:34] start loft-cluster/vcluster-chris1/chris1 port forwarder on port 10000
W0201 19:46:17.076543 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0201 19:46:17.087227 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0201 19:46:17.098675 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0201 19:46:17.121997 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0201 19:46:17.131995 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0201 19:47:32.950520 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0201 19:47:32.964260 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0201 19:47:32.981527 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0201 19:47:33.026360 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0201 19:47:33.041752 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0201 19:47:34.732479 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0201 19:47:34.750062 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0201 19:47:34.768258 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0201 19:47:34.806278 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0201 19:47:34.820851 1 warnings.go:67] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
@ctoomey thanks for providing the information! Good news: we were able to figure out the problem, which is the same as in #84 and will be fixed in the next version.
When you create a StatefulSet in Kubernetes, the pods it creates have the `spec.subdomain` option set to the name of the headless service. This subdomain basically creates an entry of the form `POD_NAME.SUBDOMAIN.NAMESPACE.svc.cluster.local` in the `/etc/hosts` file of a container. If you run `hostname -f`, it will yield the above domain. So far this is normal Kubernetes behaviour.
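Concretely, on a plain Kubernetes cluster the RMQ pod from this thread would show something like the following (the `/etc/hosts` line is the entry kubelet writes, using the pod IP from the describe output above):

```bash
kubectl exec rmq-bitnami-rabbitmq-0 -- hostname -f
# rmq-bitnami-rabbitmq-0.rmq-bitnami-rabbitmq-headless.default.svc.cluster.local

kubectl exec rmq-bitnami-rabbitmq-0 -- cat /etc/hosts
# ...
# 10.1.0.32  rmq-bitnami-rabbitmq-0.rmq-bitnami-rabbitmq-headless.default.svc.cluster.local  rmq-bitnami-rabbitmq-0
```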
In the virtual cluster, however, we cannot forward the subdomain option to the physical cluster, because that would lead to a wrong `/etc/hosts` entry: Kubernetes would write the actual physical namespace in there instead of the virtual cluster namespace. That's why we currently just empty that option when passing the pod to the physical cluster. This is usually not a problem, except for applications that specifically need that fully qualified hostname inside the container and either want to bind to it or discover it via `hostname -f`. In those cases, problems like the one you're experiencing occur.
In the next version, for every container in a pod that specifies the `spec.subdomain` option, we will automatically override the `/etc/hosts` file with the correct entries via an init container, which should fix this issue.
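As a rough illustration of the mechanism (only a sketch, not vcluster's actual implementation; every name in it is hypothetical): an init container writes the desired hosts file into a shared emptyDir volume, which is then mounted over `/etc/hosts` in the app container. Kubelet skips its own hosts-file management when a container mounts something at `/etc/hosts`, so the generated entries win:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hosts-rewrite-demo           # hypothetical demo pod
spec:
  initContainers:
  - name: rewrite-hosts
    image: busybox
    # Write a hosts file containing the FQDN the app expects.
    command:
    - sh
    - -c
    - |
      echo "127.0.0.1 localhost" > /hosts-out/hosts
      echo "$POD_IP demo-0.demo-headless.default.svc.cluster.local demo-0" >> /hosts-out/hosts
    env:
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    volumeMounts:
    - name: hosts
      mountPath: /hosts-out
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "cat /etc/hosts && sleep 3600"]
    volumeMounts:
    - name: hosts
      mountPath: /etc/hosts
      subPath: hosts                 # mount the generated file over /etc/hosts
  volumes:
  - name: hosts
    emptyDir: {}
EOF
```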
Great, thanks @FabianKramm
@FabianKramm I downloaded your v1.7.0-beta.1 release, which looks like it should have fixed this, but I see the same error/behavior when installing the chart.
@ctoomey thanks for checking out the new release! For me it works with the new release in local Docker Kubernetes as well as GKE. Make sure you create a new virtual cluster that has syncer version 0.0.21; otherwise the old vCluster is still running.
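One quick way to verify which syncer version a vcluster is running (the pod name again assumes the default StatefulSet naming; the image tag should be 0.0.21 or newer):

```bash
kubectl get pod -n vcluster-chris1 chris1-0 \
  -o jsonpath='{.spec.containers[?(@.name=="syncer")].image}'
```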
I did the following in a fresh GKE cluster which works for me with loft v1.7.0-beta.1:
```bash
loft start -v 1.7.0-beta.1
loft login --insecure https://...
loft create vcluster test-rabbitmq
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release bitnami/rabbitmq --wait
```
After waiting a minute or so, I now get:
WARNING: "kubernetes-charts.storage.googleapis.com" is deprecated for "stable" and will be deleted Nov. 13, 2020.
WARNING: You should switch to "https://charts.helm.sh/stable" via:
WARNING: helm repo add "stable" "https://charts.helm.sh/stable" --force-update
NAME: my-release
LAST DEPLOYED: Tue Feb 9 13:38:26 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
Credentials:
echo "Username : user"
echo "Password : $(kubectl get secret --namespace default my-release-rabbitmq -o jsonpath="{.data.rabbitmq-password}" | base64 --decode)"
echo "ErLang Cookie : $(kubectl get secret --namespace default my-release-rabbitmq -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 --decode)"
RabbitMQ can be accessed within the cluster on port at my-release-rabbitmq.default.svc.
To access for outside the cluster, perform the following steps:
To Access the RabbitMQ AMQP port:
echo "URL : amqp://127.0.0.1:5672/"
kubectl port-forward --namespace default svc/my-release-rabbitmq 5672:5672
To Access the RabbitMQ Management interface:
echo "URL : http://127.0.0.1:15672/"
kubectl port-forward --namespace default svc/my-release-rabbitmq 15672:15672
Pod is also running fine:
$ kubectl get po
NAME READY STATUS RESTARTS AGE
my-release-rabbitmq-0 1/1 Running 0 2m32s
Container logs:
$ kubectl logs my-release-rabbitmq-0
12:38:48.59
12:38:48.59 Welcome to the Bitnami rabbitmq container
12:38:48.59 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-rabbitmq
12:38:48.59 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-rabbitmq/issues
12:38:48.59
12:38:48.59 INFO ==> ** Starting RabbitMQ setup **
12:38:48.65 INFO ==> Validating settings in RABBITMQ_* env vars..
12:38:48.67 INFO ==> Initializing RabbitMQ...
12:38:48.69 INFO ==> Starting RabbitMQ in background...
12:39:02.64 INFO ==> Stopping RabbitMQ...
12:39:05.61 INFO ==> ** RabbitMQ setup finished! **
12:39:05.63 INFO ==> ** Starting RabbitMQ **
Configuring logger redirection
2021-02-09 12:39:17.817 [debug] <0.284.0> Lager installed handler error_logger_lager_h into error_logger
2021-02-09 12:39:17.847 [debug] <0.287.0> Lager installed handler lager_forwarder_backend into error_logger_lager_event
2021-02-09 12:39:17.847 [debug] <0.290.0> Lager installed handler lager_forwarder_backend into rabbit_log_lager_event
2021-02-09 12:39:17.847 [debug] <0.293.0> Lager installed handler lager_forwarder_backend into rabbit_log_channel_lager_event
2021-02-09 12:39:17.847 [debug] <0.296.0> Lager installed handler lager_forwarder_backend into rabbit_log_connection_lager_event
2021-02-09 12:39:17.847 [debug] <0.299.0> Lager installed handler lager_forwarder_backend into rabbit_log_feature_flags_lager_event
2021-02-09 12:39:17.848 [debug] <0.302.0> Lager installed handler lager_forwarder_backend into rabbit_log_federation_lager_event
2021-02-09 12:39:17.848 [debug] <0.305.0> Lager installed handler lager_forwarder_backend into rabbit_log_ldap_lager_event
2021-02-09 12:39:17.848 [debug] <0.308.0> Lager installed handler lager_forwarder_backend into rabbit_log_mirroring_lager_event
2021-02-09 12:39:17.848 [debug] <0.311.0> Lager installed handler lager_forwarder_backend into rabbit_log_prelaunch_lager_event
2021-02-09 12:39:17.849 [debug] <0.314.0> Lager installed handler lager_forwarder_backend into rabbit_log_queue_lager_event
2021-02-09 12:39:17.849 [debug] <0.317.0> Lager installed handler lager_forwarder_backend into rabbit_log_ra_lager_event
2021-02-09 12:39:17.849 [debug] <0.320.0> Lager installed handler lager_forwarder_backend into rabbit_log_shovel_lager_event
2021-02-09 12:39:17.849 [debug] <0.323.0> Lager installed handler lager_forwarder_backend into rabbit_log_upgrade_lager_event
2021-02-09 12:39:17.878 [info] <0.44.0> Application lager started on node 'rabbit@my-release-rabbitmq-0.my-release-rabbitmq-headless.default.svc.cluster.local'
2021-02-09 12:39:18.317 [debug] <0.280.0> Lager installed handler lager_backend_throttle into lager_event
2021-02-09 12:39:22.535 [info] <0.44.0> Application mnesia started on node 'rabbit@my-release-rabbitmq-0.my-release-rabbitmq-headless.default.svc.cluster.local'
2021-02-09 12:39:22.539 [info] <0.269.0>
Starting RabbitMQ 3.8.9 on Erlang 22.3
Copyright (c) 2007-2020 VMware, Inc. or its affiliates.
Licensed under the MPL 2.0. Website: https://rabbitmq.com
## ## RabbitMQ 3.8.9
## ##
########## Copyright (c) 2007-2020 VMware, Inc. or its affiliates.
###### ##
########## Licensed under the MPL 2.0. Website: https://rabbitmq.com
Doc guides: https://rabbitmq.com/documentation.html
Support: https://rabbitmq.com/contact.html
Tutorials: https://rabbitmq.com/getstarted.html
Monitoring: https://rabbitmq.com/monitoring.html
Logs: <stdout>
Config file(s): /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf
Starting broker...2021-02-09 12:39:22.541 [info] <0.269.0>
node : rabbit@my-release-rabbitmq-0.my-release-rabbitmq-headless.default.svc.cluster.local
home dir : /opt/bitnami/rabbitmq/.rabbitmq
config file(s) : /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf
cookie hash : tUSTZ3DjV8S7GlQpm+L63g==
log(s) : <stdout>
database dir : /bitnami/rabbitmq/mnesia/rabbit@my-release-rabbitmq-0.my-release-rabbitmq-headless.default.svc.cluster.local
2021-02-09 12:39:22.561 [info] <0.269.0> Running boot step pre_boot defined by app rabbit
2021-02-09 12:39:22.561 [info] <0.269.0> Running boot step rabbit_core_metrics defined by app rabbit
2021-02-09 12:39:22.564 [info] <0.269.0> Running boot step rabbit_alarm defined by app rabbit
2021-02-09 12:39:22.574 [info] <0.421.0> Memory high watermark set to 3186 MiB (3340981043 bytes) of 7965 MiB (8352452608 bytes) total
2021-02-09 12:39:22.586 [info] <0.438.0> Enabling free disk space monitoring
2021-02-09 12:39:22.586 [info] <0.438.0> Disk free limit set to 50MB
2021-02-09 12:39:22.591 [info] <0.269.0> Running boot step code_server_cache defined by app rabbit
2021-02-09 12:39:22.592 [info] <0.269.0> Running boot step file_handle_cache defined by app rabbit
2021-02-09 12:39:22.593 [info] <0.452.0> Limiting to approx 1048479 file handles (943629 sockets)
2021-02-09 12:39:22.593 [info] <0.453.0> FHC read buffering: OFF
2021-02-09 12:39:22.593 [info] <0.453.0> FHC write buffering: ON
2021-02-09 12:39:22.594 [info] <0.269.0> Running boot step worker_pool defined by app rabbit
2021-02-09 12:39:22.594 [info] <0.396.0> Will use 2 processes for default worker pool
2021-02-09 12:39:22.594 [info] <0.396.0> Starting worker pool 'worker_pool' with 2 processes in it
2021-02-09 12:39:22.595 [info] <0.269.0> Running boot step database defined by app rabbit
2021-02-09 12:39:22.598 [info] <0.269.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2021-02-09 12:39:22.601 [info] <0.269.0> Successfully synced tables from a peer
2021-02-09 12:39:22.601 [info] <0.269.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2021-02-09 12:39:22.601 [info] <0.269.0> Successfully synced tables from a peer
2021-02-09 12:39:22.628 [info] <0.269.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2021-02-09 12:39:22.628 [info] <0.269.0> Successfully synced tables from a peer
2021-02-09 12:39:22.645 [info] <0.269.0> Will register with peer discovery backend rabbit_peer_discovery_k8s
2021-02-09 12:39:22.717 [info] <0.269.0> Running boot step database_sync defined by app rabbit
2021-02-09 12:39:22.718 [info] <0.269.0> Running boot step feature_flags defined by app rabbit
2021-02-09 12:39:22.719 [info] <0.269.0> Running boot step codec_correctness_check defined by app rabbit
2021-02-09 12:39:22.719 [info] <0.269.0> Running boot step external_infrastructure defined by app rabbit
2021-02-09 12:39:22.719 [info] <0.269.0> Running boot step rabbit_registry defined by app rabbit
2021-02-09 12:39:22.719 [info] <0.269.0> Running boot step rabbit_auth_mechanism_cr_demo defined by app rabbit
2021-02-09 12:39:22.719 [info] <0.269.0> Running boot step rabbit_queue_location_random defined by app rabbit
2021-02-09 12:39:22.720 [info] <0.269.0> Running boot step rabbit_event defined by app rabbit
2021-02-09 12:39:22.720 [info] <0.269.0> Running boot step rabbit_auth_mechanism_amqplain defined by app rabbit
2021-02-09 12:39:22.720 [info] <0.269.0> Running boot step rabbit_auth_mechanism_plain defined by app rabbit
2021-02-09 12:39:22.721 [info] <0.269.0> Running boot step rabbit_exchange_type_direct defined by app rabbit
2021-02-09 12:39:22.721 [info] <0.269.0> Running boot step rabbit_exchange_type_fanout defined by app rabbit
2021-02-09 12:39:22.721 [info] <0.269.0> Running boot step rabbit_exchange_type_headers defined by app rabbit
2021-02-09 12:39:22.722 [info] <0.269.0> Running boot step rabbit_exchange_type_topic defined by app rabbit
2021-02-09 12:39:22.722 [info] <0.269.0> Running boot step rabbit_mirror_queue_mode_all defined by app rabbit
2021-02-09 12:39:22.722 [info] <0.269.0> Running boot step rabbit_mirror_queue_mode_exactly defined by app rabbit
2021-02-09 12:39:22.722 [info] <0.269.0> Running boot step rabbit_mirror_queue_mode_nodes defined by app rabbit
2021-02-09 12:39:22.722 [info] <0.269.0> Running boot step rabbit_priority_queue defined by app rabbit
2021-02-09 12:39:22.722 [info] <0.269.0> Priority queues enabled, real BQ is rabbit_variable_queue
2021-02-09 12:39:22.723 [info] <0.269.0> Running boot step rabbit_queue_location_client_local defined by app rabbit
2021-02-09 12:39:22.723 [info] <0.269.0> Running boot step rabbit_queue_location_min_masters defined by app rabbit
2021-02-09 12:39:22.723 [info] <0.269.0> Running boot step kernel_ready defined by app rabbit
2021-02-09 12:39:22.723 [info] <0.269.0> Running boot step rabbit_sysmon_minder defined by app rabbit
2021-02-09 12:39:22.725 [info] <0.269.0> Running boot step rabbit_epmd_monitor defined by app rabbit
2021-02-09 12:39:22.726 [info] <0.476.0> epmd monitor knows us, inter-node communication (distribution) port: 25672
2021-02-09 12:39:22.727 [info] <0.269.0> Running boot step guid_generator defined by app rabbit
2021-02-09 12:39:22.731 [info] <0.269.0> Running boot step rabbit_node_monitor defined by app rabbit
2021-02-09 12:39:22.732 [info] <0.480.0> Starting rabbit_node_monitor
2021-02-09 12:39:22.733 [info] <0.269.0> Running boot step delegate_sup defined by app rabbit
2021-02-09 12:39:22.734 [info] <0.269.0> Running boot step rabbit_memory_monitor defined by app rabbit
2021-02-09 12:39:22.735 [info] <0.269.0> Running boot step core_initialized defined by app rabbit
2021-02-09 12:39:22.735 [info] <0.269.0> Running boot step upgrade_queues defined by app rabbit
2021-02-09 12:39:22.788 [info] <0.269.0> Running boot step rabbit_connection_tracking defined by app rabbit
2021-02-09 12:39:22.789 [info] <0.269.0> Running boot step rabbit_connection_tracking_handler defined by app rabbit
2021-02-09 12:39:22.789 [info] <0.269.0> Running boot step rabbit_exchange_parameters defined by app rabbit
2021-02-09 12:39:22.790 [info] <0.269.0> Running boot step rabbit_mirror_queue_misc defined by app rabbit
2021-02-09 12:39:22.791 [info] <0.269.0> Running boot step rabbit_policies defined by app rabbit
2021-02-09 12:39:22.792 [info] <0.269.0> Running boot step rabbit_policy defined by app rabbit
2021-02-09 12:39:22.792 [info] <0.269.0> Running boot step rabbit_queue_location_validator defined by app rabbit
2021-02-09 12:39:22.792 [info] <0.269.0> Running boot step rabbit_quorum_memory_manager defined by app rabbit
2021-02-09 12:39:22.792 [info] <0.269.0> Running boot step rabbit_vhost_limit defined by app rabbit
2021-02-09 12:39:22.793 [info] <0.269.0> Running boot step recovery defined by app rabbit
2021-02-09 12:39:22.795 [info] <0.509.0> Making sure data directory '/bitnami/rabbitmq/mnesia/rabbit@my-release-rabbitmq-0.my-release-rabbitmq-headless.default.svc.cluster.local/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L' for vhost '/' exists
2021-02-09 12:39:22.801 [info] <0.509.0> Starting message stores for vhost '/'
2021-02-09 12:39:22.801 [info] <0.513.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_transient": using rabbit_msg_store_ets_index to provide index
2021-02-09 12:39:22.806 [info] <0.509.0> Started message store of type transient for vhost '/'
2021-02-09 12:39:22.807 [info] <0.517.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": using rabbit_msg_store_ets_index to provide index
2021-02-09 12:39:22.813 [info] <0.509.0> Started message store of type persistent for vhost '/'
2021-02-09 12:39:22.817 [info] <0.269.0> Running boot step empty_db_check defined by app rabbit
2021-02-09 12:39:22.817 [info] <0.269.0> Will not seed default virtual host and user: have definitions to load...
2021-02-09 12:39:22.817 [info] <0.269.0> Running boot step rabbit_looking_glass defined by app rabbit
2021-02-09 12:39:22.817 [info] <0.269.0> Running boot step rabbit_core_metrics_gc defined by app rabbit
2021-02-09 12:39:22.818 [info] <0.269.0> Running boot step background_gc defined by app rabbit
2021-02-09 12:39:22.818 [info] <0.269.0> Running boot step connection_tracking defined by app rabbit
2021-02-09 12:39:22.819 [info] <0.269.0> Setting up a table for connection tracking on this node: 'tracked_connection_on_node_rabbit@my-release-rabbitmq-0.my-release-rabbitmq-headless.default.svc.cluster.local'
2021-02-09 12:39:22.820 [info] <0.269.0> Setting up a table for per-vhost connection counting on this node: 'tracked_connection_per_vhost_on_node_rabbit@my-release-rabbitmq-0.my-release-rabbitmq-headless.default.svc.cluster.local'
2021-02-09 12:39:22.821 [info] <0.269.0> Running boot step routing_ready defined by app rabbit
2021-02-09 12:39:22.821 [info] <0.269.0> Running boot step pre_flight defined by app rabbit
2021-02-09 12:39:22.821 [info] <0.269.0> Running boot step notify_cluster defined by app rabbit
2021-02-09 12:39:22.821 [info] <0.269.0> Running boot step networking defined by app rabbit
2021-02-09 12:39:22.821 [info] <0.269.0> Running boot step definition_import_worker_pool defined by app rabbit
2021-02-09 12:39:22.821 [info] <0.396.0> Starting worker pool 'definition_import_pool' with 2 processes in it
2021-02-09 12:39:22.822 [info] <0.269.0> Running boot step cluster_name defined by app rabbit
2021-02-09 12:39:22.823 [info] <0.269.0> Running boot step direct_client defined by app rabbit
2021-02-09 12:39:22.823 [info] <0.44.0> Application rabbit started on node 'rabbit@my-release-rabbitmq-0.my-release-rabbitmq-headless.default.svc.cluster.local'
2021-02-09 12:39:23.498 [info] <0.545.0> Feature flags: list of feature flags found:
2021-02-09 12:39:23.498 [info] <0.545.0> Feature flags: [ ] drop_unroutable_metric
2021-02-09 12:39:23.499 [info] <0.545.0> Feature flags: [ ] empty_basic_get_metric
2021-02-09 12:39:23.499 [info] <0.545.0> Feature flags: [x] implicit_default_bindings
2021-02-09 12:39:23.499 [info] <0.545.0> Feature flags: [x] maintenance_mode_status
2021-02-09 12:39:23.499 [info] <0.545.0> Feature flags: [x] quorum_queue
2021-02-09 12:39:23.499 [info] <0.545.0> Feature flags: [x] virtual_host_metadata
2021-02-09 12:39:23.500 [info] <0.545.0> Feature flags: feature flag states written to disk: yes
2021-02-09 12:39:23.724 [info] <0.545.0> Running boot step rabbit_mgmt_db_handler defined by app rabbitmq_management_agent
2021-02-09 12:39:23.724 [info] <0.545.0> Management plugin: using rates mode 'basic'
2021-02-09 12:39:23.731 [info] <0.44.0> Application rabbitmq_management_agent started on node 'rabbit@my-release-rabbitmq-0.my-release-rabbitmq-headless.default.svc.cluster.local'
2021-02-09 12:39:23.749 [info] <0.44.0> Application cowlib started on node 'rabbit@my-release-rabbitmq-0.my-release-rabbitmq-headless.default.svc.cluster.local'
2021-02-09 12:39:23.765 [info] <0.44.0> Application cowboy started on node 'rabbit@my-release-rabbitmq-0.my-release-rabbitmq-headless.default.svc.cluster.local'
2021-02-09 12:39:23.791 [info] <0.44.0> Application rabbitmq_web_dispatch started on node 'rabbit@my-release-rabbitmq-0.my-release-rabbitmq-headless.default.svc.cluster.local'
2021-02-09 12:39:23.810 [info] <0.44.0> Application amqp_client started on node 'rabbit@my-release-rabbitmq-0.my-release-rabbitmq-headless.default.svc.cluster.local'
2021-02-09 12:39:23.827 [info] <0.545.0> Running boot step rabbit_mgmt_reset_handler defined by app rabbitmq_management
2021-02-09 12:39:23.827 [info] <0.545.0> Running boot step rabbit_management_load_definitions defined by app rabbitmq_management
2021-02-09 12:39:23.864 [info] <0.614.0> Management plugin: HTTP (non-TLS) listener started on port 15672
2021-02-09 12:39:23.864 [info] <0.720.0> Statistics database started.
2021-02-09 12:39:23.864 [info] <0.719.0> Starting worker pool 'management_worker_pool' with 3 processes in it
2021-02-09 12:39:23.865 [info] <0.44.0> Application rabbitmq_management started on node 'rabbit@my-release-rabbitmq-0.my-release-rabbitmq-headless.default.svc.cluster.local'
2021-02-09 12:39:23.966 [info] <0.545.0> Running boot step ldap_pool defined by app rabbitmq_auth_backend_ldap
2021-02-09 12:39:23.966 [info] <0.396.0> Starting worker pool 'ldap_pool' with 64 processes in it
2021-02-09 12:39:23.972 [info] <0.44.0> Application eldap started on node 'rabbit@my-release-rabbitmq-0.my-release-rabbitmq-headless.default.svc.cluster.local'
2021-02-09 12:39:23.972 [warning] <0.796.0> LDAP plugin loaded, but rabbit_auth_backend_ldap is not in the list of auth_backends. LDAP auth will not work.
2021-02-09 12:39:23.973 [info] <0.44.0> Application rabbitmq_auth_backend_ldap started on node 'rabbit@my-release-rabbitmq-0.my-release-rabbitmq-headless.default.svc.cluster.local'
2021-02-09 12:39:23.990 [info] <0.802.0> Peer discovery: enabling node cleanup (will only log warnings). Check interval: 10 seconds.
2021-02-09 12:39:23.991 [info] <0.44.0> Application rabbitmq_peer_discovery_common started on node 'rabbit@my-release-rabbitmq-0.my-release-rabbitmq-headless.default.svc.cluster.local'
2021-02-09 12:39:24.011 [info] <0.545.0> Ready to start client connection listeners
2021-02-09 12:39:24.011 [info] <0.44.0> Application rabbitmq_peer_discovery_k8s started on node 'rabbit@my-release-rabbitmq-0.my-release-rabbitmq-headless.default.svc.cluster.local'
2021-02-09 12:39:24.016 [info] <0.822.0> started TCP listener on [::]:5672
2021-02-09 12:39:24.185 [info] <0.545.0> Server startup complete; 6 plugins started.
* rabbitmq_peer_discovery_k8s
* rabbitmq_peer_discovery_common
* rabbitmq_auth_backend_ldap
* rabbitmq_management
* rabbitmq_web_dispatch
* rabbitmq_management_agent
completed with 6 plugins.
2021-02-09 12:39:24.186 [info] <0.545.0> Resetting node maintenance status
@FabianKramm Weird, I did the same process again (reset k8s cluster, follow the above steps) and this time it did install correctly. Thanks.
Fixed in 1.7.0-beta1
Helm fails when trying to install https://github.com/bitnami/charts/tree/master/bitnami/rabbitmq on a fresh vcluster. Also, during/after installing the above, `kubectl get pods` errors with "Unable to connect to the server: net/http: TLS handshake timeout". Also the port forwarding of port 9898 to the underlying pod for the web UI failed -- the browser disconnected and wouldn't reconnect. The same chart installed fine both on straight k8s and in a loft space.
This was on a MacBook Pro running k8s in the latest version of Docker Desktop and a fresh download of `loft`.