@nivedhan-murugavel can you provide more details? Is your controller-manager pod constantly being restarted because it is failing health checks?
Can you post the `kubectl describe` output for the catalog controller manager pod?
You could also post your controller manager logs, i.e. the output from `kubectl logs -l app=catalog-catalog-controller-manager -n catalog`.
@jboyd01 Thanks for the reply. I am getting the following error in the controller manager log: `error running controllers: unable to start service-catalog controller: API GroupVersion {"servicecatalog.k8s.io" "v1beta1" "clusterservicebrokers"} is not available`
I0610 07:09:30.930792 1 round_trippers.go:405] GET https://100.64.0.1:443/apis/automationbroker.io/v1 200 OK in 0 milliseconds
I0610 07:09:30.930819 1 round_trippers.go:411] Response Headers:
I0610 07:09:30.930826 1 round_trippers.go:414] Content-Type: application/json
I0610 07:09:30.930830 1 round_trippers.go:414] Content-Length: 804
I0610 07:09:30.930834 1 round_trippers.go:414] Date: Sun, 10 Jun 2018 07:09:30 GMT
I0610 07:09:30.930848 1 request.go:874] Response Body: {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"automationbroker.io/v1","resources":[{"name":"jobstates","singularName":"jobstate","namespaced":true,"kind":"JobState","verbs":["delete","deletecollection","get","list","patch","create","update","watch"]},{"name":"servicebindings","singularName":"servicebinding","namespaced":true,"kind":"ServiceBinding","verbs":["delete","deletecollection","get","list","patch","create","update","watch"]},{"name":"serviceinstances","singularName":"serviceinstance","namespaced":true,"kind":"ServiceInstance","verbs":["delete","deletecollection","get","list","patch","create","update","watch"]},{"name":"bundles","singularName":"bundle","namespaced":true,"kind":"Bundle","verbs":["delete","deletecollection","get","list","patch","create","update","watch"]}]}
F0610 07:09:30.931273 1 controller_manager.go:232] error running controllers: unable to start service-catalog controller: API GroupVersion {"servicecatalog.k8s.io" "v1beta1" "clusterservicebrokers"} is not available;
found map[schema.GroupVersionResource]bool{schema.GroupVersionResource{Group:"", Version:"v1", Resource:"namespaces"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"nodes/status"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"persistentvolumes/status"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1beta1", Resource:"deployments"}:true, schema.GroupVersionResource{Group:"networking.k8s.io", Version:"v1", Resource:"networkpolicies"}:true, schema.GroupVersionResource{Group:"extensions", Version:"v1beta1", Resource:"deployments/status"}:true, schema.GroupVersionResource{Group:"extensions", Version:"v1beta1", Resource:"replicasets"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1", Resource:"daemonsets"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1", Resource:"deployments"}:true, schema.GroupVersionResource{Group:"batch", Version:"v1", Resource:"jobs/status"}:true, schema.GroupVersionResource{Group:"automationbroker.io", Version:"v1", Resource:"servicebindings"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1beta1", Resource:"statefulsets/status"}:true, schema.GroupVersionResource{Group:"authorization.k8s.io", Version:"v1beta1", Resource:"localsubjectaccessreviews"}:true, schema.GroupVersionResource{Group:"authorization.k8s.io", Version:"v1beta1", Resource:"subjectaccessreviews"}:true, schema.GroupVersionResource{Group:"autoscaling", Version:"v1", Resource:"horizontalpodautoscalers"}:true, schema.GroupVersionResource{Group:"policy", Version:"v1beta1", Resource:"poddisruptionbudgets"}:true, schema.GroupVersionResource{Group:"rbac.authorization.k8s.io", Version:"v1beta1", Resource:"roles"}:true, schema.GroupVersionResource{Group:"extensions", Version:"v1beta1", Resource:"deployments"}:true, schema.GroupVersionResource{Group:"extensions", Version:"v1beta1", Resource:"ingresses/status"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1beta2", Resource:"controllerrevisions"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1beta2", Resource:"statefulsets/status"}:true, schema.GroupVersionResource{Group:"authorization.k8s.io", Version:"v1", Resource:"subjectaccessreviews"}:true, schema.GroupVersionResource{Group:"authorization.k8s.io", Version:"v1beta1", Resource:"selfsubjectrulesreviews"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"services"}:true, schema.GroupVersionResource{Group:"extensions", Version:"v1beta1", Resource:"networkpolicies"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1", Resource:"deployments/scale"}:true, schema.GroupVersionResource{Group:"rbac.authorization.k8s.io", Version:"v1beta1", Resource:"clusterroles"}:true, schema.GroupVersionResource{Group:"admissionregistration.k8s.io", Version:"v1beta1", Resource:"mutatingwebhookconfigurations"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"replicationcontrollers/status"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"resourcequotas/status"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"services/proxy"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1beta1", Resource:"deployments/status"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1beta2", Resource:"deployments"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"componentstatuses"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"pods/attach"}:true, schema.GroupVersionResource{Group:"", Version:"v1", 
Resource:"pods/binding"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"secrets"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"services/status"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1", Resource:"daemonsets/status"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"configmaps"}:true, schema.GroupVersionResource{Group:"autoscaling", Version:"v2beta1", Resource:"horizontalpodautoscalers"}:true, schema.GroupVersionResource{Group:"rbac.authorization.k8s.io", Version:"v1", Resource:"clusterroles"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"persistentvolumes"}:true, schema.GroupVersionResource{Group:"apiextensions.k8s.io", Version:"v1beta1", Resource:"customresourcedefinitions"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1", Resource:"statefulsets/status"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1beta2", Resource:"replicasets/status"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1beta2", Resource:"statefulsets"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1beta1", Resource:"statefulsets"}:true, schema.GroupVersionResource{Group:"extensions", Version:"v1beta1", Resource:"replicasets/scale"}:true, schema.GroupVersionResource{Group:"autoscaling", Version:"v2beta1", Resource:"horizontalpodautoscalers/status"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"namespaces/status"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"pods/exec"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"pods/log"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"pods/portforward"}:true, schema.GroupVersionResource{Group:"extensions", Version:"v1beta1", Resource:"daemonsets/status"}:true, schema.GroupVersionResource{Group:"extensions", Version:"v1beta1", Resource:"deployments/rollback"}:true, schema.GroupVersionResource{Group:"certificates.k8s.io", Version:"v1beta1", Resource:"certificatesigningrequests"}:true, schema.GroupVersionResource{Group:"apiextensions.k8s.io", Version:"v1beta1", Resource:"customresourcedefinitions/status"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1beta2", Resource:"daemonsets/status"}:true, schema.GroupVersionResource{Group:"authentication.k8s.io", Version:"v1", Resource:"tokenreviews"}:true, schema.GroupVersionResource{Group:"apiregistration.k8s.io", Version:"v1beta1", Resource:"apiservices/status"}:true, schema.GroupVersionResource{Group:"extensions", Version:"v1beta1", Resource:"deployments/scale"}:true, schema.GroupVersionResource{Group:"automationbroker.io", Version:"v1", Resource:"serviceinstances"}:true, schema.GroupVersionResource{Group:"extensions", Version:"v1beta1", Resource:"replicationcontrollers/scale"}:true, schema.GroupVersionResource{Group:"rbac.authorization.k8s.io", Version:"v1", Resource:"clusterrolebindings"}:true, schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"storageclasses"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"replicationcontrollers"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1", Resource:"replicasets/scale"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1beta2", Resource:"replicasets"}:true, schema.GroupVersionResource{Group:"authorization.k8s.io", Version:"v1", Resource:"selfsubjectaccessreviews"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"pods/proxy"}:true, schema.GroupVersionResource{Group:"apps", 
Version:"v1beta2", Resource:"deployments/status"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1beta2", Resource:"replicasets/scale"}:true, schema.GroupVersionResource{Group:"authentication.k8s.io", Version:"v1beta1", Resource:"tokenreviews"}:true, schema.GroupVersionResource{Group:"certificates.k8s.io", Version:"v1beta1", Resource:"certificatesigningrequests/approval"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"nodes/proxy"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"resourcequotas"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1", Resource:"deployments/status"}:true, schema.GroupVersionResource{Group:"batch", Version:"v1beta1", Resource:"cronjobs"}:true, schema.GroupVersionResource{Group:"rbac.authorization.k8s.io", Version:"v1beta1", Resource:"clusterrolebindings"}:true, schema.GroupVersionResource{Group:"policy", Version:"v1beta1", Resource:"poddisruptionbudgets/status"}:true, schema.GroupVersionResource{Group:"automationbroker.io", Version:"v1", Resource:"jobstates"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"pods/status"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1", Resource:"replicasets"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1", Resource:"replicasets/status"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1", Resource:"statefulsets/scale"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1beta2", Resource:"daemonsets"}:true, schema.GroupVersionResource{Group:"events.k8s.io", Version:"v1beta1", Resource:"events"}:true, schema.GroupVersionResource{Group:"extensions", Version:"v1beta1", Resource:"podsecuritypolicies"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1beta1", Resource:"deployments/scale"}:true, schema.GroupVersionResource{Group:"rbac.authorization.k8s.io", Version:"v1", Resource:"roles"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"podtemplates"}:true, schema.GroupVersionResource{Group:"extensions", Version:"v1beta1", Resource:"replicasets/status"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1beta1", Resource:"deployments/rollback"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1beta1", Resource:"statefulsets/scale"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1beta2", Resource:"statefulsets/scale"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"namespaces/finalize"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"persistentvolumeclaims"}:true, schema.GroupVersionResource{Group:"authorization.k8s.io", Version:"v1beta1", Resource:"selfsubjectaccessreviews"}:true, schema.GroupVersionResource{Group:"extensions", Version:"v1beta1", Resource:"replicationcontrollers"}:true, schema.GroupVersionResource{Group:"autoscaling", Version:"v1", Resource:"horizontalpodautoscalers/status"}:true, schema.GroupVersionResource{Group:"rbac.authorization.k8s.io", Version:"v1beta1", Resource:"rolebindings"}:true, schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1beta1", Resource:"storageclasses"}:true, schema.GroupVersionResource{Group:"automationbroker.io", Version:"v1", Resource:"bundles"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"events"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"limitranges"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"nodes"}:true, schema.GroupVersionResource{Group:"apiregistration.k8s.io", Version:"v1beta1", 
Resource:"apiservices"}:true, schema.GroupVersionResource{Group:"extensions", Version:"v1beta1", Resource:"daemonsets"}:true, schema.GroupVersionResource{Group:"batch", Version:"v1", Resource:"jobs"}:true, schema.GroupVersionResource{Group:"extensions", Version:"v1beta1", Resource:"ingresses"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1beta2", Resource:"deployments/scale"}:true, schema.GroupVersionResource{Group:"authorization.k8s.io", Version:"v1", Resource:"localsubjectaccessreviews"}:true, schema.GroupVersionResource{Group:"admissionregistration.k8s.io", Version:"v1beta1", Resource:"validatingwebhookconfigurations"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"serviceaccounts"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1", Resource:"controllerrevisions"}:true, schema.GroupVersionResource{Group:"authorization.k8s.io", Version:"v1", Resource:"selfsubjectrulesreviews"}:true, schema.GroupVersionResource{Group:"batch", Version:"v1beta1", Resource:"cronjobs/status"}:true, schema.GroupVersionResource{Group:"rbac.authorization.k8s.io", Version:"v1", Resource:"rolebindings"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"persistentvolumeclaims/status"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1beta1", Resource:"controllerrevisions"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"bindings"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"endpoints"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"pods"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"pods/eviction"}:true, schema.GroupVersionResource{Group:"", Version:"v1", Resource:"replicationcontrollers/scale"}:true, schema.GroupVersionResource{Group:"apps", Version:"v1", Resource:"statefulsets"}:true, schema.GroupVersionResource{Group:"certificates.k8s.io", Version:"v1beta1", Resource:"certificatesigningrequests/status"}:true}
But my apiserver log says that clusterservicebrokers are stored in servicecatalog.k8s.io/v1beta1.
Here is my apiserver log:
I0610 06:56:50.416961 1 run_server.go:65] Creating storage factory
I0610 06:56:50.417011 1 run_server.go:103] Completing API server configuration
I0610 06:56:50.418016 1 etcd_config.go:89] Created skeleton API server
I0610 06:56:50.418030 1 etcd_config.go:100] Installing API groups
I0610 06:56:50.418062 1 storage_factory.go:285] storing {servicecatalog.k8s.io clusterservicebrokers} in servicecatalog.k8s.io/v1beta1, reading as servicecatalog.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"/registry", ServerList:[]string{"http://localhost:2379"}, KeyFile:"", CertFile:"", CAFile:"", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0610 06:56:50.418111 1 storage_factory.go:285] storing {servicecatalog.k8s.io clusterserviceclasses} in servicecatalog.k8s.io/v1beta1, reading as servicecatalog.k8s.io/internal from storagebackend.Config{Type:"", Prefix:"/registry", ServerList:[]string{"http://localhost:2379"}, KeyFile:"", CertFile:"", CAFile:"", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000} I0610 06:56:50.418137 1 storage_factory.go:285] storing {servicecatalog.k8s.io clusterserviceplans} in servicecatalog.k8s.io/v1beta1, reading as servicecatalog.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"/registry", ServerList:[]string{"http://localhost:2379"}, KeyFile:"", CertFile:"", CAFile:"", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000} I0610 06:56:50.418155 1 storage_factory.go:285] storing {servicecatalog.k8s.io serviceinstances} in servicecatalog.k8s.io/v1beta1, reading as servicecatalog.k8s.io/internal from storagebackend.Config{Type:"", Prefix:"/registry", ServerList:[]string{"http://localhost:2379"}, KeyFile:"", CertFile:"", CAFile:"", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000} I0610 06:56:50.418172 1 storage_factory.go:285] storing {servicecatalog.k8s.io servicebindings} in servicecatalog.k8s.io/v1beta1, reading as servicecatalog.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"/registry", ServerList:[]string{"http://localhost:2379"}, KeyFile:"", CertFile:"", CAFile:"", Quorum:true, Paging:true, DeserializationCacheSize:0, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
And I don't see servicecatalog.k8s.io in `kubectl api-versions`:
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
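For comparison, once the aggregated API is registered and healthy I'd expect the group to show up in that list; a quick check would be something like:
```
kubectl api-versions | grep servicecatalog
# expected once aggregation works:
# servicecatalog.k8s.io/v1beta1
```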
@jboyd01 Kindly help me understand what I am missing here.
@nivedhan-murugavel I'm working on getting a new release of our helm chart pushed out (for v0.1.21), which may fix what you are seeing (it has a fix for missing CA certs that are causing healthz checks to fail on the controller manager).
I'll ping back when it's ready to try.
@carolynvs That would really help. Thanks a lot.
When can I expect it to be fixed, is there any ETA? And is there a beta version of your fix that I can try?
@nivedhan-murugavel I tested v0.1.21 and am still able to reproduce this behavior. :-( So no fix yet.
On AKS I am seeing that the controller manager deployment is actually healthy; the /healthz liveness and readiness probes are just so flaky that k8s is restarting the pod over and over. After editing the deployment manually and deleting those probes, service catalog is working fine for me.
If you try that out, I would really appreciate knowing if everything works fine for you, or if there are still problems.
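If it helps, the same edit can be done with a patch instead of hand-editing; a rough sketch (this assumes the default release/namespace names from this thread, and that the probed container is the first container in each deployment):
```
# Drop the probes so kubelet stops restarting the pods.
kubectl -n catalog patch deployment catalog-catalog-controller-manager --type=json -p='[
  {"op":"remove","path":"/spec/template/spec/containers/0/livenessProbe"},
  {"op":"remove","path":"/spec/template/spec/containers/0/readinessProbe"}]'

# Same for the apiserver deployment (container 0 is the apiserver; etcd is container 1).
kubectl -n catalog patch deployment catalog-catalog-apiserver --type=json -p='[
  {"op":"remove","path":"/spec/template/spec/containers/0/livenessProbe"},
  {"op":"remove","path":"/spec/template/spec/containers/0/readinessProbe"}]'
```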
@carolynvs If I remove the readiness and liveness probes from apiserver-deployment.yaml and controller-manager-deployment.yaml, my apiserver and controller-manager are running, but I am still not seeing servicecatalog.k8s.io/v1beta1 in `kubectl api-versions`. And when I apply the service broker file I get this error: `error: unable to recognize "broker-resource.yaml": no matches for kind "ClusterServiceBroker" in version "servicecatalog.k8s.io/v1beta1"`
I believe servicecatalog.k8s.io/v1beta1 will get added to the API list (`kubectl api-versions`) once both the apiserver and controller manager are running successfully. Correct me if I am wrong and help me understand what I am missing here.
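(For reference, the broker registration itself is just a small manifest along these lines; the name and URL below are placeholders, not the actual contents of broker-resource.yaml, and it can only be applied once servicecatalog.k8s.io/v1beta1 is actually being served.)
```
kubectl apply -f - <<EOF
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServiceBroker
metadata:
  name: example-broker            # placeholder name
spec:
  url: https://example-broker.brokers.svc.cluster.local   # placeholder URL
EOF
```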
@nivedhan-murugavel can you post the service catalog API Server and Controller Manager logs? i.e.,
kubectl logs -l app=catalog-catalog-controller-manager -n catalog
and
kubectl logs -l app=catalog-catalog-apiserver -n catalog -c apiserver
You might want to put these in a gist, they can be quite large.
Ok, this doesn't appear to be related to problems with the health check then. Hopefully with the logs, we can figure it out.
@jboyd01 @carolynvs I have added gists for the controller and apiserver logs.
Controller manager log gist -- https://gist.github.com/nivedhan-murugavel/ac08dcf2e5e6648e748524d052885cdd
Apiserver log gist -- https://gist.github.com/nivedhan-murugavel/98529f61370ffc3f87c782e91b38ad51
Hi @nivedhan-murugavel, what's the output of `kubectl get apiservices`?
The `APIService` resource tells the aggregator to register your aggregated API. There should be an entry for the service catalog API group. If there isn't, none of the API aggregation stuff is going to work.
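A quick way to inspect it, with the fields you'd roughly expect from the chart shown as comments for orientation:
```
kubectl get apiservice v1beta1.servicecatalog.k8s.io -o yaml
# spec:
#   group: servicecatalog.k8s.io
#   version: v1beta1
#   service:
#     name: catalog-catalog-apiserver
#     namespace: catalog
# status:
#   conditions:
#   - type: Available
#     status: "True"    # anything other than True means aggregation is broken
```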
Hi @pmorie, this is my `kubectl get apiservices` output:
NAME AGE
v1. 2d
v1.apps 2d
v1.authentication.k8s.io 2d
v1.authorization.k8s.io 2d
v1.autoscaling 2d
v1.batch 2d
v1.networking.k8s.io 2d
v1.rbac.authorization.k8s.io 2d
v1.storage.k8s.io 2d
v1beta1.admissionregistration.k8s.io 2d
v1beta1.apiextensions.k8s.io 2d
v1beta1.apps 2d
v1beta1.authentication.k8s.io 2d
v1beta1.authorization.k8s.io 2d
v1beta1.batch 2d
v1beta1.certificates.k8s.io 2d
v1beta1.events.k8s.io 2d
v1beta1.extensions 2d
v1beta1.policy 2d
v1beta1.rbac.authorization.k8s.io 2d
v1beta1.servicecatalog.k8s.io 2h
v1beta1.storage.k8s.io 2d
v1beta2.apps 2d
v2beta1.autoscaling 2d
hmmm, servicecatalog is in there. The API server and controller logs don't have issues that jump out at me, at least. (Thanks for formatting @MHBauer!)
actually, @nivedhan-murugavel, this is different from what you posted 2 days ago in https://github.com/kubernetes-incubator/service-catalog/issues/2100#issuecomment-396129317
Can you try `kubectl get clusterservicebrokers` and `kubectl get clusterserviceclasses`?
@jboyd01 I am really sorry for that confusion. I get this error when I run `kubectl get clusterservicebrokers`:
error: the server doesn't have a resource type "clusterservicebrokers"
Thanks @MHBauer for formatting :)
@nivedhan-murugavel, no problem. Two days ago you ran that `get apiservices` command and catalog wasn't present in the output, but now it is. Interesting that Kubernetes is saying it doesn't know about clusterservicebrokers, though.
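One more thing worth checking is what the aggregator itself returns for the group's discovery endpoint, something like:
```
kubectl get --raw /apis/servicecatalog.k8s.io/v1beta1
# A healthy registration returns an APIResourceList that includes
# clusterservicebrokers; broken aggregation typically returns
# "the server is currently unable to handle the request" (503).
```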
@jboyd01 Yeah. Because of this I am not able to use the AWS service broker :(
@jboyd01 Attached is the output of `kubectl describe apiservices`:
https://gist.github.com/nivedhan-murugavel/730892748a5c53d5eeee1d77172aba21
Really interesting details from the describe apiservices (timeout):
Name: v1beta1.servicecatalog.k8s.io
Namespace:
Labels: <none>
Annotations: <none>
API Version: apiregistration.k8s.io/v1beta1
Kind: APIService
Metadata:
Creation Timestamp: 2018-06-12T15:21:58Z
Resource Version: 227770
Self Link: /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.servicecatalog.k8s.io
UID: 58ee2df7-6e54-11e8-897d-0291b40d66f8
Spec:
Ca Bundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwemRtTXQKWTJGMExXTmhNQjRYRFRFNE1EWXhNakUxTWpFMU4xb1hEVEk0TURZd09URTFNakUxTjFvd0ZURVRNQkVHQTFVRQpBeE1LYzNaakxXTmhkQzFqWVRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS1lKCmg4WGpRdFpnS3YrMzN2NjZneEJJK0VEV2dBOTFmWDJSanpVNjRUbVpMN1RyTFozaVFCdzhFckROVmgzMDQwb3IKS1NjUjdTRkVpRHpKT2p1R2JGeXF2N0NITmVPT3VkVVdLTy9TelpzZXQ4Zks2cU5ZK1RaVzdXYmdPTTAzVmg5cAp1dlYvUFRZK1RwWmx4ZkF2NWtWbDBwUTRyZjZhUExWY21mbFcrUGExS25qSUxKZU9KUlgyeEF5TDBCU1lBSWxCCmxzUGpzV2NWaFpyeHcwZ1ZWUjJ1SW1SRG96K2JYTVJGcm1Xbit2TU0vZFRTWlh6YWRYUWhpT1NzOXZVazVkU1cKRkJGNmRQVG9JOFp0UXZpL0NmRnpPTVAwZkNmZitjMEdydFJPZjlEN2d4L3Z6QkJ2a2ZDZU5Qa1lnK0NqUm0wLwpiY1Y0aUtOOWozUHdYMGdjRUpVQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUIwR0ExVWRKUVFXCk1CUUdDQ3NHQVFVRkJ3TUJCZ2dyQmdFRkJRY0RBakFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFBRXRWTnJaL1U2eGJYSVlnbzR1RHJMcFQ5OHR6a3ViUzlROHlMSkpmTTVUeUZpZW90QQp2aGNzQ1FKN1h4ZW5YOWM0K0lGRlVNelMrTkNOV093VnBBSDkwNjVZbXlJTHhVbERMSTBkeWE5dytudGtDZC8vCnZucmZ2bVNKSks2ZmwyR0pkbHA3YU9pYk12bHkxTEpHcmFrZlJlY2pNVlI3dlFkM3RlTjFBaDhUZ1hNdlROYjMKanM2U3FuM3RZS3JUVzdTWmZpRW43bnZRd0FBU21XMmlobFd4VlJVdmlhaktFc0dGb0NPcHJIeEhoTmEvaDF6YwpmSzJOS1Jsb3ZNK2tSWXorS09iZlVVZGwxWUF3QmIwb25uZC9ERy96KytCSTcyaEV5Ry8xT3pOanUxclZBVXRPCmE2T01hRCtOSzVjaHBycXpwUkdZNmNLZHd3VU05ZmY0aGV1NQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
Group: servicecatalog.k8s.io
Group Priority Minimum: 10000
Service:
Name: catalog-catalog-apiserver
Namespace: catalog
Version: v1beta1
Version Priority: 20
Status:
Conditions:
Last Transition Time: 2018-06-12T15:22:16Z
Message: no response from https://100.96.1.21:8443: Get https://100.96.1.21:8443: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Reason: FailedDiscoveryCheck
Status: False
Type: Available
Events: <none>
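That timeout means the aggregator (the main kube-apiserver) could not reach the catalog apiserver pod. One way to reproduce the check by hand is to curl the pod IP from inside the cluster; a rough sketch (the debug image choice is arbitrary, and the IP is just the one reported in the status above):
```
kubectl run curl-debug --rm -it --restart=Never --image=curlimages/curl -- \
  curl -k -sv https://100.96.1.21:8443/apis/servicecatalog.k8s.io/v1beta1
# Note: the real discovery check originates from the kube-apiserver on the master,
# so reachability from the master matters as well, not just pod-to-pod.
```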
@nivedhan-murugavel Did you add back the health checks after we figured out that wasn't the problem? I want to make sure that our poking around hasn't masked the root problem.
@carolynvs I didn't add that back. I will add it and share the output now.
@carolynvs I have added the readiness and liveness probes back to the apiserver-deployment.yaml and controller-manager.yaml files. I started seeing CrashLoopBackOff for the apiserver and controller manager.
Logs for controller-manager after adding the readiness and liveness probes -- https://gist.github.com/nivedhan-murugavel/25940e2479984d54347260b774708f9b
Logs for apiserver after adding the readiness and liveness probes -- https://gist.github.com/nivedhan-murugavel/83d148e671af4d042cf9f541bb019eaf
kubectl describe apiservices -- https://gist.github.com/nivedhan-murugavel/fcf9f54a50c81c8f3ee603d462d88d68
Let me know if you need more information. Thanks in advance
healthz check etcd failed: etcd failed to reach any server
Can you run a `kubectl describe` on the service catalog apiserver pod? Also maybe a `kubectl get pods --all-namespaces` to make sure that DNS and other key stuff is up.
kubectl describe pod catalog-catalog-apiserver-764dc7dbcd-646v7 -n catalog
```
Namespace: catalog
Node: ip-172-31-28-242.us-west-2.compute.internal/172.31.28.242
Start Time: Tue, 12 Jun 2018 18:38:35 +0000
Labels: app=catalog-catalog-apiserver
chart=catalog-0.1.21
heritage=Tiller
pod-template-hash=3208738678
release=catalog
releaseRevision=1
Annotations: <none>
Status: Running
IP: 100.96.2.20
Controlled By: ReplicaSet/catalog-catalog-apiserver-764dc7dbcd
Containers:
apiserver:
Container ID: docker://4f8eb9f7fa72721c4fef0bf09ff7ac15e5c7d5f3f4cec125ef5aa9009a70c23f
Image: quay.io/kubernetes-service-catalog/service-catalog:v0.1.21
Image ID: docker-pullable://quay.io/kubernetes-service-catalog/service-catalog@sha256:86ef0800d66ec0638579e4e88bf98a1dcfd4fc596f190c745d486acaae8feb21
Port: 8443/TCP
Host Port: 0/TCP
Args:
apiserver
--enable-admission-plugins
KubernetesNamespaceLifecycle,DefaultServicePlan,ServiceBindingsLifecycle,ServicePlanChangeValidator,BrokerAuthSarCheck
--secure-port
8443
--storage-type
etcd
--etcd-servers
http://localhost:2379
-v
10
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 12 Jun 2018 18:53:07 +0000
Finished: Tue, 12 Jun 2018 18:53:36 +0000
Ready: False
Restart Count: 9
Limits:
cpu: 100m
memory: 30Mi
Requests:
cpu: 100m
memory: 20Mi
Liveness: http-get https://:8443/healthz delay=10s timeout=2s period=10s #success=1 #failure=3
Readiness: http-get https://:8443/healthz delay=10s timeout=2s period=10s #success=1 #failure=1
Environment: <none>
Mounts:
/var/run/kubernetes-service-catalog from apiserver-cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from service-catalog-apiserver-token-4fsgg (ro)
etcd:
Container ID: docker://5532ae6656461c165c40146c02312c7eea7ec6e9f188541b608ea8e2416c5a28
Image: quay.io/coreos/etcd:latest
Image ID: docker-pullable://quay.io/coreos/etcd@sha256:5815c7c7040e3dd6f5b18175fc6fb5e526c075c4dfd5cbd01dddb6a62e6e6bf0
Port: 2379/TCP
Host Port: 0/TCP
Command:
/usr/local/bin/etcd
--listen-client-urls
http://0.0.0.0:2379
--advertise-client-urls
http://localhost:2379
State: Running
Started: Tue, 12 Jun 2018 18:38:39 +0000
Ready: True
Restart Count: 0
Limits:
cpu: 100m
memory: 40Mi
Requests:
cpu: 100m
memory: 30Mi
Liveness: http-get http://:2379/health delay=10s timeout=2s period=10s #success=1 #failure=3
Readiness: http-get http://:2379/health delay=10s timeout=2s period=10s #success=1 #failure=1
Environment:
ETCD_DATA_DIR: /etcd-data-dir
Mounts:
/etcd-data-dir from etcd-data-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from service-catalog-apiserver-token-4fsgg (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
apiserver-cert:
Type: Secret (a volume populated by a Secret)
SecretName: catalog-catalog-apiserver-cert
Optional: false
etcd-data-dir:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
service-catalog-apiserver-token-4fsgg:
Type: Secret (a volume populated by a Secret)
SecretName: service-catalog-apiserver-token-4fsgg
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default-scheduler Successfully assigned catalog-catalog-apiserver-764dc7dbcd-646v7 to ip-172-31-28-242.us-west-2.compute.internal
Normal SuccessfulMountVolume 18m kubelet, ip-172-31-28-242.us-west-2.compute.internal MountVolume.SetUp succeeded for volume "etcd-data-dir"
Normal SuccessfulMountVolume 18m kubelet, ip-172-31-28-242.us-west-2.compute.internal MountVolume.SetUp succeeded for volume "apiserver-cert"
Normal SuccessfulMountVolume 18m kubelet, ip-172-31-28-242.us-west-2.compute.internal MountVolume.SetUp succeeded for volume "service-catalog-apiserver-token-4fsgg"
Normal Pulling 18m kubelet, ip-172-31-28-242.us-west-2.compute.internal pulling image "quay.io/coreos/etcd:latest"
Normal Started 18m kubelet, ip-172-31-28-242.us-west-2.compute.internal Started container
Normal Pulled 18m kubelet, ip-172-31-28-242.us-west-2.compute.internal Successfully pulled image "quay.io/coreos/etcd:latest"
Normal Created 18m kubelet, ip-172-31-28-242.us-west-2.compute.internal Created container
Normal Pulling 18m (x2 over 18m) kubelet, ip-172-31-28-242.us-west-2.compute.internal pulling image "quay.io/kubernetes-service-catalog/service-catalog:v0.1.21"
Normal Killing 18m kubelet, ip-172-31-28-242.us-west-2.compute.internal Killing container with id docker://apiserver:Container failed liveness probe.. Container will be killed and recreated.
Normal Created 18m (x2 over 18m) kubelet, ip-172-31-28-242.us-west-2.compute.internal Created container
Normal Started 18m (x2 over 18m) kubelet, ip-172-31-28-242.us-west-2.compute.internal Started container
Normal Pulled 18m (x2 over 18m) kubelet, ip-172-31-28-242.us-west-2.compute.internal Successfully pulled image "quay.io/kubernetes-service-catalog/service-catalog:v0.1.21"
Warning Unhealthy 17m (x5 over 18m) kubelet, ip-172-31-28-242.us-west-2.compute.internal Readiness probe failed: HTTP probe failed with statuscode: 500
Warning BackOff 8m (x32 over 16m) kubelet, ip-172-31-28-242.us-west-2.compute.internal Back-off restarting failed container
Warning Unhealthy 3m (x21 over 18m) kubelet, ip-172-31-28-242.us-west-2.compute.internal Liveness probe failed: HTTP probe failed with statuscode: 500
```
kubectl get pods --all-namespaces
```
NAMESPACE NAME READY STATUS RESTARTS AGE
catalog catalog-catalog-apiserver-764dc7dbcd-646v7 1/2 Running 10 20m
catalog catalog-catalog-controller-manager-58559dbcd5-njhwz 0/1 CrashLoopBackOff 9 20m
kube-system dns-controller-5bfc54fd97-fjcnl 1/1 Running 0 1d
kube-system etcd-server-events-ip-172-31-25-160.us-west-2.compute.internal 1/1 Running 0 1d
kube-system etcd-server-ip-172-31-25-160.us-west-2.compute.internal 1/1 Running 0 1d
kube-system kube-apiserver-ip-172-31-25-160.us-west-2.compute.internal 1/1 Running 0 1d
kube-system kube-controller-manager-ip-172-31-25-160.us-west-2.compute.internal 1/1 Running 2 1d
kube-system kube-dns-7785f4d7dc-hgdzm 3/3 Running 0 2d
kube-system kube-dns-7785f4d7dc-rmnjk 3/3 Running 0 2d
kube-system kube-dns-autoscaler-787d59df8f-nz7fk 1/1 Running 0 2d
kube-system kube-proxy-ip-172-31-23-227.us-west-2.compute.internal 1/1 Running 0 2d
kube-system kube-proxy-ip-172-31-25-160.us-west-2.compute.internal 1/1 Running 0 1d
kube-system kube-proxy-ip-172-31-28-242.us-west-2.compute.internal 1/1 Running 0 2d
kube-system kube-scheduler-ip-172-31-25-160.us-west-2.compute.internal 1/1 Running 2 1d
kube-system tiller-deploy-7ccf99cd64-j49t7 1/1 Running 0 2d
```
> (x5 over 18m) kubelet, ip-172-31-28-242.us-west-2.compute.internal Readiness probe failed: HTTP probe failed with statuscode: 500
That's the type of stuff (along with the client timeout and EOF errors) that I'm trying to hunt down. Thanks for the logs, that panic is very helpful to me!
That said, I'm not sure what's wrong with your installation... On the other clusters, I'm seeing that the health checks are flaky and turning them off lets you use service catalog. But it seems like there is more going on with your cluster.
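When the containers are crash looping like this, the logs from the previous run are often more telling than the current ones; something along these lines (pod names are placeholders):
```
kubectl -n catalog logs <apiserver-pod-name> -c apiserver --previous
kubectl -n catalog logs <controller-manager-pod-name> --previous
```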
I created the k8s cluster in AWS using kops, in a specific VPC and subnet, then installed service catalog using helm. I didn't do much beyond that.
Since I have two worker nodes, my catalog controller manager and catalog apiserver run on two different nodes. Would that be a problem?
Yes and no.
Yes, that's the trigger for the health checks being flaky; they don't seem to handle being on separate nodes well. That's why I don't see that problem on minikube but do on "real" clusters.
No, AFAIK that is not the cause of your problems with getting the apiserver to work. For me it's enough to turn off the health probes to stop k8s from flapping my pods, but that wasn't enough to fix your cluster.
@carolynvs Any update on the above?
If anyone is using service catalog in a cluster created using kops, can you please confirm whether you are able to ping from any pod on the master node to the catalog apiserver pod running on a worker node?
Note: my cloud provider is AWS.
@carolynvs Can you please mention someone you know who is using a k8s cluster created in AWS using kops with service catalog integrated with that cluster?
I've got a PR open to make the probes optional in the service catalog chart, https://github.com/kubernetes-incubator/service-catalog/pull/2121. Otherwise I haven't had luck figuring out the root cause yet.
> Can you please mention someone you know using k8s cluster created in aws using kops and service catalog integrated with that cluster?
I don't know anyone using aws, or kops.
Thanks @carolynvs. I will try once that PR is merged and ready. But as @jboyd01 said, my master is not able to ping the catalog apiserver, and when I do `kubectl describe apiservices v1beta1.servicecatalog.k8s.io` I see FailedDiscoveryCheck:
Namespace:
Labels: <none>
Annotations: <none>
API Version: apiregistration.k8s.io/v1beta1
Kind: APIService
Metadata:
Creation Timestamp: 2018-06-14T13:26:12Z
Resource Version: 42514
Self Link: /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.servicecatalog.k8s.io
UID: 81e6fc9e-6fd6-11e8-b153-02bc1253425
Spec:
Ca Bundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQVRBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURdfsgfdgWUVFERXdwemRtTXQKWTJGMExXTmhNQjRYRFRFNE1EWXhOREV6TWpZeE1sb1hEVEk0TURZeE1URXpNall4TWxvd0ZURVRNQkVHQTFVRQpBeE1LYzNaakxXTmhkQzFqWVRDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS1dpCjdIQ21KK0FDU0ZKQkF0U1dvZkNDY2lPVXpuT1NrMzNYcjl5dTRLeHhQanJRSC9DeHpBS2t6cks4NkNmZVVHaHMKY0NmWUVucVdpTEQ2bFk0RzNBVVdwd203bTh2dy9jNkR0Qzl4cG5DeGdIYVhpaUlLSmZUWXc1cWhZNVgyemZQSwpNOEY1WTdIdkpES0VjOVJsaVlveHN4NmFLTXQ4VS9Da05JR0M1RHZYbzVja2UrSVVmR0cyZ1UyZ2NhZU41dXd2CkZNeVNBMkxNOHNYUUlnVFJFMTlzeHZZQkNQandlcnRIdnd5K3g0K240SjJDM0RsZFZxSDJkUUF2Ky8zcFRwTzcKcHJlbC9QZlhpdEpwZTN6ZTErRWtTSjhuaU42YWMrVzkySzNpdHVmUlU5TUZMOHJCdE40MXl3N2o5eXNQbEQxNgoxWnFvT2hDWVdvR2tmN1kyRnRjQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUIwR0ExVWRKUVFXCk1CUUdDQ3NHQVFVRkJ3TUJCZ2dyQmdFRkJRY0RBakFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFBRU5vd2JTOHFyZU5weHJnQ0R2dHFsb0I5THlRVUo1L25FSlA0KzEzazk4NG83K1ZPNApFNFlJTmRuTzYrZmc0TjFUK1RLSC83bWE1YmxrRXJpMVVLZFV3TEd1b294b2FlQ0hxZEZJYTFDdEJHbUlzR05RCjNWelRQWmZySVZCVU1sVFN4aTB3QnlQeU5vYVQxVVZsT1RFclpvMkhkeUhFMDU1UC9iQXFJTDBvNG5ONEFQVGoKYzhpTERrK1Z6cHV2NFJxY0hOZlJaMmpmSDU2VE4wcWl0YWpHWUVsSDdMNTNlSzAxL0s1N08wNEVzUjI3L2FMTgoxdXBEMisvUEZ1RVhXREJzVGdVMUlSK1ZjblNrOWI5MFpJVWdHYkRaRkNLRFRmcHBOQWtUaFB2bE16QlRwZkRzCkZRZVFGQ3lKM0RyVTBjN25ySWdkU2s3UEFRcVIvQjgxZitsQQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
Group: servicecatalog.k8s.io
Group Priority Minimum: 10000
Service:
Name: catalog-catalog-apiserver
Namespace: catalog
Version: v1beta1
Version Priority: 20
Status:
Conditions:
Last Transition Time: 2018-06-14T13:26:12Z
Message: no response from https://100.67.17.174:443: Get https://100.67.17.174:443: dial tcp 100.67.17.174:443: getsockopt: connection refused
Reason: FailedDiscoveryCheck
Status: False
Type: Available
Events: <none>
To be clear, there's more going on with your cluster than the probe issues I am currently investigating.
I don't have great ideas on what is wrong however, so I'm hoping that Jay or some other people can help troubleshoot.
@nivedhan-murugavel After syncing with the other maintainers, it's clear from your most recent error messages that this is a problem with your cluster's configuration, and isn't something that we in SIG Service Catalog can help with further.
I am closing this for now and suggest that you follow up with Amazon support, or maybe the k8s Slack channels for Amazon or kops. This is beyond our ability to assist.
After you get your cluster set up and healthy, I'd suggest a complete reinstall of service catalog. If you are still seeing problems with service catalog after that, let us know.
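For the reinstall, something roughly like this (helm 2 syntax to match the chart version in this thread; it assumes the svc-cat chart repo is already configured and the same release/namespace names as before):
```
helm delete --purge catalog
kubectl delete namespace catalog
# clean up the stale registration if it was left behind
kubectl delete apiservice v1beta1.servicecatalog.k8s.io --ignore-not-found
helm install svc-cat/catalog --name catalog --namespace catalog
```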
Hey,
I've run into exactly this issue while deploying service catalog to GKE.
I followed the same troubleshooting steps listed above, and when I got to `kubectl describe apiservices v1beta1.servicecatalog.k8s.io` I noticed that there was also an error connecting to the APIServer:
Conditions:
Last Transition Time: 2018-08-24T20:52:28Z
Message: no response from https://10.132.0.17:8443: Get https://10.132.0.17:8443: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Reason: FailedDiscoveryCheck
Status: False
Type: Available
After troubleshooting further from another pod, I noticed a TLS error when querying the APIServer:
* Rebuilt URL to: https://10.132.0.17:8443/
* Trying 10.132.0.17...
* TCP_NODELAY set
* Connected to 10.132.0.17 (10.132.0.17) port 8443 (#0)
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, Server hello (2):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
To work around this for now, I've set the following values: `controllerManager.apiserverSkipVerify: true` and `useAggregator: false`.
Essentially the fix is to set `--service-catalog-insecure-skip-verify=true` on the Controller Manager; however, in the chart you also need to disable use of the Aggregated API for this to work?
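Expressed as install flags, the workaround above is roughly the following (assuming the same chart and release naming used earlier in this thread):
```
helm install svc-cat/catalog --name catalog --namespace catalog \
  --set controllerManager.apiserverSkipVerify=true \
  --set useAggregator=false
```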
@carolynvs Should the ability to disable TLS verification not be independent of whether the aggregator is in use? I'm happy to raise a PR to fix this, if you feel it makes sense.
Cheers, Tom
Is there any solution to this?
I'm having a very similar issue on a cluster deployed to a private VPC connected to an on-prem network. I am unable to install and get the errors mentioned above with both a kops and an EKS cluster on this network topology. No other pods have any traffic issues, and I was able to install on a standard EKS cluster not attached to this VPC. It appears the underlying network somehow matters to service catalog; I can't understand why that would matter, but it certainly does. Should I look for some underlying network setting? This seems like a problem with service catalog since all other pods work fine.
Which errors? How are you installing? What values are you setting in helm that differ from the defaults?
It's a default helm install. I was able to talk to someone on the Slack channel for sig-service-catalog, and I think it has something to do with the pod that holds the apiserver and etcd containers. The apiserver is not able to connect to etcd, but only with the private network topology mentioned. This seems strange since the communication is purely between the containers in one pod and does not go outside the cluster network.
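A quick way I could check the in-pod etcd from outside would be something like this sketch (pod selection and the port assume the default chart layout):
```
POD=$(kubectl -n catalog get pod -l app=catalog-catalog-apiserver \
  -o jsonpath='{.items[0].metadata.name}')
kubectl -n catalog port-forward "$POD" 2379:2379 &   # forward the in-pod etcd port
PF_PID=$!
sleep 2
curl -s http://localhost:2379/health   # a healthy etcd answers {"health": "true"}
kill "$PF_PID"
```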
root@master-1:~# kubectl logs -l app=catalog-catalog-controller-manager -n catalog
I0603 06:22:24.632755 1 round_trippers.go:444] Response Headers:
I0603 06:22:24.632758 1 round_trippers.go:447] Audit-Id: 590bb801-175c-4e17-9a44-cc8ab672f0af
I0603 06:22:24.632760 1 round_trippers.go:447] Content-Type: text/plain; charset=utf-8
I0603 06:22:24.632762 1 round_trippers.go:447] X-Content-Type-Options: nosniff
I0603 06:22:24.632764 1 round_trippers.go:447] Content-Length: 20
I0603 06:22:24.632766 1 round_trippers.go:447] Date: Mon, 03 Jun 2019 06:21:22 GMT
I0603 06:22:24.632777 1 request.go:942] Response Body: service unavailable
I0603 06:22:24.632833 1 request.go:1145] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }
E0603 06:22:24.632848 1 controller_manager.go:311] Failed to get supported resources from server: the server is currently unable to handle the request
F0603 06:22:24.632869 1 controller_manager.go:236] error running controllers: failed to get api versions from server: "timed out waiting for the condition", "the server is currently unable to handle the request"
My controller manager is not working and I am hitting this issue: https://github.com/kubernetes-incubator/service-catalog/issues/1867
I am not able to add the servicecatalog.k8s.io apiservice to my API list (`kubectl api-versions`), and because of that I am not able to register the service broker.