kidiyoor opened 6 years ago
Yes - getting this same issue.
@kidiyoor did you manage to get around this issue?
This almost looks like it can't load the kubeconfig. I'm surprised there aren't more people with this issue.
I'm getting the same issue; however, it seems to me that kubewatch is reading the kubeconfig correctly:
$ sudo sysdig proc.name=kubewatch
...
...
170545 15:15:03.776075489 2 kubewatch (32067) < stat res=0 path=/home/snebel/.kube/config
170546 15:15:03.776077937 2 kubewatch (32067) > openat
170547 15:15:03.776082957 2 kubewatch (32067) < openat fd=5(<f>/home/snebel/.kube/config) dirfd=-100(AT_FDCWD) name=/home/snebel/.kube/config flags=4097(O_RDONLY|O_CLOEXEC) mode=0
170548 15:15:03.776085231 2 kubewatch (32067) > epoll_ctl
170549 15:15:03.776085691 2 kubewatch (32067) < epoll_ctl
170550 15:15:03.776086377 2 kubewatch (32067) > epoll_ctl
170551 15:15:03.776086677 2 kubewatch (32067) < epoll_ctl
170552 15:15:03.776089973 2 kubewatch (32067) > fstat fd=5(<f>/home/snebel/.kube/config)
170553 15:15:03.776090740 2 kubewatch (32067) < fstat res=0
170554 15:15:03.776096612 2 kubewatch (32067) > read fd=5(<f>/home/snebel/.kube/config) size=60580
170555 15:15:03.776121373 2 kubewatch (32067) < read res=60068 data=apiVersion: v1.clusters:.- cluster:. certificate-authority-data: LS0tLS1CRUdJ
170556 15:15:03.776123490 2 kubewatch (32067) > read fd=5(<f>/home/snebel/.kube/config) size=512
170557 15:15:03.776124238 2 kubewatch (32067) < read res=0 data=
170558 15:15:03.776127353 2 kubewatch (32067) > close fd=5(<f>/home/snebel/.kube/config)
170559 15:15:03.776128478 2 kubewatch (32067) < close res=0
...
...
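For what it's worth, here's a minimal standalone sketch (my own, not part of kubewatch) that only parses the kubeconfig with client-go, to rule out parse errors as the cause; it assumes k8s.io/client-go is available and that the config lives at $HOME/.kube/config:

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := filepath.Join(os.Getenv("HOME"), ".kube", "config")

	// LoadFromFile only reads and parses the file; it never touches
	// auth plugins, so it can succeed even when building a client
	// later fails on the "gcp" provider.
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintf(os.Stderr, "kubeconfig parse failed: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("parsed OK, current-context: %s\n", cfg.CurrentContext)
}

If this parses fine, the file itself is OK and the failure happens later, at client construction, which is consistent with the trace above.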
I've tried it on two Kubernetes clusters with very different versions (1.6.6 vs 1.9.7), and the panic only shows up on the 1.9.7 one. It feels as if some of the vendored dependencies (apimachinery?) need to be upgraded for newer API versions.
This Kubernetes version panics:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.1", GitCommit:"1dc5c66f5dd61da08412a74221ecc79208c2165b", GitTreeState:"clean", BuildDate:"2017-07-14T02:00:46Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.7-gke.6", GitCommit:"9b635efce81582e1da13b35a7aa539c0ccb32987", GitTreeState:"clean", BuildDate:"2018-08-16T21:33:47Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}
This Kubernetes version doesn't panic:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.1", GitCommit:"1dc5c66f5dd61da08412a74221ecc79208c2165b", GitTreeState:"clean", BuildDate:"2017-07-14T02:00:46Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.6", GitCommit:"7fa1c1756d8bc963f1a389f4a6937dc71f08ada2", GitTreeState:"clean", BuildDate:"2017-06-16T18:21:54Z", GoVersion:"go1.7.6", Compiler:"gc", Platform:"linux/amd64"}
After some digging, I found that the panic was caused by https://github.com/bitnami-labs/kubewatch/blob/5772afdd620a4b2bfe26f71e2dfb6d6a96b076cb/pkg/utils/k8sutil.go#L47 returning an unhandled error:
panic: No Auth Provider found for name "gcp"
This was ultimately related to client-go's auth plugins not being explicitly imported, so they are never registered; kubewatch then fails when it tries to build the client through https://github.com/bitnami-labs/kubewatch/blob/e8eec939953748c415a40e0dee4c5123eb98679a/pkg/utils/k8sutil.go#L32 against a managed cloud vendor cluster context such as GCP. I've provided a proof-of-concept fix that imports all available auth plugins in https://github.com/bitnami-labs/kubewatch/pull/140
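For reference, the fix direction looks roughly like this (a sketch of my own that mirrors the PR's idea rather than its exact diff; newClient is a hypothetical helper standing in for the code in k8sutil.go):

package k8sutil

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"

	// The blank import runs each plugin's init(), registering its auth
	// provider ("gcp", "oidc", "azure", ...) with client-go. Without it,
	// client-go cannot resolve the provider named in the kubeconfig and
	// fails with: No Auth Provider found for name "gcp".
	_ "k8s.io/client-go/plugin/pkg/client/auth"
)

// newClient builds a clientset from a kubeconfig path and propagates
// errors to the caller instead of leaving them unhandled.
func newClient(kubeconfigPath string) (kubernetes.Interface, error) {
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, fmt.Errorf("building rest config: %v", err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return nil, fmt.Errorf("creating clientset: %v", err)
	}
	return clientset, nil
}

With the blank import in place the "gcp" provider resolves for GKE contexts, which is what the PoC PR does for all the available plugins.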
Anybody seen this error before?
~/.kube/config is configured correctly.