AliyunContainerService / gpushare-device-plugin

GPU Sharing Device Plugin for Kubernetes Cluster
Apache License 2.0

Failed due to invalid configuration: no server found for cluster "local" #7

Open ZhengRongTan opened 5 years ago

ZhengRongTan commented 5 years ago

```
ERROR: logging before flag.Parse: F0311 14:21:02.271695  238971 podinfo.go:40] Failed due to invalid configuration: no server found for cluster "local"
goroutine 1 [running, locked to thread]:
github.com/AliyunContainerService/gpushare-device-plugin/vendor/github.com/golang/glog.stacks(0xc42000e000, 0xc420346000, 0x76, 0xc8)
	/go/src/github.com/AliyunContainerService/gpushare-device-plugin/vendor/github.com/golang/glog/glog.go:769 +0xcf
github.com/AliyunContainerService/gpushare-device-plugin/vendor/github.com/golang/glog.(*loggingT).output(0x1825a40, 0xc400000003, 0xc420118790, 0x17b61a5, 0xa, 0x28, 0x0)
	/go/src/github.com/AliyunContainerService/gpushare-device-plugin/vendor/github.com/golang/glog/glog.go:720 +0x32d
github.com/AliyunContainerService/gpushare-device-plugin/vendor/github.com/golang/glog.(*loggingT).printf(0x1825a40, 0xc400000003, 0x104ecdf, 0x10, 0xc4200dfee8, 0x1, 0x1)
	/go/src/github.com/AliyunContainerService/gpushare-device-plugin/vendor/github.com/golang/glog/glog.go:655 +0x14b
github.com/AliyunContainerService/gpushare-device-plugin/vendor/github.com/golang/glog.Fatalf(0x104ecdf, 0x10, 0xc4200dfee8, 0x1, 0x1)
	/go/src/github.com/AliyunContainerService/gpushare-device-plugin/vendor/github.com/golang/glog/glog.go:1148 +0x67
main.kubeInit()
	/go/src/github.com/AliyunContainerService/gpushare-device-plugin/cmd/inspect/podinfo.go:40 +0x1ec
main.init.0()
	/go/src/github.com/AliyunContainerService/gpushare-device-plugin/cmd/inspect/main.go:26 +0x20
```

ZhengRongTan commented 5 years ago

Executing the command `kubectl inspect gpushare` produces the error log above.
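
For context, the fatal is raised from `main.kubeInit()` in `cmd/inspect/podinfo.go`, and the text `no server found for cluster "local"` is the message client-go's `clientcmd` validation produces when the kubeconfig's current context references a cluster entry that has no `server` URL. Below is a minimal sketch of that kind of initialization, assuming the inspect tool loads the default kubeconfig through `clientcmd`; the helper is illustrative, not the plugin's actual code:

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClientset loads the kubeconfig the same way kubectl does
// (the KUBECONFIG env var, falling back to ~/.kube/config) and builds
// a clientset from it. clientcmd validates the resulting config; a
// current context pointing at a cluster entry without a "server" field
// fails here with
// "invalid configuration: no server found for cluster <name>".
func newClientset() (*kubernetes.Clientset, error) {
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		rules, &clientcmd.ConfigOverrides{}).ClientConfig()
	if err != nil {
		return nil, fmt.Errorf("load kubeconfig: %v", err)
	}
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := newClientset(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kubeconfig loaded and clientset created")
}
```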

cheyang commented 5 years ago

Looks like the issue is `Failed due to invalid configuration: no server found for cluster "local"`. Please check your kubeconfig. How about running `kubectl get nodes`?

ZhengRongTan commented 5 years ago

```
[root@CNSZ22PL0374 .kube]# kubectl get nodes
NAME           STATUS    ROLES     AGE    VERSION
cnsz22pl0475   Ready               194d   v1.11.0
slave1         Ready               209d   v1.11.0
slave2         Ready               245d   v1.11.0
slave3         Ready               171d   v1.11.0
slave4         Ready               244d   v1.11.0
slave5         Ready               244d   v1.11.0
slave6         Ready               108d   v1.11.0
slave7         Ready               243d   v1.11.0
slave8         Ready               238d   v1.11.0
```

running "kubectl get nodes" , get the above messages .

ZhengRongTan commented 5 years ago

The kubeconfig content is as follows:

```yaml
apiVersion: v1
kind: Config
users:
```