bgrant0607 opened this issue 8 years ago
I think it should default to localhost, but I agree the message could be way more user-friendly.
An example of what we do in OpenShift: https://github.com/openshift/origin/blob/master/pkg/cmd/util/clientcmd/factory.go#L112
It belongs in the factory (since not all client tools will want / need that error) rather than in a more generic spot like cmdutil.CheckErr
Ref kubernetes/kubernetes#23726
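As a rough standalone illustration of that approach (not the actual OpenShift or kubectl code; the helper name and the message wording are made up), the factory could wrap a refused connection to the implicit localhost:8080 default in a friendlier error:

    package main

    import (
        "errors"
        "fmt"
        "net"
        "os"
        "syscall"
    )

    // friendlyConnError is a hypothetical helper in the spirit of the OpenShift
    // factory check linked above: when the request failed because the connection
    // was refused, explain the likely cause (no kubeconfig, so the insecure
    // localhost default was used) instead of surfacing the raw dial error.
    func friendlyConnError(err error, host string) error {
        if errors.Is(err, syscall.ECONNREFUSED) {
            return fmt.Errorf(
                "the connection to the server %s was refused.\n"+
                    "No kubeconfig was found, so the insecure default %s was used.\n"+
                    "Set the KUBECONFIG environment variable or create ~/.kube/config to point at your cluster",
                host, host)
        }
        return err
    }

    func main() {
        // Simulate the failure mode: nothing is listening on localhost:8080.
        if _, err := net.Dial("tcp", "localhost:8080"); err != nil {
            fmt.Fprintln(os.Stderr, friendlyConnError(err, "localhost:8080"))
            os.Exit(1)
        }
    }

Keeping this check in the factory, as suggested above, means only tools that actually use the localhost fallback would ever emit this hint.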
I like the OpenShift error messaging more than what we have now. Does it make sense to point the user towards ways of populating the config for turnkey solutions? Perhaps as an entry under kubectl config --help?
kubectl config would be a good place (until Brian gets someone to move it) - I expect users to look there first.
Hi, I'm new here and would like to work on this!
Would the following be a good way to tackle this issue?
Suggest that the user run kubectl config --help when the error occurs.
Add a few examples to kubectl config --help. Something like: https://github.com/kubernetes/kubernetes/blob/master/hack/local-up-cluster.sh#L743. However, examples are already present for the subcommands (e.g. kubectl config set-cluster --help), so would it make sense to do this?
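For what it's worth, here is a rough sketch of how examples are attached to a command in cobra, the CLI framework kubectl is built on; the command wiring and the example text are illustrative guesses, not the real kubectl config implementation:

    package main

    import (
        "os"

        "github.com/spf13/cobra"
    )

    // newConfigCmd is an illustrative stand-in for the real kubectl config
    // command; the Example field is what shows up under "Examples:" in the
    // --help output.
    func newConfigCmd() *cobra.Command {
        return &cobra.Command{
            Use:   "config SUBCOMMAND",
            Short: "Modify kubeconfig files",
            Example: `  # Point kubectl at a cluster reachable at https://1.2.3.4
      kubectl config set-cluster local --server=https://1.2.3.4
      kubectl config set-context local --cluster=local
      kubectl config use-context local`,
            RunE: func(cmd *cobra.Command, args []string) error {
                // Just print the help (including the examples above).
                return cmd.Help()
            },
        }
    }

    func main() {
        if err := newConfigCmd().Execute(); err != nil {
            os.Exit(1)
        }
    }

Adding examples at the parent kubectl config level would be a one-field change like this, separate from the per-subcommand examples that already exist.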
@nikinath Thanks. Please email kubernetes-sig-cli@googlegroups.com.
cc @kubernetes/sig-cli-feature-requests
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with a /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale
/remove-lifecycle rotten
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
This issue is now hitting downstream projects (e.g. Istio). See this Istio discuss thread. The user didn't have a ~/.kube/config and got this inscrutable message:
Failed to wait for resources ready: Get http://localhost:8080/api/v1/namespaces/istio-system: dial tcp [::1]:8080: connect: connection refused
Fortunately the user noticed the localhost:8080 in there and was able to proceed.
I have the perception that localhost connections are very likely to succeed the first time they are queried (as opposed to off-host connections), so a refused connection to localhost usually means the user actually intended to reach a different cluster.
I'd propose that when kubectl (or other tools, perhaps?) specifically detects that it has fallen back to the implicit localhost:8080 default and the connection fails, it should output an error or warning saying something like:
Unable to connect to Kubernetes at localhost:8080. It's likely that you are trying to reach a different cluster; please configure it in a ~/.kube/config file.
Maybe it should just always print a warning when there's no .kube/config file:
Warning: You have no ~/.kube/config; assuming a local cluster in insecure mode (localhost:8080). Please make a ~/.kube/config file to ensure you're talking to the cluster you intend to talk to.
Personally I think it'd be better to refuse to connect without an explicit destination, but almost certainly that'd be "fixing" a load-bearing bug.
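A minimal standard-library sketch of the "warn when there's no ~/.kube/config" idea above; the function name, the exact wording, and the KUBECONFIG/home-directory checks are assumptions about how such a pre-flight warning could look, not existing kubectl behaviour:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // warnIfNoKubeconfig is a hypothetical pre-flight check: if neither
    // KUBECONFIG nor ~/.kube/config is present, tell the user that the tool
    // is about to fall back to the insecure localhost:8080 default.
    func warnIfNoKubeconfig() {
        if os.Getenv("KUBECONFIG") != "" {
            return
        }
        home, err := os.UserHomeDir()
        if err != nil {
            return
        }
        path := filepath.Join(home, ".kube", "config")
        if _, err := os.Stat(path); os.IsNotExist(err) {
            fmt.Fprintf(os.Stderr,
                "Warning: no kubeconfig found at %s and KUBECONFIG is unset; "+
                    "assuming a local cluster in insecure mode (localhost:8080).\n"+
                    "Create %s to make sure you're talking to the cluster you intend to.\n",
                path, path)
        }
    }

    func main() {
        warnIfNoKubeconfig()
        // ... the normal command would proceed (or refuse to connect) here.
    }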
Did anybody find a solution? I'm running on Fedora. Could somebody provide a sample ~/.kube/config file, please?
This is a kubectl issue.
/transfer kubectl
Could we deprecate not having a kubeconfig but still retain the (deprecated) existing behavior?
If we do that, we could output this for a failed connection:
Warning: no Kubernetes client configuration found.
The kubectl tool was not able to find a configuration for connecting to your Kubernetes cluster. Previous versions of Kubernetes supported trying to connect without a Kubernetes configuration. The tool can still use defaults to connect, but this is deprecated since Kubernetes 1.42 due to concerns about security.
Additionally, when kubectl tried to use default values to connect, the connection to localhost on port 8080 timed out after 10 seconds.
To learn how to configure kubectl, visit https://k8s.io/whatever
or, if the connection worked OK, something like:
Warning: no Kubernetes client configuration found. Using defaults.
Warning: insecure connection to Kubernetes control plane (no TLS).
NAME READY STATUS RESTARTS AGE LABELS
nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
nginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
(the Warning lines would go to stderr; the table output to stdout)
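To make the stderr/stdout split concrete, here's a tiny illustrative sketch (placeholder messages and table): because the warnings go to stderr, pipelines like kubectl get pods | grep nginx would still see only the table.

    package main

    import (
        "fmt"
        "os"
        "text/tabwriter"
    )

    func main() {
        // Warnings go to stderr so that anything reading stdout
        // (grep, jq, shell pipelines) never sees them.
        fmt.Fprintln(os.Stderr, "Warning: no Kubernetes client configuration found. Using defaults.")
        fmt.Fprintln(os.Stderr, "Warning: insecure connection to Kubernetes control plane (no TLS).")

        // The result table itself goes to stdout.
        w := tabwriter.NewWriter(os.Stdout, 0, 4, 2, ' ', 0)
        fmt.Fprintln(w, "NAME\tREADY\tSTATUS\tRESTARTS\tAGE")
        fmt.Fprintln(w, "nginx-deployment-75675f5897-7ci7o\t1/1\tRunning\t0\t18s")
        w.Flush()
    }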
I don't remember which parts of this issue were in the client library vs kubectl, but it sounds reasonable to me.
BTW, we need a bot to notify the current people involved in the relevant SIG or something. Almost certainly these old issues don't have the right/best people subscribed to them.
I don't know about a bot, but I already mentioned it in Slack.
It's probably a good time to pull the trigger on adding a better warning here.
/triage accepted
/assign @ShivamTyagi12345
Could we deprecate not having a kubeconfig but still retain the (deprecated) existing behavior?
If we do that, we could output this for a failed connection:
Warning: no Kubernetes client configuration found.
The kubectl tool was not able to find a configuration for connecting to your Kubernetes cluster. Previous versions of Kubernetes supported trying to connect without a Kubernetes configuration. The tool can still use defaults to connect, but this is deprecated since Kubernetes 1.42 due to concerns about security.
Additionally, when kubectl tried to use default values to connect, the connection to localhost on port 8080 timed out after 10 seconds.
To learn how to configure kubectl, visit https://k8s.io/whatever
or, if the connection worked OK, something like:
Warning: no Kubernetes client configuration found. Using defaults.
Warning: insecure connection to Kubernetes control plane (no TLS).
NAME READY STATUS RESTARTS AGE LABELS
nginx-deployment-75675f5897-7ci7o 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
nginx-deployment-75675f5897-kzszj 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
nginx-deployment-75675f5897-qqcnn 1/1 Running 0 18s app=nginx,pod-template-hash=3123191453
(the Warning lines would go to stderr; the table output to stdout)
I will go ahead and add this warning message, as I feel it has consensus. @sftim, can you please let me know which file would need to change in order to display this warning? Thanks.
Sorry - I'm not actually someone who codes in Go very often.
This issue is labeled with priority/important-soon but has not been updated in over 90 days, and should be re-triaged.
Important-soon issues must be staffed and worked on either currently, or very soon, ideally in time for the next release.
You can:
/triage accepted (org members only)
/priority important-longterm or /priority backlog
/close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
At minimum, we need to improve the error message:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
We might also want to consider not using localhost as the default, though we want to keep it aligned with our single-node dev cluster plans. kubernetes/kubernetes#24106
cc @kubernetes/kubectl @dlorenc @vishh