Open · toolbar23 opened this issue 5 years ago
Tried 0.12.0-rc3 today with similar results.
The first execution gave:
```
pm$ ./argocd cluster add rz01
INFO[0000] ServiceAccount "argocd-manager" already exists
INFO[0000] ClusterRole "argocd-manager-role" updated
INFO[0000] ClusterRoleBinding "argocd-manager-role-binding" already exists
FATA[0020] Failed to establish connection to 127.0.0.1:8080: context deadline exceeded
```
and then always:
```
pm$ ./argocd cluster add rz01
INFO[0000] ServiceAccount "argocd-manager" already exists
INFO[0000] ClusterRole "argocd-manager-role" updated
INFO[0000] ClusterRoleBinding "argocd-manager-role-binding" already exists
FATA[0000] rpc error: code = Unknown desc = REST config invalid: the server has asked for the client to provide credentials
```
It appears that token auth to rancher clusters may be a bit special/different. I believe you may be hitting the following issue: https://github.com/rancher/rancher/issues/14997
Can you read through that issue and see if the discussion applies to your situation?
I'm also currently working through the same problem, with adding non-Rancher clusters: `FATA[0000] rpc error: code = Unknown desc = REST config invalid: no such host`. I've had success adding clusters in the past; perhaps it is breaking because I am now using microk8s?
My problem was not with Argo, but with CoreDNS. I had to edit the ConfigMap to add `forward` statements for an internal corporate domain.
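For reference, a minimal sketch of what such a CoreDNS ConfigMap edit might look like. The zone `corp.example.com` and the upstream `10.0.0.53` are illustrative placeholders, not values from this thread:

```yaml
# Hypothetical excerpt of the kube-system/coredns ConfigMap.
# corp.example.com and 10.0.0.53 are placeholders for an internal
# corporate zone and its DNS server.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
    }
    # Additional server block forwarding the internal corporate zone
    corp.example.com:53 {
        forward . 10.0.0.53
        cache 30
    }
```

Applied with `kubectl -n kube-system edit configmap coredns` (or `kubectl apply`); the CoreDNS pods pick up the change after a reload.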
Any updates on this one? Facing the same issue.
So, reading through the referenced issue:

> ... but containing a token Rancher has no knowledge of ... removes a layer of protection and introduces direct exposure of all clusters to arbitrary requests from anyone that can reach the server container ...
We have exactly this problem: we don't want to proxy the requests, so it would be nice if one could provide the account that is used to connect to Kubernetes.
@jessesuen do you think argocd could be changed so that it can interact with Rancher as well? I had a look at the code, but Go is new to me. This issue is currently blocking me from further evaluating argocd for our environment :cry:
Any updates on this issue? We are facing the same issue.
What could really help this along is someone with access to, and experience with, Rancher reviewing the issues @jessesuen mentioned and investigating.
I am pretty sure this is resolved if one chooses to use the Authorized Cluster endpoint and a generated API key.
I am planning to test ArgoCD in the coming days.
I just verified that ArgoCD works correctly when using an Authorized Cluster Endpoint with a Cluster Scoped API key.
@LinAnt May I know the detailed procedure for using an authorized cluster endpoint with an API key?
Confirmed that the authorized cluster endpoint can be one of the alternative solutions. However, it would be good if Argo CD could directly support Rancher clusters.
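For anyone looking for the shape of that setup, a hypothetical kubeconfig pointing at a Rancher authorized cluster endpoint might look roughly like this. The server URL, CA data, and token below are all placeholders (Rancher generates the real values when the authorized endpoint is enabled):

```yaml
# Illustrative kubeconfig for a Rancher authorized cluster endpoint.
# All concrete values are placeholders, not taken from this thread.
apiVersion: v1
kind: Config
clusters:
- name: rz01
  cluster:
    # Direct (non-proxied) endpoint of the downstream cluster
    server: https://rz01.example.com:6443
    certificate-authority-data: <base64-encoded-CA>
users:
- name: rz01-user
  user:
    # Cluster-scoped Rancher API token
    token: kubeconfig-user-abc123:tokenvalue
contexts:
- name: rz01
  context:
    cluster: rz01
    user: rz01-user
current-context: rz01
```

With such a context active, `argocd cluster add rz01` talks to the cluster directly instead of going through the Rancher proxy.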
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Still interested in a fix for this
Interested in a real fix, not this strange alternative that bypasses Rancher to access the cluster directly.
```
INFO[0000] ClusterRole "argocd-manager-role" updated
INFO[0000] ClusterRoleBinding "argocd-manager-role-binding" already exists
FATA[0001] rpc error: code = Unknown desc = REST config invalid: the server has asked for the client to provide credentials
```
What do you mean by a real fix? Authorized cluster endpoints are literally the way Rancher recommends handling this. I am not following.
A real fix, to me, is being able to just run `argocd cluster add`.
Maybe this documentation section could help Rancher users move ahead without needing to dig around for a solution, but native Rancher support, with just one command to add the cluster, is what I call a real fix.
There is a workaround without having to enable authorized endpoints. You can use this script:
https://gist.githubusercontent.com/superseb/f6cd637a7ad556124132ca39961789a4/raw/a833ce5548eded9b110f1b5d4dc1896562338975/get_kubeconfig_custom_cluster_rancher2.sh
Get the local admin kubeconfig for the Rancher-managed custom cluster, then use the ArgoCD CLI to add the cluster using that kubeconfig. You will likely need to edit the cluster secret created afterwards though, as it creates the cluster with the name "local". You just need to `echo -n name_you_want | base64` and replace the name value in the secret.
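A minimal sketch of that rename step. The secret name `cluster-local-1234` and the `argocd` namespace in the commented command are assumptions for illustration:

```shell
# Encode the desired cluster name; -n avoids embedding a trailing
# newline, which would corrupt the value stored in the secret.
NEW_NAME="rz01"
ENCODED=$(echo -n "$NEW_NAME" | base64)
echo "$ENCODED"   # cnowMQ==

# Hypothetical patch of the ArgoCD cluster secret (secret name and
# namespace are illustrative placeholders):
# kubectl -n argocd patch secret cluster-local-1234 \
#   --type merge -p "{\"data\":{\"name\":\"$ENCODED\"}}"
```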
Anyone still looking for a clean solution that doesn't involve doing stuff with the local admin account (no pun intended 😄 ): https://gist.github.com/janeczku/b16154194f7f03f772645303af8e9f80
To summarize:

- `argocd cluster add` creates a service account (plus token) in the target cluster and uses that as a bearer token credential while constructing the kubeconfig used to interact with the target.
- For `argocd cluster add` to work, the kubeconfig context must specify the downstream API endpoint in the `server` option.
- Where that is not the case, the `argocd` CLI cannot be used, but the user can create the required secret resource directly using `kubectl`.

Suggestion: it would be great if `argocd cluster add` supported specifying a bearer token that was created out of band, e.g. a service account token or Rancher API token.
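The `kubectl` route mentioned above corresponds to Argo CD's declarative cluster setup: a Secret labeled `argocd.argoproj.io/secret-type: cluster`. A rough sketch, with all concrete values (cluster name, server URL, token, CA) as placeholders:

```yaml
# Illustrative declarative cluster secret for Argo CD.
# Name, server URL, token, and CA data are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: rz01-cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: rz01
  server: https://rz01.example.com:6443
  config: |
    {
      "bearerToken": "<out-of-band token, e.g. a service account or Rancher API token>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64-encoded CA certificate>"
      }
    }
```

Once this secret exists in the `argocd` namespace, Argo CD picks the cluster up without the CLI ever being involved.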
Also per https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/rke-clusters/options/#authorized-cluster-endpoint "The authorized cluster endpoint only works on Rancher-launched Kubernetes clusters. In other words, it only works in clusters where Rancher used RKE to provision the cluster. It is not available for clusters in a hosted Kubernetes provider, such as Amazon’s EKS." So that doesn't work with hosted clusters.
Once Rancher Provisioner issue https://github.com/rancher/rancher/issues/38053 is resolved, an operator such as https://github.com/dntosas/capi2argo-cluster-operator should be able to fully automate this manual operation.
can we close this issue now?
We want to use ArgoCD with two clusters created using Rancher RKE (running on our own hardware).
ArgoCD is running on Cluster "int01". Deployments within the same cluster work fine.
When we try to add a second cluster "rz01" via the CLI we receive an error:
In our Rancher log files we see
And the argocd-server has this information regarding the failed request