Closed s-tokutake closed 4 years ago
Hi, cluster add takes the cluster API information from your K8s client configuration. Most likely, this is set to localhost. The error message you see comes from the argocd-server pod, which obviously cannot connect to any K8s API at 127.0.0.1:32768.
To solve this, modify the context in your ~/.kube/config to point to an IP reachable from within your Docker Desktop K8s (and possibly reconfigure your K3s or minikube API server to listen not only on localhost).
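As a sketch of that kubeconfig edit (the file path, cluster name, and both IPs below are placeholders, not values from this thread): rewrite the localhost server entry to an address reachable from the argocd-server pod.

```shell
# Work on a minimal demo kubeconfig; in practice you would edit
# ~/.kube/config. All names and addresses here are assumptions.
cat > /tmp/demo-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://127.0.0.1:32768
  name: docker-desktop
EOF

# Point the cluster entry at an address reachable from inside the cluster
# (192.168.65.3:6443 is a placeholder) instead of localhost.
sed 's|https://127.0.0.1:32768|https://192.168.65.3:6443|' \
    /tmp/demo-kubeconfig > /tmp/demo-kubeconfig.fixed

grep 'server:' /tmp/demo-kubeconfig.fixed
```

After editing the real ~/.kube/config this way, verify that kubectl still works against the new address before retrying argocd cluster add.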
@jannfis thanks for your advice. I'll try it.
I wonder if that statement also holds true for the /version call, as the log shows that some kubectl API calls could reach the cluster.
I have a similar problem where I'm behind a proxy. I do configure the proxy-url inside the ~/.kube/config context that is used for argocd cluster add, but the version call seems to ignore the proxy config of the ~/.kube/config context.
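For reference, a cluster entry carrying proxy-url looks roughly like this (the addresses and names are assumptions); argocd cluster add reads this entry from the kubeconfig, but per the reports here the subsequent /version call may not honor the proxy setting:

```yaml
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://10.0.0.5:6443
    # route API traffic through the bastion proxy
    proxy-url: http://bastion.example.com:3128
  name: private-cluster
```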
I have private clusters accessible via IAP through a bastion proxy, so I also specify proxy-url and am running into this error.
I have tried the above workarounds, but none of them work for me.
Executing on local cluster
argocd cluster add kind-dev-cluster --insecure
Error
WARNING: This will create a service account `argocd-manager` on the cluster referenced by context `kind-dev-cluster` with full cluster level admin privileges. Do you want to continue [y/N]? y
INFO[0001] ServiceAccount "argocd-manager" already exists in namespace "kube-system"
INFO[0001] ClusterRole "argocd-manager-role" updated
INFO[0001] ClusterRoleBinding "argocd-manager-role-binding" updated
FATA[0002] rpc error: code = Unknown desc = Get "https://127.0.0.1:62826/version?timeout=32s": dial tcp 127.0.0.1:62826: connect: connection refused
We are able to connect to https://127.0.0.1:62826/version?timeout=32s, as this endpoint is not authenticated. Here is the argocd-server error:
time="2022-01-13T08:07:10Z" level=info msg="received unary call /version.VersionService/Version" grpc.method=Version grpc.request.claims="{\"exp\":1642146356,\"iat\":1642059956,\"iss\":\"argocd\",\"jti\":\"92238083-20f4-4bda-81da-b9061edadbd8\",\"nbf\":1642059956,\"sub\":\"admin\"}" grpc.request.content= grpc.service=version.VersionService grpc.start_time="2022-01-13T08:07:10Z" span.kind=server system=grpc
time="2022-01-13T08:07:10Z" level=info msg="finished unary call with code OK" grpc.code=OK grpc.method=Version grpc.service=version.VersionService grpc.start_time="2022-01-13T08:07:10Z" grpc.time_ms=5.335 span.kind=server system=grpc
time="2022-01-13T08:07:10Z" level=error msg="finished unary call with code Unknown" error="Get \"https://127.0.0.1:62826/version?timeout=32s\": dial tcp 127.0.0.1:62826: connect: connection refused" grpc.code=Unknown grpc.method=Create grpc.service=cluster.ClusterService grpc.start_time="2022-01-13T08:07:10Z" grpc.time_ms=2.614 span.kind=server system=grpc
However, the API server requires authentication and gives this error for unauthenticated connections:
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {
},
"code": 403
}
@jannfis / @jessesuen, I am using a kind cluster. This issue is not resolved yet.
Hello @sumitnagal, I would like to know whether you fixed this problem, since I have exactly the same one on my Mac.
Same for me using minikube on WSL2. Any ideas, please?
Same for me with k3d on Mac. Any ideas, please?
Workaround for kind; it might work for other single-node k8s solutions:
kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 172.19.0.2:6443 38m
Find the entry belonging to the cluster in your ~/.kube/config, and change the server entry:
- cluster:
certificate-authority-data: ...
server: https://172.19.0.2:6443
name: yourcontext
Verify that kubectl get pods is still working, then try argocd cluster add.
I was about to try the same but saw your comment first. Worked like a charm. Thanks, @norbertkeri
You can also use the --in-cluster
flag
❯ argocd cluster add rancher-desktop --label environment=dev --insecure -y
INFO[0000] ServiceAccount "argocd-manager" already exists in namespace "kube-system"
INFO[0000] ClusterRole "argocd-manager-role" updated
INFO[0000] ClusterRoleBinding "argocd-manager-role-binding" updated
FATA[0000] rpc error: code = Unknown desc = Get "https://127.0.0.1:6443/version?timeout=32s": dial tcp 127.0.0.1:6443: connect: connection refused
❯ argocd cluster add rancher-desktop --label environment=dev --insecure --in-cluster -y --upsert
INFO[0000] ServiceAccount "argocd-manager" already exists in namespace "kube-system"
INFO[0000] ClusterRole "argocd-manager-role" updated
INFO[0000] ClusterRoleBinding "argocd-manager-role-binding" updated
Cluster 'https://kubernetes.default.svc' added
EDIT: the above doesn't work when it comes to deploying but allows the cluster to be added :)
Kind does have the kind get kubeconfig --internal --name <cluster name>
command.
For anyone using minikube on Linux: I was able to make it work using kvm2 and forcing the VMs to be on the same network.
minikube start -p argocd --network default --driver=kvm2
minikube start -p target --network default --driver=kvm2
This works great with minikube. Thanks!
After spending many hours trying to fix this with my local kind clusters, it turned out that using a private IP in the cluster's apiServerAddress fixes the problem, as it makes the kube-apiserver accessible to other local clusters. The private IP is usually at the bottom of ifconfig | grep inet.
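A sketch of that kind fix as a cluster config (the IP is an assumption; substitute the private IP you found via ifconfig | grep inet):

```yaml
# kind cluster configuration: bind the API server to a host-private IP
# instead of 127.0.0.1 so other local clusters can reach it.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: "192.168.1.10"
```

Create the cluster with kind create cluster --config kind-config.yaml; kubectl then targets the private IP rather than localhost.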
I experienced these symptoms due to firewall restrictions that prevented the ArgoCD service from accessing the new K8s cluster's API endpoint. Maybe the ArgoCD CLI should be more explicit about what exactly failed (e.g. "Attempt to reach the target cluster from ArgoCD failed").
for those who are experiencing the issue while running kind clusters on mac, I have created this repository: https://github.com/akram/docker-argo-oc-kind
It builds an image with docker, argo and kubectl, and it documents how to change the ~/.kube/config to make clusters addable to argo
Also worked with kind cluster
Thanks, it works successfully with kind! Can you explain the command arguments to me?
Checklist:
argocd version

Describe the bug
argocd cluster add fails.

To Reproduce
kubectl port-forward svc/argocd-server -n argocd 8080:443
argocd cluster add minikube --insecure
Logs are below.

Expected behavior
Add cluster succeeds.

Version