Closed. vk496 closed this issue 5 years ago.
Yes, my bad, the documentation needs to be updated. The cluster name in the service account import shouldn't be $CLUSTER2 (the kubeconfig context name) but "cluster2" in your case (the kubeconfig cluster name). kubemcsa bootstrap used to use kubeconfig context names as cluster names; since v0.5.0, it uses kubeconfig cluster names, as you can see in the log of kubemcsa bootstrap. I'll update the doc ASAP.
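Concretely, the service account import should name the kubeconfig cluster rather than the context. A sketch using the names from this thread (with the renamed context "my-cluster-cluster2", the kubeconfig cluster name is still "cluster2"):

```yaml
apiVersion: multicluster.admiralty.io/v1alpha1
kind: ServiceAccountImport
metadata:
  name: cluster2-default-pod-lister
spec:
  clusterName: cluster2   # kubeconfig cluster name, not the context name "my-cluster-cluster2"
  namespace: default
  name: pod-lister
```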
Hello,
I've tried again with exactly the same result. Here are the full commands I executed (for easy copy-paste):
```shell
# Env variables
export CLUSTER1=cluster1
export CLUSTER2=cluster2
export CONTEXT1=my-cluster-$CLUSTER1
export CONTEXT2=my-cluster-$CLUSTER2

# Create clusters
kind create cluster --name $CLUSTER1
kind create cluster --name $CLUSTER2

# Fix context names
for c in $(kind get clusters); do
  dir_k8s=$(kind get kubeconfig-path --name="$c")
  file=$(basename $dir_k8s)
  prefix=$(docker run --rm -v ~/.kube:/workdir -i mikefarah/yq yq r $file 'contexts[0].context.user')
  docker run --rm -v ~/.kube:/workdir -i mikefarah/yq yq w -i $file 'contexts[0].context.user' "$prefix-$c"
  docker run --rm -v ~/.kube:/workdir -i mikefarah/yq yq w -i $file 'users[0].name' "$prefix-$c"
  sed -i "s/kubernetes-admin@/my-cluster-/g" $dir_k8s
done
export KUBECONFIG="$(for c in $(kind get clusters); do echo -n $(kind get kubeconfig-path --name="$c"):; done)"
```
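As a side note, the rename step in the loop boils down to a single sed substitution. Here it is on a throwaway file (the file name and kubeconfig fragment are made up for illustration):

```shell
# Made-up kubeconfig fragment, just to show the rename performed by the loop.
cat > /tmp/demo-kubeconfig <<'EOF'
contexts:
- context:
    cluster: cluster1
    user: kubernetes-admin-cluster1
  name: kubernetes-admin@cluster1
current-context: kubernetes-admin@cluster1
EOF

# Same sed as in the loop: context names become my-cluster-<cluster name>.
sed -i "s/kubernetes-admin@/my-cluster-/g" /tmp/demo-kubeconfig
grep "my-cluster-cluster1" /tmp/demo-kubeconfig
```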
```shell
# Verify
kubectl config get-contexts

# Step 1
RELEASE_URL=https://github.com/admiraltyio/multicluster-service-account/releases/download/v0.5.1
MANIFEST_URL=$RELEASE_URL/install.yaml
kubectl apply -f $MANIFEST_URL --context $CONTEXT1

# Wait before continuing
kubectl wait deployment.apps/service-account-import-controller --for condition=available -n multicluster-service-account --context $CONTEXT1 --timeout 60s

kubemcsa bootstrap --target-context $CONTEXT1 --source-context $CONTEXT2
```
```shell
# Step 2
kubectl config use-context $CONTEXT2
kubectl create serviceaccount pod-lister
kubectl create role pod-lister --verb=list --resource=pods
kubectl create rolebinding pod-lister --role=pod-lister \
  --serviceaccount=default:pod-lister
kubectl run nginx --image nginx

kubectl config use-context $CONTEXT1
kubectl label namespace default multicluster-service-account=enabled
cat <<EOF | kubectl create -f -
apiVersion: multicluster.admiralty.io/v1alpha1
kind: ServiceAccountImport
metadata:
  name: $CLUSTER2-default-pod-lister
spec:
  clusterName: $CLUSTER2
  namespace: default
  name: pod-lister
---
apiVersion: batch/v1
kind: Job
metadata:
  name: multicluster-client
spec:
  template:
    metadata:
      annotations:
        multicluster.admiralty.io/service-account-import.name: $CLUSTER2-default-pod-lister
    spec:
      restartPolicy: Never
      containers:
      - name: multicluster-client
        image: multicluster-service-account-example-multicluster-client:latest
EOF
```
And here is what I get:
```
$ kubectl get all -A --context $CONTEXT1
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-5c98db65d4-czgm2 1/1 Running 0 16m
kube-system pod/coredns-5c98db65d4-dq4jx 1/1 Running 0 16m
kube-system pod/etcd-cluster1-control-plane 1/1 Running 0 16m
kube-system pod/kindnet-cb6qj 1/1 Running 1 16m
kube-system pod/kube-apiserver-cluster1-control-plane 1/1 Running 0 16m
kube-system pod/kube-controller-manager-cluster1-control-plane 1/1 Running 0 16m
kube-system pod/kube-proxy-jdn4w 1/1 Running 0 16m
kube-system pod/kube-scheduler-cluster1-control-plane 1/1 Running 0 16m
multicluster-service-account-webhook pod/service-account-import-admission-controller-59db985487-rwrxz 1/1 Running 1 9m56s
multicluster-service-account pod/service-account-import-controller-769c6bd86-9pzgb 0/1 CrashLoopBackOff 6 7m16s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 17m
multicluster-service-account-webhook service/service-account-import-admission-controller ClusterIP 10.103.124.154 <none> 443/TCP 9m20s

NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kindnet 1 1 1 1 1 <none> 17m
kube-system daemonset.apps/kube-proxy 1 1 1 1 1 beta.kubernetes.io/os=linux 17m

NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 2/2 2 2 17m
multicluster-service-account-webhook deployment.apps/service-account-import-admission-controller 1/1 1 1 9m57s
multicluster-service-account deployment.apps/service-account-import-controller 0/1 1 0 9m57s

NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-5c98db65d4 2 2 2 16m
multicluster-service-account-webhook replicaset.apps/service-account-import-admission-controller-59db985487 1 1 1 9m57s
multicluster-service-account replicaset.apps/service-account-import-controller-769c6bd86 1 1 0 7m16s
multicluster-service-account replicaset.apps/service-account-import-controller-7c4c8b55d8 0 0 0 9m57s

NAMESPACE NAME COMPLETIONS DURATION AGE
default job.batch/multicluster-client 0/1 3m25s

$ kubectl logs -n multicluster-service-account service-account-import-controller-769c6bd86-9pzgb
2019/11/05 13:03:04 Get https://127.0.0.1:36059/api?timeout=32s: dial tcp 127.0.0.1:36059: connect: connection refused
```
Thank you for your detailed report. I should have spotted the second problem in your first post.
In your setup, the service-account-import-controller pod in cluster 1 is trying to call the Kubernetes API of cluster 2 at 127.0.0.1:36059, but 127.0.0.1 is the loop-back IP, i.e., the IP reserved for the pod to call itself, and there's nothing running or exposed inside the pod at that address.
127.0.0.1:36059 is the address of the Kubernetes API of cluster 2 from your machine (exposed by kind for your convenience). What you need is the address from any pod in cluster 1. Luckily, kind uses the same Docker bridge network for both clusters, so pods can call the other cluster's kind container, where the Kubernetes API is also exposed.
You can extract the required address with `kind get kubeconfig --name $CLUSTER2 --internal`. Multicluster-service-account uses kind for its end-to-end tests, so feel free to use them as a reference: https://github.com/admiraltyio/multicluster-service-account/tree/master/test/e2e
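For illustration only, here is the difference `--internal` makes, mimicked with sed on a throwaway file. The file name and the container-side address are made up for this sketch; in practice, just use the output of `kind get kubeconfig --internal` directly:

```shell
# Made-up kubeconfig fragment with the host-side address kind exposes locally.
cat > /tmp/demo-cluster2-kubeconfig <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:36059
  name: cluster2
EOF

# What --internal effectively changes: point at the kind node container on the
# shared Docker bridge instead of the host's 127.0.0.1 port mapping.
sed -i "s|https://127\.0\.0\.1:36059|https://cluster2-control-plane:6443|" /tmp/demo-cluster2-kubeconfig
grep "server:" /tmp/demo-cluster2-kubeconfig
```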
Thanks for the help!
Yes, that was the problem. I just tried it and it worked like a charm.
The kind project has already tracked this problem: https://github.com/kubernetes-sigs/kind/issues/950
Hello,
I've tried to reproduce the example from the README.md, without success.
Environment:
Reproduce:
Any idea what is happening? Thanks in advance.