Closed: sriram-kannan-infoblox closed this issue 2 years ago
apiserver-0 0/1 ContainerCreating 0 46m
What's the reason for this ContainerCreating status? I encountered this before due to the Docker pull limit, but that doesn't seem to be your case. How about checking why the container has been stuck in ContainerCreating for 46 minutes, e.g. describing the pod for additional info?
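For example, something along these lines should surface the mount/pull events (using the tenant namespace from the walkthrough output, default-e4d075-vc-sample-1):
kubectl describe pod apiserver-0 -n default-e4d075-vc-sample-1
kubectl get events -n default-e4d075-vc-sample-1 --sort-by=.lastTimestamp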
Good point, I checked the pod and the failure is due to:
Warning FailedMount 5m43s (x50 over 91m) kubelet, minikube MountVolume.SetUp failed for volume "front-proxy-ca" : secret "front-proxy-ca" not found
Warning FailedMount 33s (x12 over 86m) kubelet, minikube Unable to attach or mount volumes: unmounted volumes=[front-proxy-ca], unattached volumes=[apiserver-ca front-proxy-ca root-ca serviceaccount-rsa default-token-8xxsj]: timed out waiting for the condition
OK, it looks like the CA has an issue, and to my limited knowledge those CAs are created by CAPN directly. @christopherhein, any insight for further troubleshooting on this?
It looks to me like minikube doesn't create the certificates in /etc/kubernetes/pki, unlike kubeadm. Do we need the front-proxy for the virtual cluster to work?
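A quick way to confirm that is to list the secrets created in the tenant namespace; if front-proxy-ca is missing while apiserver-ca, root-ca, and serviceaccount-rsa exist, the mount will keep failing:
kubectl get secrets -n default-e4d075-vc-sample-1
kubectl get secret front-proxy-ca -n default-e4d075-vc-sample-1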
@sriram-kannan-infoblox this was introduced in https://github.com/kubernetes-sigs/cluster-api-provider-nested/pull/167 . Can you make sure you are using the latest code and have built all of the images/binaries from it?
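A minimal sketch of that, with hypothetical make targets (check the repo's Makefile for the real target names):
cd cluster-api-provider-nested/virtualcluster
git pull origin main
make build          # hypothetical target for the binaries
make build-images   # hypothetical target for the container images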
Hi @gyliu513, I am only following the steps in the walkthrough demo below and haven't tried to build the images at all. https://github.com/kubernetes-sigs/cluster-api-provider-nested/blob/main/virtualcluster/doc/demo.md
My question is: do we need the aggregated API for the virtual cluster to work? I can go back a few commits and try out the virtual cluster without the aggregated API in minikube, provided the aggregated API change is not a breaking change.
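For example, a rough sketch of going back before that change (the checkout target is a placeholder, not a real hash):
git log --oneline --grep='#167'      # find the merge commit for PR #167
git checkout <merge-commit-for-167>^ # its first parent predates the aggregated API change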
My plan is to try out the virtual cluster in minikube first and then take it to an actual cluster.
Thanks
@sriram-kannan-infoblox as a workaround, please remove the aggregated API support. Check https://github.com/kubernetes-sigs/cluster-api-provider-nested/pull/167/files for what to remove; you only need to update the StatefulSet for the apiserver.
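A rough sketch of that workaround, assuming the apiserver StatefulSet mounts the front-proxy-ca secret and passes the kube-apiserver aggregation flags (the PR diff above is the authoritative list of what to drop):
kubectl -n default-e4d075-vc-sample-1 edit statefulset apiserver
# remove the front-proxy-ca volume, its volumeMount, and the
# --requestheader-*/--proxy-client-* flags; if the vc-manager reconciles
# the StatefulSet back, make the same edit in the ClusterVersion template
# used to create the VirtualCluster instead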
The https://github.com/kubernetes-sigs/cluster-api-provider-nested/blob/main/virtualcluster/doc/demo.md needs an update, as it is not using the latest image.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The https://github.com/kubernetes-sigs/cluster-api-provider-nested/blob/main/virtualcluster/doc/demo.md needs an update, as it is not using the latest image.
Hi @gyliu513, do you have any update on demo.md? Thanks.
/remove-lifecycle stale
The https://github.com/kubernetes-sigs/cluster-api-provider-nested/blob/main/virtualcluster/doc/demo.md needs an update, as it is not using the latest image.
@vincent-pli I recall you opened another issue to track this. What is its status?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
The VirtualCluster fails to create the apiserver, and apiserver-0 is stuck in the ContainerCreating state
kubectl-vc create -f virtualcluster_1_nodeport.yaml -o vc.kubeconfig
2021/07/28 08:20:03 etcd is ready
cannot find sts/apiserver in ns default-e4d075-vc-sample-1: default-e4d075-vc-sample-1/apiserver is not ready in 120 seconds
kubectl get po -n default-e4d075-vc-sample-1
NAME READY STATUS RESTARTS AGE
apiserver-0 0/1 ContainerCreating 0 46m
etcd-0 1/1 Running 1 47m
What steps did you take and what happened: Followed the steps as per the virtual cluster walkthrough demo, and all of them were successful until the Create VirtualCluster step.
https://github.com/kubernetes-sigs/cluster-api-provider-nested/blob/main/virtualcluster/doc/demo.md
During Create VirtualCluster, etcd came up fine but apiserver-0 stayed in the ContainerCreating state.
What did you expect to happen: Expected the apiserver and controller-manager to be in the Running state.
Environment:
- Kubernetes version (use kubectl version): 1.20.2
- OS (e.g. from /etc/os-release): darwin