mazocode opened this issue 3 years ago
I think the ClusterVersion CRD schema introduces the problem. It is not a good idea to embed the full StatefulSet schema directly in the CRD spec; we should define a Pod spec instead.
To quickly work around the problem, please change the ClusterVersion CRD schema to a much simpler version, shown below. I hope this can unblock you, at least for now.
```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  creationTimestamp: null
  labels:
    controller-tools.k8s.io: "1.0"
  name: clusterversions.tenancy.x-k8s.io
spec:
  group: tenancy.x-k8s.io
  names:
    kind: ClusterVersion
    plural: clusterversions
  scope: Cluster
  validation:
    openAPIV3Schema:
      properties:
        apiVersion:
          type: string
        kind:
          type: string
        metadata:
          type: object
        spec:
          properties:
            apiServer:
              properties:
                metadata:
                  type: object
                service:
                  type: object
                statefulset:
                  type: object
              type: object
            controllerManager:
              properties:
                metadata:
                  type: object
                service:
                  type: object
                statefulset:
                  type: object
              type: object
            etcd:
              properties:
                metadata:
                  type: object
                service:
                  type: object
                statefulset:
                  type: object
              type: object
          type: object
        status:
          type: object
      type: object
  version: v1alpha1
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []
```
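To swap the simplified schema in, something like the following should work (a sketch; the filename is an assumption, use wherever you saved the manifest):

```console
$ kubectl apply -f clusterversion-crd-simplified.yaml
```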
Another option is to remove your local controller-gen binary; the make script will then download controller-gen 0.3.0, which seemed to work fine previously.
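For example (a sketch, assuming controller-gen is on your PATH and that the Makefile re-fetches the pinned version when the binary is missing, as described above):

```console
$ rm "$(which controller-gen)"
$ make manifests   # re-downloads controller-gen 0.3.0 and regenerates the CRDs
```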
You can check the Makefile to see more tricks for manipulating the CRD, e.g.:
```makefile
# To work around a known controller gen issue
# https://github.com/kubernetes-sigs/kubebuilder/issues/1544
ifeq (, $(shell which yq))
	@echo "Please install yq for yaml patching. Get it from here: https://github.com/mikefarah/yq"
	@exit
else
	@{ \
	yq w -i config/crds/tenancy.x-k8s.io_clusterversions.yaml "spec.validation.openAPIV3Schema.properties.spec.properties.apiServer.properties.statefulset.properties.spec.properties.template.properties.spec.properties.containers.items.properties.ports.items.required[1]" protocol;\
	yq w -i config/crds/tenancy.x-k8s.io_clusterversions.yaml "spec.validation.openAPIV3Schema.properties.spec.properties.controllerManager.properties.statefulset.properties.spec.properties.template.properties.spec.properties.containers.items.properties.ports.items.required[1]" protocol;\
	yq w -i config/crds/tenancy.x-k8s.io_clusterversions.yaml "spec.validation.openAPIV3Schema.properties.spec.properties.etcd.properties.statefulset.properties.spec.properties.template.properties.spec.properties.containers.items.properties.ports.items.required[1]" protocol;\
	yq w -i config/crds/tenancy.x-k8s.io_clusterversions.yaml "spec.validation.openAPIV3Schema.properties.spec.properties.apiServer.properties.statefulset.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.ports.items.required[1]" protocol;\
	yq w -i config/crds/tenancy.x-k8s.io_clusterversions.yaml "spec.validation.openAPIV3Schema.properties.spec.properties.controllerManager.properties.statefulset.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.ports.items.required[1]" protocol;\
	yq w -i config/crds/tenancy.x-k8s.io_clusterversions.yaml "spec.validation.openAPIV3Schema.properties.spec.properties.etcd.properties.statefulset.properties.spec.properties.template.properties.spec.properties.initContainers.items.properties.ports.items.required[1]" protocol;\
	yq w -i config/crds/tenancy.x-k8s.io_clusterversions.yaml "spec.validation.openAPIV3Schema.properties.spec.properties.apiServer.properties.service.properties.spec.properties.ports.items.required[1]" protocol;\
	yq w -i config/crds/tenancy.x-k8s.io_clusterversions.yaml "spec.validation.openAPIV3Schema.properties.spec.properties.controllerManager.properties.service.properties.spec.properties.ports.items.required[1]" protocol;\
	yq w -i config/crds/tenancy.x-k8s.io_clusterversions.yaml "spec.validation.openAPIV3Schema.properties.spec.properties.etcd.properties.service.properties.spec.properties.ports.items.required[1]" protocol;\
	}
endif
```
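Each of those yq writes does the same thing: it appends protocol to the required list of the port items, because Kubernetes requires every key listed in x-kubernetes-list-map-keys to also be required (or defaulted). Roughly, the patched subtree should end up looking like this (a sketch of the relevant fragment only, not the full generated schema):

```yaml
ports:
  items:
    required:
    - containerPort
    - protocol   # written by the yq patch; containerPort was already required
    type: object
  type: array
  x-kubernetes-list-map-keys:
  - containerPort
  - protocol
  x-kubernetes-list-type: map
```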
Same result with controller-gen 0.3.0. I think the x-kubernetes-list-map-keys were added in 0.3.0, but at that time there was no validation in place yet. However, your workaround fixed the issue. Here is my first virtual cluster:
```console
$ kubectl -n default-c16bb7-vc-sample-1 get all
NAME                       READY   STATUS    RESTARTS   AGE
pod/apiserver-0            1/1     Running   0          5m26s
pod/controller-manager-0   1/1     Running   0          4m59s
pod/etcd-0                 1/1     Running   0          5m50s

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/apiserver-svc   NodePort    10.90.147.83   <none>        6443:30133/TCP   5m26s
service/etcd            ClusterIP   None           <none>        <none>           5m50s

NAME                                  READY   AGE
statefulset.apps/apiserver            1/1     5m26s
statefulset.apps/controller-manager   1/1     5m
statefulset.apps/etcd                 1/1     5m50s
```
Is there a way to enforce a specific runtimeClassName for pods with the syncer? This would be great for enforcing tolerations and a container runtime like Kata for pods running on the super cluster.
Forgot a `make manifests` ...
... works fine with controller-gen 0.3.0 and the workaround too :)
> Is there a way to enforce a specific runtimeClassName for pods with the syncer? This would be great for enforcing tolerations and a container runtime like Kata for pods running on the super cluster.
If the vPod specifies runtimeClassName as Kata, it should work. If you want to enforce/overwrite the vPod's runtimeClassName so it is fixed to Kata, you need to change the syncer code.
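If you go that route, here is a minimal sketch of the kind of helper you would wire into the syncer's pod conversion path before the pod is created in the super cluster (the package and function names are hypothetical, not actual syncer APIs):

```go
package conversion

import (
	corev1 "k8s.io/api/core/v1"
)

// forceRuntimeClass overwrites the pod's runtimeClassName so that every
// synced pod runs under the given runtime class (e.g. "kata") in the
// super cluster, regardless of what the vPod specified.
func forceRuntimeClass(pod *corev1.Pod, runtimeClass string) {
	pod.Spec.RuntimeClassName = &runtimeClass
}
```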
/retitle Unable to create a VirtualCluster on k8s v1.20.2
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
We have another issue creating a VirtualCluster with 1.20 where we have apiserver v1.19:
{"level":"error","ts":1664969234.2211943,"logger":"controller-runtime.manager.controller.virtualcluster","msg":"Reconciler error","reconciler group":"tenancy.x-k8s.io","reconciler kind":"VirtualCluster","name":"test","namespace":"default","error":"VirtualCluster.tenancy.x-k8s.io \"test\" is invalid: [status.reason: Invalid value: \"null\": status.reason in body must be of type string: \"null\", status.message: Invalid value: \"null\": status.message in body must be of type string: \"null\", status.phase: Invalid value: \"null\": status.phase in body must be of type string: \"null\"]"}
It is fixed in https://github.com/kubernetes/kubernetes/pull/95423, and I will shortly test converting the status fields to pointers so this is compatible with 1.19 too (see https://github.com/fluid-cloudnative/fluid/issues/1551#issuecomment-1072996131).
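For reference, the conversion being tested would look roughly like this (a sketch; the field set is inferred from the error message above, not the actual VirtualClusterStatus definition). With pointers plus omitempty, an unset field is omitted from the serialized object instead of being sent as null, which the v1.19 apiserver rejects against a type: string schema:

```go
package v1alpha1

// Sketch: pointer status fields with omitempty serialize as absent rather
// than "null" when unset, so a v1.19 apiserver no longer rejects them.
type VirtualClusterStatus struct {
	Phase   *string `json:"phase,omitempty"`
	Message *string `json:"message,omitempty"`
	Reason  *string `json:"reason,omitempty"`
}
```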
Problem

Virtual cluster does not deploy with k8s v1.20.2. Output from vc-manager:

The namespace and secrets were created, but none of the statefulsets from the ClusterVersion.
What I did

1. Build kubectl-vc
2. Create new CRDs (see https://github.com/kubernetes-sigs/cluster-api-provider-nested/issues/62)
3. Install CRD
4. Create ns, rbac, deployment, ... I've added `events` to the RBAC because of this:
5. Create a new ClusterVersion. Had to remove `kind` and `apiVersion` below `controllerManager:` to match the schema:
6. Create a new VirtualCluster (a minimal sketch follows below)
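For reference, a minimal sketch of that last step, assuming the ClusterVersion created in step 5 was named cv-sample (the names here are illustrative, not from the original post):

```yaml
apiVersion: tenancy.x-k8s.io/v1alpha1
kind: VirtualCluster
metadata:
  name: vc-sample-1
  namespace: default
spec:
  clusterVersionName: cv-sample
```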