loft-sh / vcluster

vCluster - Create fully functional virtual Kubernetes clusters - Each vcluster runs inside a namespace of the underlying k8s cluster. It's cheaper than creating separate full-blown clusters and it offers better multi-tenancy and isolation than regular namespaces.
https://www.vcluster.com
Apache License 2.0

loft-sh/vcluster-config not in sync with loft-sh/vcluster releases #2164

Open janwillies opened 2 months ago

janwillies commented 2 months ago

What happened?

I'm trying to create a default config from the types in the vcluster-config repo and then install a vcluster helm chart with the resulting values.yaml. This works fine if I take the latest alpha helm chart and HEAD of vcluster-config, but I'm having trouble finding the commit that corresponds to the v0.20.0 release.
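Roughly, the idea is to load a values.yaml into the typed config, adjust it, and marshal it back with sigs.k8s.io/yaml; something along these lines (the import path and `Config` type name shown here may not be exact):

```go
package main

// Minimal sketch of the workflow: unmarshal a chart values.yaml into the
// typed config from vcluster-config, tweak it, and marshal it back.
// The import path of the config package is an assumption and may differ.

import (
	"fmt"
	"os"

	"github.com/loft-sh/vcluster-config/config" // assumed import path
	"sigs.k8s.io/yaml"
)

func main() {
	// Start from the chart's default values.yaml (obtained separately,
	// e.g. via `helm show values`), so defaults stay in sync with the chart.
	raw, err := os.ReadFile("values.yaml")
	if err != nil {
		panic(err)
	}

	cfg := &config.Config{}
	if err := yaml.Unmarshal(raw, cfg); err != nil {
		panic(err)
	}

	// ... adjust typed fields here ...

	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```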

What did you expect to happen?

a git tag for vcluster-v0.20.0 in the vcluster-config repo
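That would allow pinning the config module to the matching release, e.g. something like this (the tag below is hypothetical, it's exactly what's missing today):

```console
$ go get github.com/loft-sh/vcluster-config@v0.20.0
```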

How can we reproduce it (as minimally and precisely as possible)?

create a values.yaml like this: https://go.dev/play/p/UofVmpq03g5 and try to install the vcluster-0.20.0 helm chart:
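For example, with an install command along these lines (release name and namespace are placeholders):

```console
$ helm upgrade --install my-vcluster vcluster \
    --repo https://charts.loft.sh \
    --version 0.20.0 \
    --namespace team-x --create-namespace \
    --values values.yaml
```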

```
failed to install release: values don't meet the specifications of the schema(s) in the following chart(s): vcluster:
- integrations: Additional property kubeVirt is not allowed
```

The kubeVirt property was introduced on July 10th (in bdf2eb4), but the eks property was only removed later, on July 23rd (in 0541349). vcluster 0.20.0 complains that it doesn't know either one.
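A stripped-down values.yaml that contains only the new key should trigger the same schema error against the 0.20.0 chart, since any property the schema doesn't know under integrations gets rejected (untested sketch):

```yaml
# sketch: integrations.kubeVirt only exists in newer vcluster-config commits,
# so the vcluster-0.20.0 chart schema rejects it as an additional property
integrations:
  kubeVirt: {}
```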

Anything else we need to know?

No response

Host cluster Kubernetes version

```console
$ kubectl version
1.31
```

vcluster version

```console
$ vcluster --version
0.20.0
```

VCluster Config

```yaml
controlPlane:
  advanced:
    globalMetadata: {}
    headlessService: {}
    serviceAccount:
      enabled: true
    virtualScheduler: {}
    workloadServiceAccount:
      enabled: true
  backingStore:
    database:
      embedded: {}
      external: {}
    etcd:
      deploy:
        headlessService:
          enabled: true
        service:
          enabled: true
        statefulSet:
          enableServiceLinks: true
          enabled: true
          highAvailability:
            replicas: 1
          image:
            registry: registry.k8s.io
            repository: etcd
            tag: 3.5.13-0
          persistence:
            volumeClaim:
              accessModes:
                - ReadWriteOnce
              enabled: true
              retentionPolicy: Retain
              size: 5Gi
          pods: {}
          resources:
            requests:
              cpu: 20m
              memory: 150Mi
          scheduling:
            podManagementPolicy: Parallel
          security: {}
      embedded: {}
  coredns:
    deployment:
      pods: {}
      replicas: 1
      resources:
        limits:
          cpu: 1000m
          memory: 170Mi
        requests:
          cpu: 20m
          memory: 64Mi
      topologySpreadConstraints:
        - labelSelector:
            matchLabels:
              k8s-app: kube-dns
          maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
    enabled: true
    service:
      spec:
        type: ClusterIP
  distro:
    k0s:
      image:
        repository: k0sproject/k0s
        tag: v1.30.2-k0s.0
      resources:
        limits:
          cpu: 100m
          memory: 256Mi
        requests:
          cpu: 40m
          memory: 64Mi
    k3s:
      image:
        repository: rancher/k3s
        tag: v1.30.2-k3s1
      resources:
        limits:
          cpu: 100m
          memory: 256Mi
        requests:
          cpu: 40m
          memory: 64Mi
    k8s:
      apiServer:
        enabled: true
        image:
          registry: registry.k8s.io
          repository: kube-apiserver
          tag: v1.30.2
      controllerManager:
        enabled: true
        image:
          registry: registry.k8s.io
          repository: kube-controller-manager
          tag: v1.30.2
      resources:
        limits:
          cpu: 100m
          memory: 256Mi
        requests:
          cpu: 40m
          memory: 64Mi
      scheduler:
        image:
          registry: registry.k8s.io
          repository: kube-scheduler
          tag: v1.30.2
  hostPathMapper: {}
  ingress:
    annotations:
      nginx.ingress.kubernetes.io/backend-protocol: HTTPS
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
    host: my-host.com
    pathType: ImplementationSpecific
    spec:
      tls: []
  proxy:
    bindAddress: 0.0.0.0
    port: 8443
  service:
    enabled: true
    spec:
      type: ClusterIP
  serviceMonitor: {}
  statefulSet:
    enableServiceLinks: true
    highAvailability:
      leaseDuration: 60
      renewDeadline: 40
      replicas: 1
      retryPeriod: 15
    image:
      registry: ghcr.io
      repository: loft-sh/vcluster-pro
    persistence:
      binariesVolume:
        - emptyDir: {}
          name: binaries
      volumeClaim:
        accessModes:
          - ReadWriteOnce
        enabled: auto
        retentionPolicy: Retain
        size: 5Gi
    pods: {}
    probes:
      livenessProbe:
        enabled: true
      readinessProbe:
        enabled: true
      startupProbe:
        enabled: true
    resources:
      limits:
        ephemeral-storage: 8Gi
        memory: 2Gi
      requests:
        cpu: 200m
        ephemeral-storage: 400Mi
        memory: 256Mi
    scheduling:
      podManagementPolicy: Parallel
    security:
      containerSecurityContext:
        allowPrivilegeEscalation: false
        runAsGroup: 0
        runAsUser: 0
experimental:
  deploy:
    host: {}
    vcluster: {}
  genericSync:
    clusterRole: {}
    role: {}
  isolatedControlPlane: {}
  multiNamespaceMode: {}
  syncSettings:
    setOwner: true
  virtualClusterKubeConfig: {}
exportKubeConfig:
  context: ""
  secret: {}
  server: ""
integrations:
  externalSecrets:
    sync:
      clusterStores:
        selector: {}
      externalSecrets:
        enabled: true
      stores: {}
    webhook: {}
  kubeVirt:
    apiService:
      service: {}
    sync:
      dataVolumes: {}
      virtualMachineClones:
        enabled: true
      virtualMachineInstanceMigrations:
        enabled: true
      virtualMachineInstances:
        enabled: true
      virtualMachinePools:
        enabled: true
      virtualMachines:
        enabled: true
    webhook:
      enabled: true
  metricsServer:
    apiService:
      service: {}
    nodes: true
    pods: true
networking:
  advanced:
    clusterDomain: cluster.local
    proxyKubelets:
      byHostname: true
      byIP: true
  replicateServices: {}
policies:
  centralAdmission: {}
  limitRange:
    default:
      cpu: "1"
      ephemeral-storage: 8Gi
      memory: 512Mi
    defaultRequest:
      cpu: 100m
      ephemeral-storage: 3Gi
      memory: 128Mi
    enabled: auto
  networkPolicy:
    fallbackDns: 8.8.8.8
    outgoingConnections:
      ipBlock:
        cidr: 0.0.0.0/0
        except:
          - 100.64.0.0/10
          - 127.0.0.0/8
          - 10.0.0.0/8
          - 172.16.0.0/12
          - 192.168.0.0/16
      platform: true
  resourceQuota:
    enabled: auto
    quota:
      count/configmaps: 100
      count/endpoints: 40
      count/persistentvolumeclaims: 20
      count/pods: 20
      count/secrets: 100
      count/services: 20
      limits.cpu: 20
      limits.ephemeral-storage: 160Gi
      limits.memory: 40Gi
      requests.cpu: 10
      requests.ephemeral-storage: 60Gi
      requests.memory: 20Gi
      requests.storage: 100Gi
      services.loadbalancers: 1
      services.nodeports: 0
    scopeSelector:
      matchExpressions: []
rbac:
  clusterRole:
    enabled: auto
  role:
    enabled: true
sync:
  fromHost:
    csiDrivers:
      enabled: auto
    csiNodes:
      enabled: auto
    csiStorageCapacities:
      enabled: auto
    events:
      enabled: true
    ingressClasses: {}
    nodes:
      selector: {}
    priorityClasses: {}
    runtimeClasses: {}
    storageClasses:
      enabled: auto
  toHost:
    configMaps:
      enabled: true
    endpoints:
      enabled: true
    ingresses: {}
    networkPolicies: {}
    persistentVolumeClaims:
      enabled: true
    persistentVolumes: {}
    podDisruptionBudgets: {}
    pods:
      enabled: true
      rewriteHosts:
        enabled: true
        initContainer:
          image: library/alpine:3.20
          resources:
            limits:
              cpu: 30m
              memory: 64Mi
            requests:
              cpu: 30m
              memory: 64Mi
    priorityClasses: {}
    secrets:
      enabled: true
    serviceAccounts: {}
    services:
      enabled: true
    storageClasses: {}
    volumeSnapshots: {}
telemetry:
  enabled: true
```