kubebb / core

A declarative component lifecycle management platform
https://kubebb.github.io/website
Apache License 2.0

When using valuesFrom, set does not work in ComponentPlan #278

Closed · laihezhao closed this issue 1 year ago

laihezhao commented 1 year ago

ComponentPlan.yaml:

```yaml
apiVersion: core.kubebb.k8s.com.cn/v1alpha1
kind: ComponentPlan
metadata:
  name: mesh-anywhere
  namespace: kubebb-system
spec:
  approved: true
  name: mesh-anywhere
  version: v5.7.0
  wait: true
  override:
    valuesFrom:
```

The registry is set to 192.168.0.11 in the ConfigMap, but I actually want to use registry 192.168.0.10 via `set`. After I apply the file, I find that the override does not take effect.
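For illustration, a sketch of the intended override is below; since the manifest above is truncated, the ConfigMap name, key, and value path are assumptions based on the description:

```yaml
# Hypothetical reconstruction of the intent: the ConfigMap carries the chart values
# (with registry 192.168.0.11), and `set` is expected to override it to 192.168.0.10.
apiVersion: core.kubebb.k8s.com.cn/v1alpha1
kind: ComponentPlan
metadata:
  name: mesh-anywhere
  namespace: kubebb-system
spec:
  approved: true
  name: mesh-anywhere
  version: v5.7.0
  wait: true
  override:
    valuesFrom:
    - kind: ConfigMap
      name: mesh-anywhere-values    # assumed ConfigMap name
      valuesKey: values.yaml        # assumed key
    set:
    - image.registry=192.168.0.10   # assumed value path; expected to win over the ConfigMap
```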

Abirdcfly commented 1 year ago

Thanks for your feedback @laihezhao. 🙏 Let me find out why.

Abirdcfly commented 1 year ago

Let's test it with a public Helm chart: https://artifacthub.io/packages/helm/bitnami/nginx/15.0.2

Use Helm

If we use the helm command:

```shell
# cat <<EOF > values.yaml
replicaCount: 2
EOF

# helm install nginx bitnami/nginx --version 15.0.2 -f values.yaml --set image.registry=ddd.ccc
```

then we can see there are two replicas and the image is updated:

```shell
# kubectl get po
NAME                     READY   STATUS             RESTARTS   AGE
nginx-78969ff46b-gn4rh   0/1     ImagePullBackOff   0          14s
nginx-78969ff46b-mvpw6   0/1     ImagePullBackOff   0          14s
```
nginx-78969ff46b-gn4rh:

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2023-08-22T02:30:10Z"
  generateName: nginx-78969ff46b-
  labels:
    app.kubernetes.io/instance: nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nginx
    helm.sh/chart: nginx-15.0.2
    pod-template-hash: 78969ff46b
  name: nginx-78969ff46b-gn4rh
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: nginx-78969ff46b
    uid: ec16a67f-8644-40a9-a547-aeac8af9abab
  resourceVersion: "5919"
  uid: 9341f4cd-2393-43b0-97ce-e39008a34d0f
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/instance: nginx
              app.kubernetes.io/name: nginx
          topologyKey: kubernetes.io/hostname
        weight: 1
  automountServiceAccountToken: false
  containers:
  - env:
    - name: BITNAMI_DEBUG
      value: "false"
    - name: NGINX_HTTP_PORT_NUMBER
      value: "8080"
    image: ddd.ccc/bitnami/nginx:1.25.1-debian-11-r0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 6
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      tcpSocket:
        port: http
      timeoutSeconds: 5
    name: nginx
    ports:
    - containerPort: 8080
      name: http
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      initialDelaySeconds: 5
      periodSeconds: 5
      successThreshold: 1
      tcpSocket:
        port: http
      timeoutSeconds: 3
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: kubebb-core-control-plane
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  shareProcessNamespace: false
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2023-08-22T02:30:10Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2023-08-22T02:30:10Z"
    message: 'containers with unready status: [nginx]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2023-08-22T02:30:10Z"
    message: 'containers with unready status: [nginx]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2023-08-22T02:30:10Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - image: ddd.ccc/bitnami/nginx:1.25.1-debian-11-r0
    imageID: ""
    lastState: {}
    name: nginx
    ready: false
    restartCount: 0
    started: false
    state:
      waiting:
        message: 'rpc error: code = Unknown desc = failed to pull and unpack image "ddd.ccc/bitnami/nginx:1.25.1-debian-11-r0": failed to resolve reference "ddd.ccc/bitnami/nginx:1.25.1-debian-11-r0": failed to do request: Head "https://ddd.ccc/v2/bitnami/nginx/manifests/1.25.1-debian-11-r0": dial tcp: lookup ddd.ccc on 192.168.65.254:53: no such host'
        reason: ErrImagePull
  hostIP: 172.18.0.2
  phase: Pending
  podIP: 10.244.0.35
  podIPs:
  - ip: 10.244.0.35
  qosClass: BestEffort
  startTime: "2023-08-22T02:30:10Z"
```
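As a quick cross-check (not part of the original output), `helm get values` should show the value file and the `--set` override merged together, with `--set` taking precedence:

```shell
# Show the user-supplied values for the release; entries passed via --set are merged
# on top of the -f file, so both replicaCount and image.registry should appear here.
helm get values nginx
```

If `image.registry` were missing here, the `--set` flag itself would be the problem rather than ComponentPlan.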

Use ComponentPlan

ComponentPlan:

```yaml
apiVersion: core.kubebb.k8s.com.cn/v1alpha1
kind: ComponentPlan
metadata:
  name: nginx-test
  namespace: kube-system
spec:
  approved: true
  component:
    name: repository-bitnami-sample.nginx
    namespace: kubebb-system
  name: nginx-test
  override:
    valuesFrom:
    - kind: ConfigMap
      name: nginx-test
      valuesKey: values.yaml
    set:
    - image.registry=ddd.ccc
  version: 15.0.2
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-test
  namespace: kube-system
data:
  values.yaml: |
    replicaCount: 2
```

If we apply this, we will get an installed ComponentPlan:

ComponentPlan Status:

```yaml
apiVersion: v1
items:
- apiVersion: core.kubebb.k8s.com.cn/v1alpha1
  kind: ComponentPlan
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"core.kubebb.k8s.com.cn/v1alpha1","kind":"ComponentPlan","metadata":{"annotations":{},"name":"nginx-test","namespace":"kube-system"},"spec":{"approved":true,"component":{"name":"repository-bitnami-sample.nginx","namespace":"kubebb-system"},"name":"nginx-test","override":{"set":["image.registry=ddd.ccc"],"valuesFrom":[{"kind":"ConfigMap","name":"nginx-test","valuesKey":"values.yaml"}]},"version":"15.0.2"}}
    creationTimestamp: "2023-08-22T02:34:25Z"
    finalizers:
    - core.kubebb.k8s.com.cn/finalizer
    generation: 1
    labels:
      core.kubebb.k8s.com.cn/componentplan-release: nginx-test
    name: nginx-test
    namespace: kube-system
    resourceVersion: "6378"
    uid: 28c3bf84-db32-414e-8f81-546736b6c822
  spec:
    approved: true
    component:
      name: repository-bitnami-sample.nginx
      namespace: kubebb-system
    name: nginx-test
    override:
      set:
      - image.registry=ddd.ccc
      valuesFrom:
      - kind: ConfigMap
        name: nginx-test
        valuesKey: values.yaml
    version: 15.0.2
  status:
    conditions:
    - lastTransitionTime: "2023-08-22T02:34:25Z"
      reason: ""
      status: "True"
      type: Approved
    - lastTransitionTime: "2023-08-22T02:34:29Z"
      reason: InstallSuccess
      status: "True"
      type: Actioned
    - lastTransitionTime: "2023-08-22T02:34:29Z"
      reason: ""
      status: "True"
      type: Succeeded
    images:
    - ddd.ccc/bitnami/nginx:1.25.1-debian-11-r0
    installedRevision: 1
    latest: true
    observedGeneration: 1
    resources:
    - NewCreated: true
      apiVersion: v1
      kind: Service
      name: nginx-test
    - NewCreated: true
      apiVersion: apps/v1
      kind: Deployment
      name: nginx-test
kind: List
metadata:
  resourceVersion: ""
```

and this Helm release also has 2 replicas, and the image is updated:

```shell
kubectl get po -n kube-system
NAME                                                READY   STATUS             RESTARTS   AGE
coredns-57575c5f89-4rj7g                            1/1     Running            0          32m
coredns-57575c5f89-4wsjn                            1/1     Running            0          32m
etcd-kubebb-core-control-plane                      1/1     Running            0          32m
kindnet-rtbhp                                       1/1     Running            0          32m
kube-apiserver-kubebb-core-control-plane            1/1     Running            0          32m
kube-controller-manager-kubebb-core-control-plane   1/1     Running            0          32m
kube-proxy-jflg9                                    1/1     Running            0          32m
kube-scheduler-kubebb-core-control-plane            1/1     Running            0          32m
nginx-test-85f6c4974b-6st6h                         0/1     ImagePullBackOff   0          40s
nginx-test-85f6c4974b-vjg45                         0/1     ImagePullBackOff   0          40s
```
nginx-test-85f6c4974b-vjg45:

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2023-08-22T02:34:29Z"
  generateName: nginx-test-85f6c4974b-
  labels:
    app.kubernetes.io/instance: nginx-test
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nginx
    helm.sh/chart: nginx-15.0.2
    pod-template-hash: 85f6c4974b
  name: nginx-test-85f6c4974b-vjg45
  namespace: kube-system
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: nginx-test-85f6c4974b
    uid: 4dfa8767-068e-4065-be31-76960e9f4d7b
  resourceVersion: "6520"
  uid: d5ab5256-860e-44a8-9c9b-471053ce50eb
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/instance: nginx-test
              app.kubernetes.io/name: nginx
          topologyKey: kubernetes.io/hostname
        weight: 1
  automountServiceAccountToken: false
  containers:
  - env:
    - name: BITNAMI_DEBUG
      value: "false"
    - name: NGINX_HTTP_PORT_NUMBER
      value: "8080"
    image: ddd.ccc/bitnami/nginx:1.25.1-debian-11-r0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 6
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      tcpSocket:
        port: http
      timeoutSeconds: 5
    name: nginx
    ports:
    - containerPort: 8080
      name: http
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      initialDelaySeconds: 5
      periodSeconds: 5
      successThreshold: 1
      tcpSocket:
        port: http
      timeoutSeconds: 3
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: kubebb-core-control-plane
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  shareProcessNamespace: false
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2023-08-22T02:34:29Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2023-08-22T02:34:29Z"
    message: 'containers with unready status: [nginx]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2023-08-22T02:34:29Z"
    message: 'containers with unready status: [nginx]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2023-08-22T02:34:29Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - image: ddd.ccc/bitnami/nginx:1.25.1-debian-11-r0
    imageID: ""
    lastState: {}
    name: nginx
    ready: false
    restartCount: 0
    started: false
    state:
      waiting:
        message: 'rpc error: code = Unknown desc = failed to pull and unpack image "ddd.ccc/bitnami/nginx:1.25.1-debian-11-r0": failed to resolve reference "ddd.ccc/bitnami/nginx:1.25.1-debian-11-r0": failed to do request: Head "https://ddd.ccc/v2/bitnami/nginx/manifests/1.25.1-debian-11-r0": dial tcp: lookup ddd.ccc on 192.168.65.254:53: no such host'
        reason: ErrImagePull
  hostIP: 172.18.0.2
  phase: Pending
  podIP: 10.244.0.36
  podIPs:
  - ip: 10.244.0.36
  qosClass: BestEffort
  startTime: "2023-08-22T02:34:29Z"
```
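As an additional check (not shown in the original output), the overridden registry can also be read back from the ComponentPlan status, which already lists the `ddd.ccc` image above; the resource name `componentplan` below is an assumption about how the CRD is registered with the API server:

```shell
# Read the images recorded in the ComponentPlan status; the overridden registry
# (ddd.ccc) should show up here if `set` was applied on top of valuesFrom.
kubectl get componentplan nginx-test -n kube-system -o jsonpath='{.status.images}'
```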

From the above example, we can see that the ComponentPlan is working normally, so you may be hitting a more edge-case situation.
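For completeness, a minimal sketch of the suspected edge case is below; the values are assumptions taken from the original report: the same key appears both in the ConfigMap referenced by `valuesFrom` and in `set`, and, following Helm's usual precedence, the `set` entry is expected to win.

```yaml
# Hypothetical edge case: the key passed via `set` also exists in the valuesFrom ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-test
  namespace: kube-system
data:
  values.yaml: |
    replicaCount: 2
    image:
      registry: 192.168.0.11      # same key as the `set` entry below
---
apiVersion: core.kubebb.k8s.com.cn/v1alpha1
kind: ComponentPlan
metadata:
  name: nginx-test
  namespace: kube-system
spec:
  approved: true
  component:
    name: repository-bitnami-sample.nginx
    namespace: kubebb-system
  name: nginx-test
  version: 15.0.2
  override:
    valuesFrom:
    - kind: ConfigMap
      name: nginx-test
      valuesKey: values.yaml
    set:
    - image.registry=192.168.0.10   # expected to take precedence over the ConfigMap value
```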

laihezhao commented 1 year ago

I updated the image and re-verified it, and the test passed. Thank you!