openyurtio / yurt-app-manager

The workload controller manager at the NodePool level in an OpenYurt cluster
Apache License 2.0

[BUG] config `YurtAppSet.topology.Pool.Patch` invalid #139

Open SQxiaoxiaomeng opened 1 year ago

SQxiaoxiaomeng commented 1 year ago

What happened:

First step:

cat <<EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtAppSet
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: ud-test
spec:
  selector:
    matchLabels:
      app: ud-test
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          app: ud-test
      spec:
        template:
          metadata:
            labels:
              app: ud-test
          spec:
            containers:
              - name: nginx
                image: nginx:1.19.3
  topology:
    pools:
    - name: beijing
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values:
          - beijing
      replicas: 1
      patch:
        spec:
          template:
            spec:
              containers:
                - name: nginx
                  image: nginx:1.19.0
    - name: hangzhou
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values:
          - hangzhou
      replicas: 2
      tolerations:
      - effect: NoSchedule
        key: apps.openyurt.io/example
        operator: Exists
  revisionHistoryLimit: 5
EOF

Second step:

Executing `kubectl get yas ud-test -o yaml` returns the following YAML:

...
spec:
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      app: ud-test
  topology:
    pools:
    - name: beijing
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values:
          - beijing
      patch: {}
      replicas: 1
    - name: hangzhou
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values:
          - hangzhou
      replicas: 2
      tolerations:
      - effect: NoSchedule
        key: apps.openyurt.io/example
        operator: Exists
...

The patch in the beijing nodePool has been pruned, and the two Deployments have the same spec.template.spec.containers:

[root@kind-k8s yurt-app-manager]# kubectl get deploy -owide
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS         IMAGES                             SELECTOR
ud-test-beijing-5djcs    0/1     1            0           31m   nginx              nginx:1.19.3                       app=ud-test,apps.openyurt.io/pool-name=beijing
ud-test-hangzhou-wgxv4   0/2     2            0           31m   nginx              nginx:1.19.3                       app=ud-test,apps.openyurt.io/pool-name=hangzhou

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

CENTOS_MANTISBT_PROJECT="CentOS-7" CENTOS_MANTISBT_PROJECT_VERSION="7" REDHAT_SUPPORT_PRODUCT="centos" REDHAT_SUPPORT_PRODUCT_VERSION="7"

SQxiaoxiaomeng commented 1 year ago

We should add `// +kubebuilder:pruning:PreserveUnknownFields` to `Pool.Patch` in github.com/openyurtio/yurt-app-manager/pkg/yurtappmanager/apis/apps/v1alpha1/yurtappset_types.go:

// Pool defines the detail of a pool.
type Pool struct {
    // Indicates pool name as a DNS_LABEL, which will be used to generate
    // pool workload name prefix in the format '<deployment-name>-<pool-name>-'.
    // Name should be unique between all of the pools under one YurtAppSet.
    // Name is NodePool Name
    Name string `json:"name"`

    // Indicates the node selector to form the pool. Depending on the node selector,
    // pods provisioned could be distributed across multiple groups of nodes.
    // A pool's nodeSelectorTerm is not allowed to be updated.
    // +optional
    NodeSelectorTerm corev1.NodeSelectorTerm `json:"nodeSelectorTerm,omitempty"`

    // Indicates the tolerations the pods under this pool have.
    // A pool's tolerations is not allowed to be updated.
    // +optional
    Tolerations []corev1.Toleration `json:"tolerations,omitempty"`

    // Indicates the number of the pod to be created under this pool.
    // +required
    Replicas *int32 `json:"replicas,omitempty"`

    // Indicates the patch for the templateSpec
    // Now supports strategic merge patch: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/#notes-on-the-strategic-merge-patch
    // Patch takes precedence over Replicas fields
    // If the Patch also modifies the Replicas, use the Replicas value in the Patch
    // +kubebuilder:pruning:PreserveUnknownFields
    // +optional
    Patch *runtime.RawExtension `json:"patch,omitempty"`
}
SQxiaoxiaomeng commented 1 year ago

Quoting k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/types.go: "preserveUnknownFields disables pruning of object fields which are not specified in the OpenAPI schema. apiVersion, kind, metadata and known fields inside metadata are always preserved. Defaults to true in v1beta and will default to false in v1."

In v0.6.0 the CRD apiVersion is apiextensions.k8s.io/v1, while in v0.5.0 it is apiextensions.k8s.io/v1beta1. So this bug did not happen in v0.5.0.
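The difference shows up in the CRD manifests themselves. A sketch (field paths abbreviated; the exact schema nesting is an assumption based on the YurtAppSet layout above): v1beta1 had a single top-level switch that defaulted to keeping unknown fields, while v1 always prunes and requires a per-field opt-out.

```yaml
# apiextensions.k8s.io/v1beta1 (yurt-app-manager v0.5.0 era):
# one global switch, default true, so patch contents survived
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
spec:
  preserveUnknownFields: true   # pruning disabled for the whole object
---
# apiextensions.k8s.io/v1 (v0.6.0): pruning is always on; opt out per field
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
spec:
  versions:
  - name: v1alpha1
    schema:
      openAPIV3Schema:
        # ...path down to spec.topology.pools[].patch elided...
        properties:
          patch:
            type: object
            x-kubernetes-preserve-unknown-fields: true
```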

smallbearstar commented 1 year ago

> copy from k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/types.go preserveUnknownFields disables pruning of object fields which are not specified in the OpenAPI schema. apiVersion, kind, metadata and known fields inside metadata are always preserved. Defaults to true in v1beta and will default to false in v1.
>
> in v0.6.0, crd apiVersion is apiextensions.k8s.io/v1, and in v0.5.0 crd version is apiextensions.k8s.io/v1beta1. So this bug not happened in v0.5.0

Hi, I modified the two files the way you described and then ran `make generate`, but it fails with this error: /root/gopath/src/yurt-app-manager/bin/controller-gen "crd:crdVersions=v1" rbac:roleName=manager-role webhook paths="./..." paths="./pkg/yurtappmanager/..." output:crd:artifacts:config=config/yurt-app-manager/crd/bases output:rbac:artifacts:config=config/yurt-app-manager/rbac output:webhook:artifacts:config=config/yurt-app-manager/webhook /bin/sh: /root/gopath/src/yurt-app-manager/bin/controller-gen: No such file or directory

How can I fix this?

SQxiaoxiaomeng commented 1 year ago

@smallbearstar Install controller-gen at /root/gopath/src/yurt-app-manager/bin/controller-gen. Normally, running `make generate` installs controller-gen there automatically.
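For reference, the manual equivalent of what `make generate` bootstraps is roughly the following (a sketch: the bin path comes from the error message above, and the controller-tools version is an assumption; check the version pinned in the repo's Makefile):

```shell
# Repo-local bin directory that the Makefile invokes (from the error message above)
REPO_BIN=/root/gopath/src/yurt-app-manager/bin
# Assumed controller-tools version; use the one pinned in the Makefile
CG_VERSION=v0.7.0
# The install command that make would run on your behalf:
echo "GOBIN=$REPO_BIN go install sigs.k8s.io/controller-tools/cmd/controller-gen@$CG_VERSION"
```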

smallbearstar commented 1 year ago

> @smallbearstar install controller-gen in /root/gopath/src/yurt-app-manager/bin/controller-gen,but run make generate will install controller-gen automatically.

Then after modifying the files, what else do I need to do so that the OpenYurt CRD supports the patch field? I need the patch feature now, please help me!

SQxiaoxiaomeng commented 1 year ago

@smallbearstar Are you deploying with all_in_one.yaml? Find the YurtAppSet CRD resource, add `x-kubernetes-preserve-unknown-fields: true` under the patch field as shown below, and re-apply it:

                        patch:
                          description: Indicates the patch for the templateSpec Now
                            support strategic merge patch: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/#notes-on-the-strategic-merge-patch
                            Patch takes precedence over Replicas fields If the Patch
                            also modifies the Replicas, use the Replicas value in
                            the Patch
                          type: object
                          x-kubernetes-preserve-unknown-fields: true
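An alternative to editing and re-applying the whole all_in_one.yaml is to patch the live CRD. A hypothetical JSON-patch document for this (the CRD name `yurtappsets.apps.openyurt.io` and the schema path below assume the v1 CRD layout with a single entry under spec.versions; verify both before applying, e.g. with `kubectl patch crd yurtappsets.apps.openyurt.io --type=json --patch-file patch.json`):

```json
[
  {
    "op": "add",
    "path": "/spec/versions/0/schema/openAPIV3Schema/properties/spec/properties/topology/properties/pools/items/properties/patch/x-kubernetes-preserve-unknown-fields",
    "value": true
  }
]
```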
kadisi commented 1 year ago

x-kubernetes-preserve-unknown-fields

We may need to update the all_in_one.yaml file. @SQxiaoxiaomeng