Open danfengliu opened 2 years ago
According to the apimachinery code, the valid operators are "=", "==", "in", "notin", "exists", "!", "gt", and "lt"; "!=" is not included. https://github.com/kubernetes/apimachinery/blob/v0.22.2/pkg/apis/meta/v1/helpers.go#L105-L149
As of now, no released version of https://github.com/kubernetes/apimachinery supports "!=" as an operator in this parser.
Please check the Velero doc content. Also, "!=" is supported by kubectl label selection, so it should be supported by Velero as well.
After reading the k8s documentation and code, I think you are right. Since k8s already supports it, Velero can adopt the way k8s works.
Recording the related code here for reference: k8s.io/apimachinery/pkg/labels/selector.go parseRequirement(), parseOperator(), visitBySelector()
I think this is viable. The error is reported here: https://github.com/vmware-tanzu/velero/blob/main/pkg/cmd/util/flag/labelselector.go#L38-L45 In this function, the string is parsed into a LabelSelector:
type LabelSelector struct {
// matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
// map is equivalent to an element of matchExpressions, whose key field is "key", the
// operator is "In", and the values array contains only "value". The requirements are ANDed.
// +optional
MatchLabels map[string]string `json:"matchLabels,omitempty" protobuf:"bytes,1,rep,name=matchLabels"`
// matchExpressions is a list of label selector requirements. The requirements are ANDed.
// +optional
MatchExpressions []LabelSelectorRequirement `json:"matchExpressions,omitempty" protobuf:"bytes,2,rep,name=matchExpressions"`
}
LabelSelector is a common query option in the k8s client.
In the kubectl code, the selector string is either not decoded in the client (in which case it is sent to the api-server, which checks whether it is valid) or decoded into labels.Selector:
// Selector represents a label selector.
type Selector interface {
// Matches returns true if this selector matches the given set of labels.
Matches(Labels) bool
// Empty returns true if this selector does not restrict the selection space.
Empty() bool
// String returns a human readable string that represents this selector.
String() string
// Add adds requirements to the Selector
Add(r ...Requirement) Selector
// Requirements converts this interface into Requirements to expose
// more detailed selection information.
// If there are querying parameters, it will return converted requirements and selectable=true.
// If this selector doesn't want to select anything, it will return selectable=false.
Requirements() (requirements Requirements, selectable bool)
// Make a deep copy of the selector.
DeepCopySelector() Selector
// RequiresExactMatch allows a caller to introspect whether a given selector
// requires a single specific label to be set, and if so returns the value it
// requires.
RequiresExactMatch(label string) (value string, found bool)
}
The parse function associated with this interface can handle "!=".
In the Velero code, LabelSelector is mainly used in pkg/backup/item_collector.go: https://github.com/vmware-tanzu/velero/blob/main/pkg/backup/item_collector.go#L235-L254
There, the metav1.LabelSelector is converted into a labels.Selector. I think it's possible for Velero to store it as a Selector from the beginning.
Of course, such a change may impact the backup function, so we should test enough to guarantee there is no behavior change or bug.
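To make the difference concrete, here is a pure-Go sketch of how a string-based parser (like labels.Parse) can support "!=", while the structured metav1 form cannot express it. This is a simplified illustration, not the real apimachinery code; it handles a single requirement with the "=", "==", and "!=" operators only:

```go
package main

import (
	"fmt"
	"strings"
)

// Requirement is a hypothetical, simplified stand-in for
// k8s.io/apimachinery/pkg/labels.Requirement.
type Requirement struct {
	Key, Op, Value string
}

// parseRequirement sketches string-based parsing of a single requirement.
// Checking "!=" before "=" matters, since "!=" contains "=".
func parseRequirement(s string) (Requirement, error) {
	for _, op := range []string{"!=", "==", "="} {
		if i := strings.Index(s, op); i > 0 {
			return Requirement{Key: s[:i], Op: op, Value: s[i+len(op):]}, nil
		}
	}
	return Requirement{}, fmt.Errorf("unable to parse requirement %q", s)
}

// Matches evaluates the requirement against a label set. For "!=",
// objects missing the key also match, as in Kubernetes.
func (r Requirement) Matches(labels map[string]string) bool {
	v, ok := labels[r.Key]
	switch r.Op {
	case "=", "==":
		return ok && v == r.Value
	case "!=":
		return !ok || v != r.Value
	}
	return false
}

func main() {
	r, err := parseRequirement("component!=velero")
	fmt.Println(r, err)
	fmt.Println(r.Matches(map[string]string{"component": "kube-dns"})) // true
	fmt.Println(r.Matches(map[string]string{"component": "velero"}))   // false
}
```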
Even the exists / doesnotexist filters do not work as expected in Velero:
An error occurred: invalid argument "l exists" for "-l, --selector" flag: couldn't parse the selector string "l exists": unable to parse requirement: found 'exists', expected: in, notin, =, ==, !=, gt, lt
This is also due to the issue in the ParseToLabelSelector function used in the CLI.
The official documentation says this has been supported since 1.8, so why does 1.9 still report that it is not supported?
notin works fine; should the official documentation be revised to avoid misleading users?
Trying to resolve this issue in PR #5974.
After changing --selector from metav1.LabelSelector to labels.Selector, the "!=" operator is supported, but this requires changing the backup.Spec.LabelSelector type from metav1.LabelSelector to string, because metav1.LabelSelector doesn't support != in k8s.io/apimachinery, and labels.Selector is an interface, so it cannot be used by the k8s CRD framework.
Updating the CRD's parameter type is tricky: it means Velero cannot sync backups created by previous versions. Syncing fails with a JSON unmarshal error.
ERRO[0007] Error getting backup metadata from backup store backup=test01 backupLocation=velero/default controller=backup-sync error="json: cannot unmarshal object into Go struct field BackupSpec.spec.labelSelector of type string" error.file="/root/go/src/github.com/blackpiglet/velero/pkg/persistence/object_store.go:309" error.function="github.com/vmware-tanzu/velero/pkg/persistence.(*objectBackupStore).GetBackupMetadata" logSource="/root/go/src/github.com/blackpiglet/velero/pkg/controller/backup_sync_controller.go:147"
Found this issue #1218.
The comments referenced in the reply are:
// Note:
// There are two different styles of label selectors used in versioned types:
// an older style which is represented as just a string in versioned types, and a
// newer style that is structured. LabelSelector is an internal representation for the
// latter style.
I think just removing the != example from the related velero.io documentation should be enough.
+1. Since we are close to v1.11 FC and the problem was not introduced in recent releases, let's update the doc and defer the fix.
Compared the != and notin operators in the --selector parameter: "component notin (velero)" is the same as "component!=velero". Users can use notin to replace the unsupported != operator.
./_output/bin/linux/amd64/velero backup create --include-resources=pods --selector="component notin (velero)" test05
./_output/bin/linux/amd64/velero backup describe test05 --details
Name: test05
Namespace: velero
Labels: velero.io/storage-location=default
Annotations: velero.io/source-cluster-k8s-gitversion=v1.25.6-gke.1000
velero.io/source-cluster-k8s-major-version=1
velero.io/source-cluster-k8s-minor-version=25
Phase: Completed
Errors: 0
Warnings: 0
Namespaces:
Included: *
Excluded: <none>
Resources:
Included: pods
Excluded: <none>
Cluster-scoped: auto
Label selector: component notin (velero)
Storage Location: default
Velero-Native Snapshot PVs: auto
TTL: 720h0m0s
CSISnapshotTimeout: 10m0s
ItemOperationTimeout: 1h0m0s
Hooks: <none>
Backup Format Version: 1.1.0
Started: 2023-03-20 02:33:27 +0000 UTC
Completed: 2023-03-20 02:33:29 +0000 UTC
Expiration: 2023-04-19 02:33:27 +0000 UTC
Resource List:
v1/Pod:
- kube-system/event-exporter-gke-755c4b4d97-rc96b
- kube-system/fluentbit-gke-2jh8b
- kube-system/fluentbit-gke-g4fhg
- kube-system/fluentbit-gke-n287x
- kube-system/gke-metrics-agent-5ssgc
- kube-system/gke-metrics-agent-6r76f
- kube-system/gke-metrics-agent-mlhv8
- kube-system/konnectivity-agent-868f84fc89-j25cq
- kube-system/konnectivity-agent-868f84fc89-q6wtp
- kube-system/konnectivity-agent-868f84fc89-r79h6
- kube-system/konnectivity-agent-autoscaler-7dc78c8c9-fqzvs
- kube-system/kube-dns-7b9b6ffbd9-6k8nk
- kube-system/kube-dns-7b9b6ffbd9-rxrrz
- kube-system/kube-dns-autoscaler-5f56f8997c-vw9wg
- kube-system/kube-proxy-gke-jxun-test-default-pool-277f9feb-tfap
- kube-system/kube-proxy-gke-jxun-test-default-pool-3c6f2e6e-krl1
- kube-system/kube-proxy-gke-jxun-test-default-pool-5b9bc518-g4ws
- kube-system/l7-default-backend-5cd9c9cdf7-vdwcc
- kube-system/metrics-server-v0.5.2-67864775dc-267vs
- kube-system/pdcsi-node-6g4zw
- kube-system/pdcsi-node-6skj9
- kube-system/pdcsi-node-f52fv
- upgrade/hello-app-7579d6745b-lkw85
- upgrade01/hello-app-7579d6745b-4brz8
Velero-Native Snapshots: <none included>
CSI Volume Snapshots: <none included>
kubectl get pods -A -l "component notin (velero)"
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system event-exporter-gke-755c4b4d97-rc96b 2/2 Running 0 18d
kube-system fluentbit-gke-2jh8b 2/2 Running 0 18d
kube-system fluentbit-gke-g4fhg 2/2 Running 0 18d
kube-system fluentbit-gke-n287x 2/2 Running 0 18d
kube-system gke-metrics-agent-5ssgc 2/2 Running 0 9d
kube-system gke-metrics-agent-6r76f 2/2 Running 0 9d
kube-system gke-metrics-agent-mlhv8 2/2 Running 0 9d
kube-system konnectivity-agent-868f84fc89-j25cq 1/1 Running 0 9d
kube-system konnectivity-agent-868f84fc89-q6wtp 1/1 Running 0 9d
kube-system konnectivity-agent-868f84fc89-r79h6 1/1 Running 0 9d
kube-system konnectivity-agent-autoscaler-7dc78c8c9-fqzvs 1/1 Running 0 9d
kube-system kube-dns-7b9b6ffbd9-6k8nk 4/4 Running 0 18d
kube-system kube-dns-7b9b6ffbd9-rxrrz 4/4 Running 0 18d
kube-system kube-dns-autoscaler-5f56f8997c-vw9wg 1/1 Running 0 18d
kube-system kube-proxy-gke-jxun-test-default-pool-277f9feb-tfap 1/1 Running 0 18d
kube-system kube-proxy-gke-jxun-test-default-pool-3c6f2e6e-krl1 1/1 Running 0 18d
kube-system kube-proxy-gke-jxun-test-default-pool-5b9bc518-g4ws 1/1 Running 0 18d
kube-system l7-default-backend-5cd9c9cdf7-vdwcc 1/1 Running 0 18d
kube-system metrics-server-v0.5.2-67864775dc-267vs 2/2 Running 0 18d
kube-system pdcsi-node-6g4zw 2/2 Running 0 18d
kube-system pdcsi-node-6skj9 2/2 Running 0 18d
kube-system pdcsi-node-f52fv 2/2 Running 0 18d
upgrade hello-app-7579d6745b-lkw85 1/1 Running 444 (46m ago) 18d
upgrade01 hello-app-7579d6745b-4brz8 1/1 Running 444 (46m ago) 18d
kubectl get pods -A -l "component!=velero"
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system event-exporter-gke-755c4b4d97-rc96b 2/2 Running 0 18d
kube-system fluentbit-gke-2jh8b 2/2 Running 0 18d
kube-system fluentbit-gke-g4fhg 2/2 Running 0 18d
kube-system fluentbit-gke-n287x 2/2 Running 0 18d
kube-system gke-metrics-agent-5ssgc 2/2 Running 0 9d
kube-system gke-metrics-agent-6r76f 2/2 Running 0 9d
kube-system gke-metrics-agent-mlhv8 2/2 Running 0 9d
kube-system konnectivity-agent-868f84fc89-j25cq 1/1 Running 0 9d
kube-system konnectivity-agent-868f84fc89-q6wtp 1/1 Running 0 9d
kube-system konnectivity-agent-868f84fc89-r79h6 1/1 Running 0 9d
kube-system konnectivity-agent-autoscaler-7dc78c8c9-fqzvs 1/1 Running 0 9d
kube-system kube-dns-7b9b6ffbd9-6k8nk 4/4 Running 0 18d
kube-system kube-dns-7b9b6ffbd9-rxrrz 4/4 Running 0 18d
kube-system kube-dns-autoscaler-5f56f8997c-vw9wg 1/1 Running 0 18d
kube-system kube-proxy-gke-jxun-test-default-pool-277f9feb-tfap 1/1 Running 0 18d
kube-system kube-proxy-gke-jxun-test-default-pool-3c6f2e6e-krl1 1/1 Running 0 18d
kube-system kube-proxy-gke-jxun-test-default-pool-5b9bc518-g4ws 1/1 Running 0 18d
kube-system l7-default-backend-5cd9c9cdf7-vdwcc 1/1 Running 0 18d
kube-system metrics-server-v0.5.2-67864775dc-267vs 2/2 Running 0 18d
kube-system pdcsi-node-6g4zw 2/2 Running 0 18d
kube-system pdcsi-node-6skj9 2/2 Running 0 18d
kube-system pdcsi-node-f52fv 2/2 Running 0 18d
upgrade hello-app-7579d6745b-lkw85 1/1 Running 444 (49m ago) 18d
upgrade01 hello-app-7579d6745b-4brz8 1/1 Running 444 (49m ago) 18d
What steps did you take and what happened:
What did you expect to happen:
The following information will help us better understand what's going on:
If you are using velero v1.7.0+:
Please use velero debug --backup <backupname> --restore <restorename> to generate the support bundle and attach it to this issue. For more options, refer to velero debug --help.
If you are using earlier versions:
Please provide the output of the following commands (Pasting long output into a GitHub gist or other pastebin is fine.)
kubectl logs deployment/velero -n velero
velero backup describe <backupname>
or kubectl get backup/<backupname> -n velero -o yaml
velero backup logs <backupname>
velero restore describe <restorename>
or kubectl get restore/<restorename> -n velero -o yaml
velero restore logs <restorename>
Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]
Environment:
Velero version (use velero version):
Velero features (use velero client config get features):
Kubernetes version (use kubectl version):
OS (e.g. from /etc/os-release):
Vote on this issue!
This is an invitation to the Velero community to vote on issues, you can see the project's top voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.