kubeflow / spark-operator

Kubernetes operator for managing the lifecycle of Apache Spark applications on Kubernetes.
Apache License 2.0

ERROR: flag provided but not defined: -label-selector-filter #1157

Open sahil-sawhney opened 3 years ago

sahil-sawhney commented 3 years ago

I'm facing an error while deploying the Helm chart. I'm using the following command for helm install:

helm install spark-operator/spark-operator --namespace spark-operator --set webhook.enable=true --set serviceAccounts.spark.name=spark --generate-name --set image.tag=v1beta2-1.1.2-2.4.5

This behaviour was first noticed after a recent commit to the repo with git SHA 4f50304 (12 hours ago at the time of writing).

Shouldn't setting labelSelectorFilter be optional?

And even if I provide labelSelectorFilter as follows, I receive the same error:

helm install spark-operator/spark-operator --namespace spark-operator --set webhook.enable=true --set serviceAccounts.spark.name=spark --generate-name --set image.tag=v1beta2-1.1.2-2.4.5 --set labelSelectorFilter=app=spark
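For context on why the flag appears at all: chart version 1.0.7 seems to pass -label-selector-filter to the operator binary unconditionally, while the v1beta2-1.1.2-2.4.5 image predates that flag, so setting labelSelectorFilter in values makes no difference. A quick way to inspect which arguments the chart actually renders into the Deployment (a sketch; the --version pin and values here are assumptions):

helm template spark-operator/spark-operator --version 1.0.7 \
  --set webhook.enable=true \
  --set labelSelectorFilter=app=spark \
  | grep -A25 'args:'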

sahil-sawhney commented 3 years ago

The error logs of the Spark Operator pod look as follows:

++ id -u
+ myuid=0
++ id -g
+ mygid=0
+ set +e
++ getent passwd 0
+ uidentry=root:x:0:0:root:/root:/bin/bash
+ set -e
+ echo 0
0
+ echo 0
0
+ echo root:x:0:0:root:/root:/bin/bash
root:x:0:0:root:/root:/bin/bash
+ [[ -z root:x:0:0:root:/root:/bin/bash ]]
+ exec /usr/bin/tini -s -- /usr/bin/spark-operator -v=2 -logtostderr -namespace= -ingress-url-format= -controller-threads=10 -resync-interval=30 -enable-batch-scheduler=false -label-selector-filter=app=spark -enable-metrics=true -metrics-labels=app_type -metrics-port=10254 -metrics-endpoint=/metrics -metrics-prefix= -enable-webhook=true -webhook-svc-namespace=spark-operator -webhook-port=8080 -webhook-svc-name=spark-operator-1612545317-webhook -webhook-config-name=spark-operator-1612545317-webhook-config -webhook-namespace-selector= -enable-resource-quota-enforcement=false
flag provided but not defined: -label-selector-filter
Usage of /usr/bin/spark-operator:
  -alsologtostderr
        log to standard error as well as files
  -controller-threads int
        Number of worker threads used by the SparkApplication controller. (default 10)
  -enable-batch-scheduler
        Enable batch schedulers for pods' scheduling, the available batch schedulers are: (volcano).
  -enable-metrics
        Whether to enable the metrics endpoint.
  -enable-resource-quota-enforcement
        Whether to enable ResourceQuota enforcement for SparkApplication resources. Requires the webhook to be enabled.
  -enable-webhook
        Whether to enable the mutating admission webhook for admitting and patching Spark pods.
  -ingress-url-format string
        Ingress URL format.
  -kubeConfig string
        Path to a kube config. Only required if out-of-cluster.
  -leader-election
        Enable Spark operator leader election.
  -leader-election-lease-duration duration
        Leader election lease duration. (default 15s)
  -leader-election-lock-name string
        Name of the ConfigMap for leader election. (default "spark-operator-lock")
  -leader-election-lock-namespace string
        Namespace in which to create the ConfigMap for leader election. (default "spark-operator")
  -leader-election-renew-deadline duration
        Leader election renew deadline. (default 14s)
  -leader-election-retry-period duration
        Leader election retry period. (default 4s)
  -log_backtrace_at value
        when logging hits line file:N, emit a stack trace
  -log_dir string
        If non-empty, write log files in this directory
  -logtostderr
        log to standard error instead of files
  -master string
        The address of the Kubernetes API server. Overrides any value in kubeconfig. Only required if out-of-cluster.
  -metrics-endpoint string
        Metrics endpoint. (default "/metrics")
  -metrics-job-start-latency-buckets value
        Comma-separated boundary values (in seconds) for the job start latency histogram bucket; it accepts any numerical values that can be parsed into a 64-bit floating point (default [30 60 90 120 150 180 210 240 270 300])
  -metrics-labels value
        Labels for the metrics
  -metrics-port string
        Port for the metrics endpoint. (default "10254")
  -metrics-prefix string
        Prefix for the metrics.
  -namespace string
        The Kubernetes namespace to manage. Will manage custom resource objects of the managed CRD types for the whole cluster if unset.
  -resync-interval int
        Informer resync interval in seconds. (default 30)
  -stderrthreshold value
        logs at or above this threshold go to stderr
  -v value
        log level for V logs
  -vmodule value
        comma-separated list of pattern=N settings for file-filtered logging
  -webhook-ca-cert string
        Path to the X.509-formatted webhook CA certificate. (default "/etc/webhook-certs/ca-cert.pem")
  -webhook-cert-reload-interval duration
        Time between webhook cert reloads. (default 15m0s)
  -webhook-config-name string
        The name of the MutatingWebhookConfiguration object to create. (default "spark-webhook-config")
  -webhook-fail-on-error
        Whether Kubernetes should reject requests when the webhook fails.
  -webhook-namespace-selector string
        The webhook will only operate on namespaces with this label, specified in the form key1=value1,key2=value2. Required if webhook-fail-on-error is true.
  -webhook-port int
        Service port of the webhook server. (default 8080)
  -webhook-server-cert string
        Path to the X.509-formatted webhook certificate. (default "/etc/webhook-certs/server-cert.pem")
  -webhook-server-cert-key string
        Path to the webhook certificate key. (default "/etc/webhook-certs/server-key.pem")
  -webhook-svc-name string
        The name of the Service for the webhook server. (default "spark-webhook")
  -webhook-svc-namespace string
        The namespace of the Service for the webhook server. (default "spark-operator")
sahil-sawhney commented 3 years ago

The problem is with Helm chart version 1.0.7, which is the latest chart version. When I use the following command, pinning the chart to version 1.0.6, the Spark operator installation succeeds as expected:

helm install spark-operator/spark-operator --namespace spark-operator --set webhook.enable=true --set serviceAccounts.spark.name=spark --generate-name --set image.tag=v1beta2-1.1.2-2.4.5 --version 1.0.6
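To see which chart versions are available before pinning (assuming the spark-operator repo is already added, as in the commands above):

helm repo update
helm search repo spark-operator/spark-operator --versions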

duyet commented 3 years ago

Please use the image tag v1beta2-1.2.1-3.0.0 for the operator.
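That would make the original install command, with only the image tag swapped (a sketch based on the commands above):

helm install spark-operator/spark-operator --namespace spark-operator \
  --set webhook.enable=true --set serviceAccounts.spark.name=spark \
  --generate-name --set image.tag=v1beta2-1.2.1-3.0.0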

dvaldivia commented 3 years ago

@duyet wouldn't that prevent you from running jobs on Spark 2.4.5?

duyet commented 3 years ago

> @duyet wouldn't that prevent you from running jobs on Spark 2.4.5?

You can still run Spark 2.4.5 with this image tag.
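In other words, the operator image version is independent of the Spark version your jobs run; the Spark version comes from the image referenced in each SparkApplication. A minimal sketch of a SparkApplication that still runs Spark 2.4.5 (the image, jar path, and namespace are assumptions; adjust to your setup):

kubectl apply -f - <<'EOF'
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-pi
  namespace: default
spec:
  type: Scala
  mode: cluster
  # Assumed Spark 2.4.5 image; point this at your own registry if needed.
  image: gcr.io/spark-operator/spark:v2.4.5
  mainClass: org.apache.spark.examples.SparkPi
  # Assumed examples jar path inside a stock Spark 2.4.5 (Scala 2.11) image.
  mainApplicationFile: local:///opt/spark/examples/jars/spark-examples_2.11-2.4.5.jar
  sparkVersion: "2.4.5"
  driver:
    cores: 1
    memory: 512m
    serviceAccount: spark
  executor:
    instances: 1
    cores: 1
    memory: 512m
EOF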

xpaulnim commented 2 years ago

If you are using k8s v1.16+ and want to avoid the error

W1204 02:35:57.154887   21678 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition

Just download the latest release source code zip/tgz, comment out the line containing the offending argument in templates/deployment.yaml, then install the local, modified chart.

A similar solution was posted in #1274 by @theofpa.
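A sketch of that workaround with Helm 3 (assuming the offending argument is the -label-selector-filter line; the release name is arbitrary):

# Download and unpack the chart into ./spark-operator
helm pull spark-operator/spark-operator --untar
# Comment out the -label-selector-filter argument in the container args list
sed -i 's|^\(\s*- -label-selector-filter.*\)|#\1|' spark-operator/templates/deployment.yaml
# Install the local, modified chart
helm install my-spark-operator ./spark-operator --namespace spark-operator \
  --set webhook.enable=true --set serviceAccounts.spark.name=spark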

github-actions[bot] commented 1 week ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.