kubeflow / spark-operator

Kubernetes operator for managing the lifecycle of Apache Spark applications on Kubernetes.

Custom environment variables provided in a Kubernetes Spark job are not getting picked up #2017

Open focode opened 5 months ago

focode commented 5 months ago

This is the YAML of my Spark job (truncated as posted):

```yaml
kind: SparkApplication
metadata:
  name: operatordc1
  namespace: spark
spec:
  type: Java
  mode: cluster
  image: "xiotxpcdevcr.azurecr.io/spark-custom:release-8.0"
  imagePullPolicy: Always
  imagePullSecrets:
```
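For reference, the v1beta2 API takes custom env vars per pod role, under `spec.driver.env` and `spec.executor.env`. A minimal sketch of a complete spec (the `sparkVersion`, `mainClass`, and jar path below are placeholders, not values from this report):

```yaml
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: operatordc1
  namespace: spark
spec:
  type: Java
  mode: cluster
  image: "xiotxpcdevcr.azurecr.io/spark-custom:release-8.0"
  imagePullPolicy: Always
  sparkVersion: "3.5.0"                               # placeholder
  mainClass: com.example.Main                         # placeholder
  mainApplicationFile: "local:///opt/spark/app.jar"   # placeholder
  driver:
    env:
      - name: spring.profiles.active
        value: "azure,secured"
  executor:
    env:
      - name: spring.profiles.active
        value: "azure,secured"
```

Note that `env` entries (like volumes and affinity) have no spark-submit equivalent, so the operator applies them by patching the driver and executor pods through its mutating admission webhook; if the webhook is disabled or broken, they are silently dropped.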

When I describe the pod, I only get the env values provided by the Spark operator:

```
Environment:
  SPARK_USER:                 root
  SPARK_APPLICATION_ID:       spark-699c7647354544e293cc2c12cda9e88e
  SPARK_DRIVER_BIND_ADDRESS:  (v1:status.podIP)
  SPARK_LOCAL_DIRS:           /var/data/spark-c6e072fb-2e09-4a07-8c58-0365eda4f362
  SPARK_CONF_DIR:             /opt/spark/conf
```

It is missing:

```yaml
- name: spring.profiles.active
  value: "azure,secured"
```

SamBird commented 5 months ago

I take it you are using the Webhook?

I've observed the same behaviour recently. I believe the mutating webhook is what injects these custom env vars into the pods.

What's in your Operator logs?

Are you seeing TLS handshake errors to the K8s API server?
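A quick way to check both (the namespace and deployment name below assume a default Helm install and may differ in your cluster):

```sh
# Is the mutating webhook registered at all? (resource name varies by install)
kubectl get mutatingwebhookconfigurations | grep -i spark

# Scan the operator logs for webhook/TLS handshake errors
kubectl -n spark-operator logs deploy/spark-operator | grep -iE 'webhook|tls'
```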

imtzer commented 5 months ago

Same as the earlier issue "Unable to assign environment variables", check your webhook first @focode
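For anyone landing here: with the v1 Helm chart the webhook is opt-in, so env injection silently does nothing until it is turned on. Enabling it looks roughly like this (release name, namespace, and the `webhook.enable` value are from the v1 chart and may differ in newer releases):

```sh
# Sketch, assuming the v1 Helm chart where the webhook is disabled by default
helm upgrade spark-operator spark-operator/spark-operator \
  --namespace spark-operator \
  --set webhook.enable=true
```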