stackabletech / spark-k8s-operator

Operator for Apache Spark-on-Kubernetes for Stackable Data Platform
https://stackable.tech

volumeMounts is not working for Job resource. #385

Closed supsupsap closed 3 months ago

supsupsap commented 3 months ago

Affected Stackable version

23.11.0

Affected Apache Spark-on-Kubernetes version

spark-k8s:3.5.0

Current and expected behavior

I have created a simple ConfigMap and am trying to mount it as a volume into the containers of the job:

```yaml
volumes:
  - name: ivy-config
    configMap:
      name: ivy-config

driver:
  config:
    volumeMounts:
      - name: ivy-config
        mountPath: /ivy/
        readOnly: true

executor:
  config:
    volumeMounts:
      - name: ivy-config
        mountPath: /ivy/
        readOnly: true

job:
  config:
    volumeMounts:
      - name: ivy-config
        mountPath: /ivy/
        readOnly: true
```

In the driver and executor containers it is mounted as expected, but not in the job container. So I can't pass modified Ivy settings to Spark.
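For context, a mounted Ivy settings file is typically referenced through Spark's `spark.jars.ivySettings` property. A minimal sketch, assuming the `ivy-config` ConfigMap contains a file named `ivysettings.xml` (the file name is an assumption, not stated in the report):

```yaml
# Hypothetical sketch: pointing Spark at the Ivy settings file mounted at /ivy/.
# Assumes the ivy-config ConfigMap carries a key named ivysettings.xml.
sparkConf:
  spark.jars.ivySettings: /ivy/ivysettings.xml
```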

Possible solution

No response

Additional context

No response

Environment

No response

Would you like to work on fixing this bug?

None

Jimvin commented 3 months ago

Hello. This is a known issue (https://github.com/stackabletech/spark-k8s-operator/issues/342) which has been fixed in our latest release, 24.03. If you need this to work on 23.11, you can use `podOverrides` to add the volumes to the Spark job container. You'll want to double-check the indentation, but your volume/mount should look something like this:

```yaml
spec:
  job:
    podOverrides:
      spec:
        volumes:
          - name: ivy-config
            configMap:
              name: ivy-config
        containers:
          - name: spark-submit
            volumeMounts:
              - name: ivy-config
                mountPath: /ivy/
                readOnly: true
```

supsupsap commented 3 months ago

Thank you!