Closed supsupsap closed 3 months ago
Hello! This is a known issue (https://github.com/stackabletech/spark-k8s-operator/issues/342), which has been fixed in our latest release, 24.03. If you need this to work on 23.11, you can use podOverrides to add the volume to the Spark job container. You'll want to check that I've got the indentation right, but your volume/mount should look something like this:
```yaml
spec:
  job:
    podOverrides:
      spec:
        volumes:
          - name: ivy-config
            configMap:
              name: ivy-config
        containers:
          - name: spark-submit
            volumeMounts:
              - name: ivy-config
                mountPath: /ivy/
                readOnly: true
```
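For completeness, here is a minimal sketch of the pieces that go with the mount above: a ConfigMap holding the Ivy settings file, and the Spark property (`spark.jars.ivySettings`) pointing at the mounted path. The ConfigMap contents and the `sparkConf` placement are assumptions based on the reporter's goal of passing modified Ivy settings; adjust to your actual settings file.

```yaml
# Hypothetical ConfigMap with a custom Ivy settings file (contents are an example).
apiVersion: v1
kind: ConfigMap
metadata:
  name: ivy-config
data:
  ivysettings.xml: |
    <ivysettings>
      <settings defaultResolver="internal-repo"/>
      <!-- resolver definitions for your internal mirror go here -->
    </ivysettings>
---
# In the SparkApplication spec, point Spark at the mounted file
# (path matches the mountPath /ivy/ from the podOverride above).
spec:
  sparkConf:
    spark.jars.ivySettings: /ivy/ivysettings.xml
```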
Thank you!
Affected Stackable version
23.11.0
Affected Apache Spark-on-Kubernetes version
spark-k8s:3.5.0
Current and expected behavior
I have created a simple ConfigMap and am trying to provide it as a volume to the containers in the job.
In the driver and executor containers it is mounted as expected, but not in the job container, so I can't pass modified Ivy settings to Spark.
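For reference, a minimal sketch of what the reporter's SparkApplication likely looks like: the same ConfigMap-backed volume declared under `driver` and `executor`, where it mounts fine, but with no equivalent that reaches the job (spark-submit) container on 23.11. The resource name and mount path are assumptions for illustration.

```yaml
apiVersion: spark.stackable.tech/v1alpha1
kind: SparkApplication
metadata:
  name: my-spark-app   # hypothetical name
spec:
  driver:
    config:
      volumeMounts:
        - name: ivy-config
          mountPath: /ivy/   # mounted as expected here
  executor:
    config:
      volumeMounts:
        - name: ivy-config
          mountPath: /ivy/   # mounted as expected here
  volumes:
    - name: ivy-config
      configMap:
        name: ivy-config
  # No field here reaches the spark-submit job container on 23.11,
  # which is why the ConfigMap never appears in it.
```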
Possible solution
No response
Additional context
No response
Environment
No response
Would you like to work on fixing this bug?
None