kubeflow / spark-operator

Kubernetes operator for managing the lifecycle of Apache Spark applications on Kubernetes.

[BUG] minResources of Volcano PodGroup does not take dynamicAllocation and memoryOverheadFactor into account #2244

Open kaka-zb opened 1 month ago

kaka-zb commented 1 month ago

Description

We have been using the Spark Operator together with Volcano in our production environment for a long time. However, there is a problem with how resource usage is calculated for the Volcano PodGroup when a SparkApplication is submitted.

The spark.dynamicAllocation.* and spark.kubernetes.memoryOverheadFactor settings are not taken into account when calculating the memory portion of minResources for the Volcano PodGroup. As a result, the calculated minResources may be smaller than the application's real usage, and gang scheduling may fail.
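To make the calculation concrete, here is a minimal sketch in Go (the operator's language) of how the per-executor memory and the executor count could be derived when dynamic allocation and an overhead factor are set. The struct and function names are hypothetical and do not match the operator's actual code; whether the gang minimum should use the initial, min, or max executor count is a separate design choice.

```go
package main

import "fmt"

// Hypothetical view of the relevant Spark settings; these field names are
// illustrative and do not match the operator's actual structs.
type executorSpec struct {
	instances         int32   // spark.executor.instances
	dynamicAllocation bool    // spark.dynamicAllocation.enabled
	maxExecutors      int32   // spark.dynamicAllocation.maxExecutors
	memoryMiB         int64   // spark.executor.memory, in MiB
	memoryOverheadMiB int64   // spark.executor.memoryOverhead, 0 if unset
	overheadFactor    float64 // spark.kubernetes.memoryOverheadFactor
}

// podMemoryMiB returns the per-executor pod memory: base memory plus the
// explicit overhead if set, otherwise overheadFactor * memory. (Spark also
// enforces a 384 MiB minimum overhead, omitted here for brevity.)
func podMemoryMiB(s executorSpec) int64 {
	overhead := s.memoryOverheadMiB
	if overhead == 0 {
		overhead = int64(float64(s.memoryMiB) * s.overheadFactor)
	}
	return s.memoryMiB + overhead
}

// gangExecutorCount returns how many executors the PodGroup reserves for.
// Using maxExecutors is one possible policy when dynamic allocation is on;
// initial or min executors would be a less conservative choice.
func gangExecutorCount(s executorSpec) int32 {
	if s.dynamicAllocation && s.maxExecutors > 0 {
		return s.maxExecutors
	}
	return s.instances
}

func main() {
	s := executorSpec{
		instances:         2,
		dynamicAllocation: true,
		maxExecutors:      10,
		memoryMiB:         4096,
		overheadFactor:    0.1,
	}
	total := int64(gangExecutorCount(s)) * podMemoryMiB(s)
	fmt.Printf("executor share of minResources: %d MiB\n", total) // 10 * 4505 MiB
}
```

With spark.executor.memory=4g and memoryOverheadFactor=0.1, each executor pod requests roughly 4505 MiB, so with maxExecutors=10 the PodGroup would need about 45 GiB, rather than the roughly 8 GiB that two base executors without overhead would suggest.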

Reproduction Code [Required]

Expected behavior

The calculated minResources of the Volcano PodGroup should cover the resources the application can actually request, i.e. account for the dynamic allocation executor count and the memory overhead factor.

Actual behavior

minResources is computed from the base executor memory and instance count only, so it can be lower than the application's real usage and gang scheduling can fail.

Environment & Versions

Additional context

kaka-zb commented 1 month ago

BTW, I see that a resourceusage module has been implemented for YuniKorn. If there is no plan to support this for Volcano, I can contribute our code for it, which has been verified by thousands of Spark jobs.

jacobsalway commented 1 month ago

Hey, I wrote the resourceusage module for the YuniKorn batch scheduler. When I implemented it initially, we discussed pulling these functions out into a more generic module for use across other batch schedulers. If you have code that also calculates the resulting pod resource fields, I'd be happy to review it, hopefully improve on the existing solution, and update the existing Volcano batch scheduler.
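For reference, a rough sketch of what a scheduler-agnostic helper could expose so that both the Volcano and YuniKorn integrations consume the same calculation. The type and method names here are hypothetical and are not the existing resourceusage module's API.

```go
package resourceusage

import (
	corev1 "k8s.io/api/core/v1"
)

// PodRequests is a hypothetical, scheduler-agnostic summary of what a
// SparkApplication needs: per-pod requests for the driver and executors, plus
// the executor count to gang-schedule for. A Volcano PodGroup (minMember,
// minResources) or YuniKorn task-group annotations could both be derived
// from it.
type PodRequests struct {
	Driver        corev1.ResourceList
	Executor      corev1.ResourceList
	ExecutorCount int32
}

// TotalMinResources sums the driver request and ExecutorCount executor
// requests into one ResourceList, the shape a Volcano PodGroup's
// minResources field expects.
func (p PodRequests) TotalMinResources() corev1.ResourceList {
	total := corev1.ResourceList{}
	addInto := func(src corev1.ResourceList, times int32) {
		for name, qty := range src {
			acc := total[name]
			for i := int32(0); i < times; i++ {
				acc.Add(qty)
			}
			total[name] = acc
		}
	}
	addInto(p.Driver, 1)
	addInto(p.Executor, p.ExecutorCount)
	return total
}
```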

kaka-zb commented 1 month ago

@jacobsalway Thanks for the reply. I will submit a draft PR so you can review it and see if it helps.