Currently there is no queue enforcement for Spark jobs. As we start using Spark's dynamic allocation feature, we can route Spark jobs to Org-specific YARN queues. All cases of queue enforcement are listed below.
If dynamic resource allocation is enabled for the selected Spark version and the application requires a large container, schedule it into the default queue via a default setting (spark.yarn.queue) in spark-defaults.conf.
If dynamic resource allocation is enabled for the selected Spark version and the application requires a small container, schedule it into the Org-specific queue when the user has not provided a queue. If the user has provided a queue, use that instead.
If dynamic resource allocation is disabled for the selected Spark version, schedule the application into the default queue via a default setting (spark.yarn.queue) in spark-defaults.conf.
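The default-queue fallback above relies on a cluster-wide entry in spark-defaults.conf. A minimal sketch of what that entry could look like (the queue name "default" is an assumption; substitute whatever queue the cluster actually defines):

```
# spark-defaults.conf (illustrative; the queue name is an assumption)
spark.yarn.queue    default
```

Any spark.yarn.queue passed explicitly with --conf at submit time overrides this file-level default, which is what makes the user-provided-queue case work.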
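The three cases above can be sketched as a single selection function. This is an illustrative sketch, not real submission code: the function name, the boolean inputs, and the Org-to-queue mapping are all assumptions made for clarity.

```python
from typing import Optional

# Assumed default, matching what spark.yarn.queue in spark-defaults.conf would name.
DEFAULT_QUEUE = "default"


def select_queue(dynamic_allocation_enabled: bool,
                 needs_large_container: bool,
                 org_queue: str,
                 user_queue: Optional[str] = None) -> str:
    """Return the YARN queue for a Spark application per the rules above."""
    if not dynamic_allocation_enabled:
        # Case 3: dynamic allocation disabled -> default queue.
        return DEFAULT_QUEUE
    if needs_large_container:
        # Case 1: enabled but large container -> default queue.
        return DEFAULT_QUEUE
    # Case 2: enabled, small container -> user-provided queue wins,
    # otherwise fall back to the Org-specific queue.
    return user_queue if user_queue else org_queue
```

For example, select_queue(True, False, "org_a") yields the Org queue, while supplying user_queue="adhoc" overrides it.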