stackabletech / spark-k8s-operator

Operator for Apache Spark-on-Kubernetes for Stackable Data Platform
https://stackable.tech

Docs: improve resource usage documentation #294

Closed: adwk67 closed this issue 9 months ago

adwk67 commented 9 months ago

As a user I want documentation that clearly describes what is and is not set with regard to pod resources, specifically the different combinations that are possible.
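
For reference, here is a minimal sketch of how the three per-role resources blocks below might be combined in a single SparkApplication manifest. The apiVersion and the placement of resources directly under job/driver/executor are assumptions based on the test referenced below, not confirmed field names:

apiVersion: spark.stackable.tech/v1alpha1
kind: SparkApplication
metadata:
  name: resources-example   # hypothetical name, for illustration only
spec:
  job:                      # assumed placement: resources directly under each role
    resources:
      cpu:
        min: 250m
        max: 500m
      memory:
        limit: 512Mi
  driver:
    resources:
      cpu:
        min: 250m
        max: 500m
      memory:
        limit: 512Mi
  executor:
    resources:
      cpu:
        min: 250m
        max: 1000m
      memory:
        limit: 1024Mi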

For example, looking at the resources test here and the corresponding assert here, the documentation should show how the resulting values are arrived at:

Driver

yaml:

resources:
  cpu:
    min: 250m
    max: 500m
  memory:
    limit: 512Mi

pod:

resources:
  limits:
    cpu: "1"
    memory: 1Gi
  requests:
    cpu: "1"
    memory: 1Gi

Executor

yaml:

resources:
  cpu:
    min: 250m
    max: 1000m
  memory:
    limit: 1024Mi

pod:

resources:
  limits:
    cpu: "1"
    memory: 1Gi
  requests:
    cpu: "1"
    memory: 1Gi

Job

yaml:

resources:
  cpu:
    min: 250m
    max: 500m
  memory:
    limit: 512Mi

plus the YAML for the driver and executor as shown above...

pod:

- args:
  - /stackable/spark/bin/spark-submit 
  ...
    --conf "spark.driver.cores=1" 
    --conf "spark.driver.memory=640m" 
    --conf "spark.executor.cores=1" 
    --conf "spark.executor.memory=640m" 
    --conf "spark.kubernetes.driver.limit.cores=1" 
    --conf "spark.kubernetes.driver.limit.memory=1024m"
    --conf "spark.kubernetes.driver.request.cores=1" 
    --conf "spark.kubernetes.driver.request.memory=1024m"
    --conf "spark.kubernetes.executor.limit.cores=1" 
    --conf "spark.kubernetes.executor.limit.memory=1024m"
    --conf "spark.kubernetes.executor.request.cores=1" 
    --conf "spark.kubernetes.executor.request.memory=1024m"
    --conf "spark.kubernetes.memoryOverheadFactor=0.0"
  resources:
    limits:
      cpu: 400m
      memory: 512Mi
    requests:
      cpu: 100m
      memory: 512Mi
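
As a side note, the spark.driver.memory=640m and spark.executor.memory=640m values are at least consistent with Spark's usual minimum memory overhead of 384 MiB being subtracted from the 1024 MiB limits shown in the same command (1024 - 384 = 640), but the docs should confirm whether that is actually how the operator computes them. A quick way to compare the documented values against a live cluster, assuming standard kubectl and the spark-role labels that spark-submit applies to driver and executor pods:

# Show the resources that actually land on the driver and executor pods;
# spark-submit labels them spark-role=driver / spark-role=executor.
kubectl get pods -l spark-role=driver \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].resources}{"\n"}{end}'
kubectl get pods -l spark-role=executor \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].resources}{"\n"}{end}'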