katilp opened 4 days ago
For the record, some values:
```
$ kubectl top pods -n argo | grep runpfnano
pfnano-process-j8rcs-runpfnano-template-3774586556   976m    1016Mi
pfnano-process-j8rcs-runpfnano-template-855329759    1000m   1040Mi
```
This indicates that the requested memory value of 2.3 GB is too high; we could lower it. This was a short 1000-event job.
However, I believe that in most of the tested cases, the number of jobs running in parallel was limited by the CPU request.
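For reference, a sketch of how to check the current requests of a running pod (the pod name is one of the examples above; the jsonpath query is standard kubectl):

```bash
# Print the CPU/memory requests of the pod's containers:
kubectl get pod -n argo pfnano-process-j8rcs-runpfnano-template-3774586556 \
  -o jsonpath='{.spec.containers[*].resources.requests}'
```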
Also, for longer jobs, the memory consumption rises but then stays constant at ~1.4 GB:
```
$ kubectl logs -n argo pfnano-process-2zlsb-runpfnano-template-2591478655 | tail -1
Begin processing the 3501st record. Run 281693, Event 2613810442, LumiSection 1402 on stream 0 at 20-Sep-2024 10:38:06.394 CEST
$ kubectl top pods -n argo | grep runpfnano
pfnano-process-2zlsb-runpfnano-template-2591478655   1000m   1425Mi
$ kubectl logs -n argo pfnano-process-2zlsb-runpfnano-template-2591478655 | tail -1
Begin processing the 6501st record. Run 281693, Event 2614782877, LumiSection 1403 on stream 0 at 20-Sep-2024 10:46:12.513 CEST
$ kubectl top pods -n argo | grep runpfnano
pfnano-process-2zlsb-runpfnano-template-2591478655   999m   1442Mi
$ kubectl logs -n argo pfnano-process-2zlsb-runpfnano-template-2591478655 | tail -1
Begin processing the 9501st record. Run 281693, Event 2617846667, LumiSection 1405 on stream 0 at 20-Sep-2024 10:54:19.178 CEST
$ kubectl top pods -n argo | grep runpfnano
pfnano-process-2zlsb-runpfnano-template-2591478655   999m   1441Mi
```
Confirming with a longer job that the memory consumption stays within these numbers, after 40k events:
```
$ kubectl logs -n argo pfnano-process-g2m2k-runpfnano-template-2136346710 | tail -1
Begin processing the 42001st record. Run 281707, Event 1683425878, LumiSection 1019 on stream 0 at 20-Sep-2024 14:08:05.702 CEST
$ kubectl logs -n argo pfnano-process-g2m2k-runpfnano-template-1439319107 | tail -1
Begin processing the 45501st record. Run 281727, Event 528718176, LumiSection 328 on stream 0 at 20-Sep-2024 14:08:08.298 CEST
$ kubectl top pods -n argo | grep runpfnano
pfnano-process-g2m2k-runpfnano-template-1439319107   998m   1394Mi
pfnano-process-g2m2k-runpfnano-template-2136346710   999m   1460Mi
```
1. Start workflow
As discussed, add a function to run the start workflow. It must be run before the actual run so that the images are pulled onto the nodes in advance, avoiding several simultaneous image pulls on the same node; a sketch follows.
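A minimal sketch of such a function, assuming the warm-up workflow is defined in a file named `argo-start.yaml` (a hypothetical name):

```bash
# Sketch: submit a short warm-up workflow and wait for it to complete,
# so the container images are already cached on the nodes before the
# actual run. The workflow file name is a hypothetical example.
start_workflow () {
  argo submit -n argo --wait argo-start.yaml
}
```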
Add a `monitor_start` function that samples the resource usage values of the nodes and of all `runpfnano` pods. For nodes it is:
```
kubectl top nodes
```
For pods, you could do something like

```
kubectl top pods -n argo | grep runpfnano
```
That will allow us (or users) to understand the unconstrained CPU and memory needs of the jobs.
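A minimal sketch of such a `monitor_start` function, assuming a 60-second sampling interval and a log file named `monitor.log` (both arbitrary choices):

```bash
# Sketch: sample node and pod resource usage in the background while
# the workflow runs. Interval and log file name are assumptions.
monitor_start () {
  while true; do
    date
    kubectl top nodes
    kubectl top pods -n argo | grep runpfnano
    sleep 60
  done >> monitor.log 2>&1 &
}
```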
2. Command-line inputs to argo submit and terraform apply to avoid sed
As discussed, `sed` is a bit brutal. Better to use arguments when possible.

2.1 Argo submit
Argo submit can take the global workflow parameters with the `-p`/`--parameter` flag (listed as a `stringArray` in the CLI help), where the value would be e.g. `nJobs="6"`; see the sketch below.
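A minimal sketch of a submit command using this flag; the workflow file name `argo-workflow.yaml` is a hypothetical example:

```bash
# Sketch: pass a global workflow parameter on the command line
# instead of editing the workflow YAML with sed.
argo submit -n argo argo-workflow.yaml -p nJobs="6"
```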
Careful with quotes in the script; note the shell behaviour sketched below.
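As an illustration (the shell strips unescaped double quotes before argo sees the value, so both forms pass the same parameter):

```bash
# Both invocations pass the identical parameter value 6:
argo submit -n argo argo-workflow.yaml -p nJobs="6"
argo submit -n argo argo-workflow.yaml -p nJobs=6
```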
Edit: However, this does not matter: `-p nJobs=3` works as well.

2.2 Terraform apply
You can pass the variables to terraform with the `-var` flag, e.g. on the command line and in the script, as sketched below.
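A minimal sketch, assuming the variables are named `machine_type` and `num_nodes` (hypothetical names; use whatever the tfvars file actually defines):

```bash
# Sketch: pass values on the command line instead of editing tfvars.
terraform apply -var="machine_type=e2-standard-4" -var="num_nodes=6"

# In the script, the values could come from the script's arguments:
terraform apply -var="machine_type=$1" -var="num_nodes=$2"
```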
To be confirmed that it works properly both for strings (machine type) and for numerical values (number of nodes).
This will avoid modifying the tfvars file.