Due to the uneven distribution of land mass, we get a variable number of valid pixels (i.e., not NODATA) per run. This causes over- or underestimation of Slurm runtimes.
For Milan nodes with 126 cores and 500 GB of RAM, we can safely fit between 50,000 and 100,000 pixels per run without incurring errors, though the core count must be reduced to 80 or 90.
Moreover, the optimizer takes longer at lower latitudes, again likely because it is hard at work looking for snow where there is none!
The proposal here is to adapt the number of pixels submitted to Blender for maximum efficiency and throughput, defined as follows:
fit a one-year run in one Slurm job submission (currently ~600 Slurm jobs); see the batching sketch below
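Below is a minimal sketch of what such adaptive batching could look like, assuming a NumPy land mask, a NODATA sentinel of -9999, and a conservative per-job cap inside the 50,000 to 100,000-pixel safe range noted above. The function and variable names here are illustrative, not taken from the actual Blender pipeline.

```python
import numpy as np

NODATA = -9999               # assumed NODATA sentinel; the real value may differ
MAX_PIXELS_PER_JOB = 80_000  # conservative point inside the 50k-100k safe range

def batch_valid_pixels(mask, max_per_job=MAX_PIXELS_PER_JOB):
    """Split the valid (non-NODATA) pixels of a 2-D raster into
    near-equal batches, each small enough for one Slurm job."""
    valid = np.flatnonzero(mask.ravel() != NODATA)  # flat indices of land pixels
    n_jobs = max(1, -(-valid.size // max_per_job))  # ceiling division
    return np.array_split(valid, n_jobs)            # near-equal chunks

# Toy example: a 1000x1000 tile where roughly 40% of pixels are land.
rng = np.random.default_rng(0)
tile = np.where(rng.random((1000, 1000)) < 0.4, 1.0, NODATA)
batches = batch_valid_pixels(tile)
print(f"{sum(len(b) for b in batches):,} valid pixels -> {len(batches)} job(s)")
```

Splitting with np.array_split keeps the batches near-equal in size, so no single Slurm job is starved or overloaded. A latitude-dependent max_per_job could be substituted here to account for the slower optimizer runs at low latitudes.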