Closed by SaeeDhawalikar 1 month ago
@SaeeDhawalikar can you also add columns summarizing the memory used in each run? I guess we want to know the peak memory used, both the maximum over individual processors and the peak total memory. This may be the deciding factor in setting ncpus for Sahyadri, and we will need to justify the number in our proposal later this year.
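A minimal sketch of how those two memory columns could be computed, assuming the per-process peak memory values (in MB) have already been collected into a list from whatever the scheduler or code reports; the input numbers below are purely illustrative, not real measurements:

```python
def summarize_peak_memory(per_process_mb):
    """Return (peak of any single process, summed peak) in MB.

    per_process_mb: peak resident memory of each process, in MB
    (illustrative input; collect these however the cluster reports them).
    Note: summing per-process *peaks* gives an upper bound on the true
    simultaneous total, since processes may peak at different times.
    """
    peak_single = max(per_process_mb)
    peak_total = sum(per_process_mb)
    return peak_single, peak_total

# Illustrative numbers only:
ranks = [2150.0, 2190.5, 2201.3, 2144.8]
single, total = summarize_peak_memory(ranks)
print(f"peak single process: {single:.1f} MB, peak total: {total:.1f} MB")
```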
The following simulations will be run before the pilot study to check the scaling of Gadget-4. Naming convention: LxNy refers to a box of size x Mpc/h with y^3 particles.
- Default resolution, L200N256: check scaling with number of cores, N_cores = 64, 128, 256, (512)
- Target resolution, L50N512: check scaling with number of cores, N_cores = 64, 128, 256, (512)
- L50N256: N_cores = 128
- L200N1024: N_cores = 128
Total: 8 simulations, with pairs of simulations having:
a. the same resolution (L50N256 & L200N1024)
b. the same number of particles (L50N256 & L200N256)
c. the same volume (L200N256 & L200N1024, L50N256 & L50N512)
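The pairings above follow directly from the LxNy convention (mean interparticle spacing = box size / N); a small sketch that tabulates each run and checks the stated pairs:

```python
def spacing(box, n):
    """Mean interparticle spacing in Mpc/h for an LxNy run: box / N."""
    return box / n

# (box size in Mpc/h, N) for each run in the list above
runs = {"L50N256": (50, 256), "L50N512": (50, 512),
        "L200N256": (200, 256), "L200N1024": (200, 1024)}

for name, (box, n) in runs.items():
    print(f"{name}: {n**3:.3e} particles, spacing {spacing(box, n):.4f} Mpc/h")

# a. same resolution: 50/256 == 200/1024
assert spacing(*runs["L50N256"]) == spacing(*runs["L200N1024"])
# b. same number of particles: both 256^3
assert runs["L50N256"][1] == runs["L200N256"][1]
# c. same volume: box sizes match within each pair
assert runs["L200N256"][0] == runs["L200N1024"][0]
assert runs["L50N256"][0] == runs["L50N512"][0]
```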
Lenovo: total shared memory available: 193184 MB (6 GB per CPU)