mgullysa at pfe27 in ~
$ qstat 2343993.pbspl1.nas.nasa.gov
                                              Req'd   Elap
JobID          User     Queue Jobname TSK Nds wallt S wallt Eff
-------------- -------- ----- ------- --- --- ----- - ----- ---
2343993.pbspl1 mgullysa devel STDIN    28   1 02:00 R 00:15 31%
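(Aside: as I understand it, the Eff column in NAS's qstat is roughly the CPU time consumed divided by the CPU time reserved, i.e. walltime × ncpus. That definition is my assumption, not something I pulled from the docs. A minimal sketch of the arithmetic that would reproduce the 31% above:)

```python
def pbs_efficiency(cpu_seconds: float, wall_seconds: float, ncpus: int) -> float:
    """Assumed definition of qstat's Eff: fraction of reserved CPU time used."""
    return cpu_seconds / (wall_seconds * ncpus)

# 15 min elapsed on 28 reserved CPUs; ~130 CPU-minutes consumed gives ~31%.
# (The 130 CPU-minutes figure is illustrative, back-solved from Eff = 31%.)
print(f"{pbs_efficiency(cpu_seconds=130 * 60, wall_seconds=15 * 60, ncpus=28):.0%}")
```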
mgullysa at gopc in ~/GitHub/jammer/sf/2M0136/m112/output/marley_grid/run02 on master [!]
$ time /home/mgullysa/GitHub/jammer/code/star_marley_beta.py --samples=5000 --incremental_save=50
keeping grid as is
Using the user defined prior in $jammer/sf/2M0136/m112/output/marley_grid/run02/user_prior.py
2017 Aug 18, 5:59 PM: 49/5000 = 1.0%
2017 Aug 18, 6:08 PM: 99/5000 = 2.0%
2017 Aug 18, 6:18 PM: 149/5000 = 3.0%
...
2017 Aug 19,12:17 AM: 4899/5000 = 98.0%
2017 Aug 19,12:20 AM: 4949/5000 = 99.0%
2017 Aug 19,12:24 AM: 4999/5000 = 100.0%
The end.
real 394m46.761s
user 4214m32.112s
sys 196m40.513s
So ~6.6 hours of wall-clock time, with roughly 10.7 concurrent processes on average (user / real).
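Checking that against the `time` output above is straightforward arithmetic (no assumptions, just the three numbers printed):

```python
# Effective concurrency is total CPU time divided by wall-clock time.
real = 394 * 60 + 46.761   # wall-clock seconds
user = 4214 * 60 + 32.112  # user-space CPU seconds, summed over all processes
sys_ = 196 * 60 + 40.513   # kernel CPU seconds

print(f"wall clock: {real / 3600:.1f} h")                      # ~6.6 h
print(f"user/real:  {user / real:.1f} busy cores on average")  # ~10.7
print(f"(user+sys)/real: {(user + sys_) / real:.1f}")          # ~11.2
```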
We want to know these stats on different systems. This run used the devel queue on NASA Pleiades with n_threads = n_cpus = 28 on a Broadwell node, yet only a fraction of the 28 processes (~10.7, about 38%) were busy at any one time... One possible reading of that number is sketched below.
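If the sampler scaled Amdahl-style, an observed speedup of ~10.7 on 28 threads would imply a serial fraction of roughly 6%. That model is my assumption, not a measurement; load imbalance between threads or I/O during the incremental saves could explain the low utilization just as well. The back-of-envelope:

```python
def amdahl_serial_fraction(speedup: float, n_threads: int) -> float:
    """Invert Amdahl's law, S = 1 / ((1 - p) + p / n), for the parallel
    fraction p, and return the serial fraction 1 - p."""
    p = (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n_threads)
    return 1.0 - p

# Observed user/real ~ 10.7 on a 28-core Broadwell node.
print(f"implied serial fraction: {amdahl_serial_fraction(10.7, 28):.1%}")  # ~6.0%
```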