mkuemmel closed this issue 2 years ago
Here is the scalability of SourceXtractor++ with pyston, computed for the disk+bulge model from above: there is a broad shoulder from ~15-28 cores with very good scaling. The processing node has 28 physical cores; using more than that (i.e. hyperthreading) reduces the efficiency.
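The scaling behavior described above can be quantified as speedup and parallel efficiency from wall-clock timings. Here is a minimal sketch; the timings in the example are hypothetical placeholders, not measured SE++ numbers:

```python
# Compute parallel speedup and efficiency from wall-clock timings.
# Efficiency < 1 past the physical core count reflects the hyperthreading
# penalty mentioned above. All numbers below are hypothetical.

def speedup_and_efficiency(t_serial, timings):
    """Return {cores: (speedup, efficiency)} for wall-clock times per core count."""
    return {
        cores: (t_serial / t, (t_serial / t) / cores)
        for cores, t in sorted(timings.items())
    }

# Hypothetical wall-clock times in seconds for a run at 1..56 cores.
timings = {1: 1000.0, 14: 80.0, 28: 45.0, 56: 40.0}
for cores, (s, e) in speedup_and_efficiency(timings[1], timings).items():
    print(f"{cores:3d} cores: speedup {s:5.1f}, efficiency {e:.2f}")
```

With numbers like these, efficiency stays high up to the 28 physical cores and then drops sharply for 56 logical cores, which is the pattern seen in the plots.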
Here are the results of a more extensive investigation using more data, two different models (sersic and disk+bulge), and two different SE++ versions:
I guess tests are still underway?
Yes, I ran it over the weekend with the newest version, which also includes the iterative fitting (and of course pyston).
Here is a comparison of the SE++ performance of version 0.14 vs. 0.16 (December '21), with pyston and meta iterations:
It looks like the iterative fitting costs some performance. On the other hand, the example data is not very clumpy and not many priors are being used. Version 0.14 still had some bugs, and fitting large groups (>2 GB RAM) was not possible or was shaky.
The performance values reported here are no showstoppers for the next release.
Well, this can be closed now.
I ran the deep fields from the morphology challenge in different configurations with v0.14 and with pyston. Let's discuss during tomorrow's telecon.
single sersic (levmar, 56c)
disk+bulge (levmar, 56c)
disk+bulge (levmar, 28c)
disk+bulge (levmar, 14c)
disk+sersic (levmar, 56c)
real morphology (levmar, 56c)
disk+bulge (gsl, 56c)
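The list of runs above could be driven by a small script that assembles one SE++ invocation per (model, engine, core count) combination. The sketch below only builds the command lines; the flag names (--config-file, --thread-count) and config file names are assumptions to illustrate the idea, so check your installed version's --help before relying on them:

```python
# Sketch of a benchmark driver over the configurations listed above.
# Flag names and config file names are hypothetical placeholders.
import shlex

CONFIGS = [
    ("single_sersic", "levmar", 56),
    ("disk_bulge", "levmar", 56),
    ("disk_bulge", "levmar", 28),
    ("disk_bulge", "levmar", 14),
    ("disk_sersic", "levmar", 56),
    ("real_morphology", "levmar", 56),
    ("disk_bulge", "gsl", 56),
]

def build_command(model, engine, cores):
    """Assemble a (hypothetical) SE++ command line for one benchmark run."""
    return (
        f"sourcextractor++ --config-file {model}_{engine}.config "
        f"--thread-count {cores}"
    )

for model, engine, cores in CONFIGS:
    # shlex.split turns the command string into an argv list, e.g. for subprocess.
    print(shlex.split(build_command(model, engine, cores)))
```

Timing each invocation (e.g. with /usr/bin/time) would then give the wall-clock numbers compared in the plots above.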