Ciao Marco, besides setting REBIN TEMPLATES = 10 you can:

- use pure stellar templates (e.g. BC03) instead of stellar+nebular models;
- use a simplified model, adopting a constant SFH and varying only the parameters mass, metallicity and max_stellar_age (besides redshift), and perhaps tauV_eff (you can use the CF00 attenuation curve, fixing mu to the "standard" value of 0.4). You can also test on a small sample (e.g. 50 objects) the effect of fixing the metallicity and removing tauV_eff, but I don't think this would affect the speed much;
- in the MCMC*.param file, set:
  NUMBER OF WALKERS = 50
  ENLARGMENT FACTOR = 0.9
  EVIDENCE TOLERANCE = 1.00
  MIN RELATIVE PROBABILITY = 1.E-03 (to decrease the size of the output files).
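To try the small-sample test mentioned above, one option is to subsample the photometric catalogue before feeding it to BEAGLE. Below is a minimal sketch with numpy/astropy; the file names and the 50-object sample size are placeholders, and the catalogue is assumed to be a FITS table.

```python
import numpy as np
from astropy.table import Table

# Placeholder file names: adapt to your own catalogue.
full_cat = Table.read("my_catalogue.fits")   # full photometric catalogue
n_test = 50                                  # small test sample, as suggested above

# Draw a reproducible random subset of rows.
rng = np.random.default_rng(seed=42)
idx = rng.choice(len(full_cat), size=n_test, replace=False)
test_cat = full_cat[idx]

# Write the test catalogue and point the BEAGLE input catalogue at this file
# for the quick runs (e.g. fixed vs. fitted metallicity, with/without tauV_eff).
test_cat.write("my_catalogue_test50.fits", overwrite=True)
```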
Thanks, I'll do as you suggest!
Just one last doubt: does SHRINK TEMPLATES WL RANGE affect the speed to some extent? I think I can shrink it a bit.
Yes it does, especially if you remove points in the UV/optical (where the models have higher resolution). In general, though, when you're fitting photo-z you want to keep the entire UV/optical range... but you can certainly remove points redwards of the reddest observed band!
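As a rough check of how far redwards (and bluewards) the templates actually need to extend, you can map the edges of the bluest and reddest observed bands into the rest frame over the redshift range allowed in the fit. A back-of-the-envelope sketch, where the filter edge wavelengths and redshift limits are made-up placeholders to be replaced with the values for your own filter set and prior:

```python
# Rest-frame wavelength range that the templates must cover so that every
# observed band is sampled at every redshift allowed by the prior.
# Wavelengths in Angstrom; all numbers below are placeholders.

lambda_blue_edge = 3500.0   # blue edge of the bluest observed band
lambda_red_edge = 50000.0   # red edge of the reddest observed band
z_min, z_max = 0.0, 10.0    # redshift range allowed in the fit

# An observed wavelength lambda_obs samples rest-frame lambda_obs / (1 + z),
# so the extremes are set by the highest and lowest redshift, respectively.
rest_min = lambda_blue_edge / (1.0 + z_max)
rest_max = lambda_red_edge / (1.0 + z_min)

print(f"Templates need rest-frame coverage of roughly {rest_min:.0f}-{rest_max:.0f} A")
# Points redwards of rest_max can be dropped when shrinking the template
# wavelength range; the UV/optical part should be kept, as noted above.
```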
Dear Jacopo and all, I need to try a run on many objects with the sole purpose of estimating the photometric redshift from multi-band photometry. Is there a "standard" BEAGLE recipe to optimize such a run in terms of computing time?
I will certainly set:
But is it worth/feasible to leave out all the other parameters, such as mass, metallicity, etc., and keep only the redshift as a "fitted"-type parameter? Do you have other suggestions?
From the paper I understand that you used the same run for both photo-z and physical properties, but in my case I'm not (yet) interested in the other source properties.
Thanks!