drvdputt opened 3 months ago
Missed this.
I'm not actually sure the zero amplitudes result from numerical instability or "tail fitting"; they may simply reflect those components' unimportance to the final continuum shape. $\tau$ is roughly the "amplitude at the peak", which, as you say, may be far higher than the fitted data and lie redward of the fitted wavelength regime. I don't think the total power of a blackbody makes a lot of sense, since that is not constrained anywhere, and it might encourage people to interpret it as a total IR luminosity. A fixed-wavelength amplitude might be OK, but given the (typically) fixed temperatures, this would just be a constant numerical scaling of the amplitude by some set of factors of order 1-10 (guessing).
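To give a feel for those factors, here is a minimal sketch of that rescaling. The 15 μm reference wavelength is a hypothetical choice, the Planck function is written out directly in cgs units, and the temperature grid assumes the fixed values from classic PAHFIT (plus the 5000 K stellar component). Under a fixed-wavelength parameterization $a \, B_\nu(T,\lambda)/B_\nu(T,\lambda_\mathrm{ref})$, the current $\tau$ would map to $a = \tau \, B_\nu(T, \lambda_\mathrm{ref})$:

```python
import numpy as np

# Physical constants in cgs
H = 6.626e-27   # erg s
C = 2.998e10    # cm / s
KB = 1.381e-16  # erg / K

def planck_nu(wav_um, temp_k):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    nu = C / (wav_um * 1e-4)  # micron -> cm -> Hz
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * temp_k))

lam_ref = 15.0  # micron; hypothetical reference wavelength
temps = [35, 40, 50, 65, 90, 135, 200, 300, 5000]  # assumed fixed temperatures

# The per-temperature conversion tau -> a is a constant factor:
for temp in temps:
    print(f"T = {temp:5d} K: B_nu(T, {lam_ref:.0f} um) = "
          f"{planck_nu(lam_ref, temp):.3e}")
```

How tightly those factors cluster depends on where $\lambda_\mathrm{ref}$ sits relative to each component's peak, so the order-of-magnitude guess is easy to check numerically.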
$\tau$ does intrinsically have surface-brightness unit scaling. $\tau_\star$ can thus be interpreted as the fraction of the aperture occupied by the surface of stars (which we consider equivalent). We could normalize that to some fixed fractional value and use it as our internal $\tau$ unit, but I'd like to be convinced there's a numerical issue in the first place.
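Spelling that interpretation out (a sketch, assuming the stellar component is evaluated as $\tau_\star B_\nu(T_\star, \lambda)$): each stellar photosphere radiates approximately $B_\nu(T_\star)$, so an aperture of solid angle $\Omega_\mathrm{ap}$ containing stellar surfaces of total solid angle $\Omega_\star$ has

$$ I_\nu(\lambda) = \frac{\Omega_\star}{\Omega_\mathrm{ap}}\,B_\nu(T_\star, \lambda) \quad\Longrightarrow\quad \tau_\star = \frac{\Omega_\star}{\Omega_\mathrm{ap}}, $$

i.e. $\tau_\star$ is exactly that covering fraction.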
Currently, `tau` is used as the strength of the blackbody (stellar continuum) and modified blackbody (dust continuum) components. The blackbody is evaluated as $\tau \, B_\nu(T, \lambda)$, and the fitted amplitude is reported as `tau` in the `Features` table.

In general, the fact that we often have to deal with the tails of the blackbody profile makes these components hard to use.
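As a concrete illustration, here is a hedged sketch of that temperature scaling. It uses a plain Planck function (ignoring the extra emissivity factor of the modified blackbody), and the 5-20 μm window and target brightness are made-up numbers; it asks what `tau` each component would need to contribute equally at a wavelength inside the fitted window:

```python
import numpy as np

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16  # cgs constants

def planck_nu(wav_um, temp_k):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    nu = C / (wav_um * 1e-4)  # micron -> cm -> Hz
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * temp_k))

lam = 10.0      # micron, inside a hypothetical 5-20 um fitting window
target = 1e-12  # arbitrary common surface brightness to match at lam

for temp in [35, 40, 50, 65, 90, 135, 200, 300, 5000]:
    # tau such that tau * B_nu(T, lam) reaches the same target value
    tau = target / planck_nu(lam, temp)
    print(f"T = {temp:5d} K -> tau ~ {tau:.1e}")
```

The cold components sit deep in their Wien tail at these wavelengths, so their `tau` blows up, while the hot components only need tiny values; that dynamic range is what a renormalization would flatten.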
In my experience, the typical order of magnitude of the resulting `tau` values can be very different, scaling strongly with the temperature of the blackbody. I feel like this causes numerical issues sometimes; very often, the minimizer will use only two components, while all the remaining `tau` values are set to 0.

We could think of an alternate parameterization / normalization for these components. A few ideas: