harrispopgen / mushi

[mu]tation [s]pectrum [h]istory [i]nference
https://harrispopgen.github.io/mushi/
MIT License

ramping regularization with TMRCA CDF #16

Closed: wsdewitt closed this issue 5 years ago

wsdewitt commented 5 years ago

I've begun doubting that the current regularization approach (where we ramp up regularization strength as we approach the coalescent horizon) has a coherent Bayesian interpretation. A prior favoring a flat \mu(t) history would seem to imply regularizer weights that are uniform in time, not weights that ramp up.
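To be concrete about the two penalty shapes at issue, here is a minimal sketch; the time grid, the stand-in TMRCA CDF, and the first-difference smoothness penalty are all illustrative, not mushi's actual objective:

```python
import numpy as np

# Illustrative time grid and a candidate mutation rate history mu(t)
t = np.linspace(0, 100, 50)          # time grid (arbitrary units)
mu = 1.0 + 0.1 * np.sin(t / 10)      # hypothetical mu(t) history

# First-difference (smoothness) penalty terms on mu(t)
dmu2 = np.diff(mu) ** 2

# Uniform weights: the penalty a flat-mu(t) prior would suggest
w_uniform = np.ones_like(dmu2)

# Ramped weights: strength grows with a (hypothetical) TMRCA CDF F(t),
# so the regularizer dominates near the coalescent horizon
F = 1 - np.exp(-t / 30)              # stand-in TMRCA CDF for illustration
w_ramped = 1 + 10 * F[1:]            # ramp factor is arbitrary here

penalty_uniform = np.sum(w_uniform * dmu2)
penalty_ramped = np.sum(w_ramped * dmu2)
```

Under the flat-prior story, only the uniform version seems justified; the ramp is an extra thumb on the scale.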

We justified this by saying the prior should be "winning" near the horizon, since the data are less informative there. But a prior without ramping weights will naturally win at those larger times, because the likelihood term itself becomes less informative there. I don't see how to justify ramping as a way to help the prior win even more, unless our likelihood were wrong (e.g. if we used least squares instead of a Poisson random field).

This becomes especially odd in the \eta(t) fitting problem, since the ramping is based on the TMRCA CDF, which itself depends on \eta(t).
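To make that circularity concrete: even for a single pair of lineages, the TMRCA CDF is a functional of \eta(t). A minimal sketch, assuming piecewise-constant \eta on a time grid and pairwise coalescence at rate 1/\eta(t) (the function name and grid are hypothetical):

```python
import numpy as np

def tmrca_cdf_pair(t_grid, eta):
    """Pairwise TMRCA CDF F(t) = 1 - exp(-int_0^t ds / eta(s)) for
    piecewise-constant eta (eta[i] applies on [t_i, t_{i+1}))."""
    dt = np.diff(t_grid)
    # cumulative coalescent intensity at each grid point
    intensity = np.concatenate([[0.0], np.cumsum(dt / eta[:-1])])
    return 1 - np.exp(-intensity)

t_grid = np.linspace(0, 1e4, 100)    # time in generations
eta = np.full_like(t_grid, 1e4)     # constant history, for illustration
F = tmrca_cdf_pair(t_grid, eta)     # ramping weights would be built from F
```

Any ramping weights built from F therefore move whenever \eta(t) is updated during fitting.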

Some anecdata: I haven't been able to reproduce the high-t wiggles that originally motivated the ramping scheme in the old dement project. I'm therefore inclined to remove this entirely.

The "tempora incognita" story could instead be about the properties of the linear operator, which can tell us how observable \mu(t) is as we consider t approaching the horizon.

kamdh commented 5 years ago

If the method works fine without any ramping, I say ditch the ramp.