dipetkov / eems

Estimating Effective Migration Surfaces
GNU General Public License v2.0

question about updates of hyperparameters #18

Open stephens999 opened 6 years ago

stephens999 commented 6 years ago

https://github.com/dipetkov/eems/blob/7c3e9c0a29b2bea2057382d2b304dceac18b67b5/runeems_snps/src/eems.cpp#L602

is this line and the line above it correct? The likelihoods here are truncated normal, not normal, which seems to mess up the usual conjugacy?

dipetkov commented 6 years ago

Have not thought about this before, but I think you are right. Truncating scales the normal density by a constant, so the posterior of the variance parameter has the same form, up to a normalizing constant. Could this be fixed by sampling from the conjugate posterior, computing the ratio of the normalizing constants (for the current value and the proposed value), and then accepting or rejecting the proposal so that detailed balance holds?

stephens999 commented 6 years ago

I think it can be fixed that way, yes. But the "constant" you mention depends on the variance, so it is not really a constant....
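To make the exchange concrete, here is a minimal sketch (illustrative only, not the eems code) of the correction being discussed, for a toy model with known mean `mu`, truncation interval `[a, b]`, and an inverse-gamma prior on the variance `s2`. The proposal is drawn from the conjugate full conditional one would get *without* truncation; the accept/reject step then corrects for the truncation constant, which does depend on the variance:

```python
import numpy as np
from scipy.stats import norm, invgamma

rng = np.random.default_rng(0)

def log_Z(s2, mu, a, b):
    # Log normalizing constant of N(mu, s2) truncated to [a, b].
    # Note it depends on s2, which is why plain conjugacy breaks.
    s = np.sqrt(s2)
    return np.log(norm.cdf((b - mu) / s) - norm.cdf((a - mu) / s))

def update_s2(y, s2, mu, a, b, alpha, beta):
    # One independence-MH update of the variance s2, given data y that
    # follow a truncated normal on [a, b] with known mean mu.
    # Prior: s2 ~ InvGamma(alpha, beta).
    n = len(y)
    # Propose from the conjugate full conditional of the UNtruncated model.
    post = invgamma(alpha + n / 2, scale=beta + 0.5 * np.sum((y - mu) ** 2))
    s2_prop = post.rvs(random_state=rng)
    # In the MH ratio the inverse-gamma factors cancel, leaving only the
    # ratio of truncation constants (raised to the n-th power).
    log_ratio = n * (log_Z(s2, mu, a, b) - log_Z(s2_prop, mu, a, b))
    if np.log(rng.uniform()) < log_ratio:
        return s2_prop
    return s2
```

With this scheme, if the truncation interval is wide relative to the variance the correction factor is near 1 and nearly every proposal is accepted, recovering the behavior of the plain conjugate update.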

halasadi commented 6 years ago

Is it necessary to put a prior on nowmrateS2?

The prior on nowmrateS2 has two hyper-parameters, and there is also a bound on the mrate such that it must lie within ±2 of the mean on the log10 scale. In total, that is 3 parameters not estimated from the data. I wonder what the difference would be between doing that and just setting nowmrateS2 = 1 (so that ~95% of the prior mass lies within ±2 of the mean). That would be just one parameter not estimated from the data.
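The "~95% within ±2" figure is just the standard normal interval: with variance 1 on the log10 scale, a quick check confirms it:

```python
from scipy.stats import norm

# With nowmrateS2 = 1, the prior on the log10 rates is normal with
# standard deviation 1, so the mass within +-2 of the mean is:
mass = norm.cdf(2) - norm.cdf(-2)
```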

EDIT: I've changed the code to just set nowmrateS2 = 1, and the resulting migration surface looked quite a bit more rugged, apparently due to poorer MCMC mixing.

stephens999 commented 6 years ago

@dipetkov although this update is a "bug", there is some chance that fixing it will actually not be a good idea: Hussein and I have come to the conclusion that the default inverse gamma prior on omega is not really sensible (it puts most of its mass on very large values of omega).

We are considering the following fix: put a uniform prior on log10(omega) over some range, say [-3, 0], so that omega ranges from very small (0.001) to moderate (1).

With this we might also dispense with the truncation on e... the truncation of omega at 1 should strongly encourage e not to be too big?

Omega could be updated by a simple MH random walk. (Or one could maybe do better: I think that with the (finite) uniform prior on log(omega), the posterior on omega may be a truncated inverse gamma, and one could do a Gibbs step?)
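A sketch of that Gibbs step, under assumed illustrative parameters (not taken from the eems source): a uniform prior on log10(omega) over [-3, 0] is p(omega) ∝ 1/omega on [1e-3, 1], so if the likelihood contributes an InvGamma(a, b)-shaped kernel in omega, the full conditional is InvGamma(a + 1, b) truncated to [1e-3, 1], which can be sampled exactly by inverse-CDF:

```python
import numpy as np
from scipy.stats import invgamma

rng = np.random.default_rng(1)

def sample_truncated_invgamma(shape, scale, lo, hi):
    # Inverse-CDF sampling from InvGamma(shape, scale) truncated to [lo, hi]:
    # draw u uniformly between the CDF values at the bounds, then invert.
    dist = invgamma(shape, scale=scale)
    u = rng.uniform(dist.cdf(lo), dist.cdf(hi))
    return dist.ppf(u)

# Uniform prior on log10(omega) over [-3, 0] means p(omega) ~ 1/omega on
# [1e-3, 1]; multiplied by an InvGamma(a, b) likelihood kernel
# omega^(-a-1) exp(-b/omega), the full conditional is a truncated
# InvGamma(a + 1, b). The values a = 3, b = 0.5 below are made up.
omega = sample_truncated_invgamma(3.0 + 1.0, 0.5, 1e-3, 1.0)
```

This gives an exact Gibbs draw within the prior bounds, so no accept/reject step (and no tuning of a random-walk step size) is needed for omega.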