Thanks for the issue and the compliment :-)
This looks like something specific to your model. It could be a bizarre, severe phase transition, or perhaps just some sort of bug or numerical issue occasionally causing the log likelihood to spike. It would be hard for me to diagnose without knowing your model. Do you know if the point with log likelihood of -284740 should actually have that value? If you evaluate that point in isolation, is there anything unusual about it?
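If it helps, a quick check could look something like this (just a sketch: it assumes the Python bindings, the usual sample.txt / sample_info.txt output layout, and a hypothetical MyModel class containing your log_likelihood):

```python
import numpy as np

from my_model import MyModel  # hypothetical: your own model class

# Assumed DNest4 outputs: one particle per row in sample.txt;
# sample_info.txt rows are (level, log likelihood, tiebreaker, ID).
sample = np.atleast_2d(np.loadtxt("sample.txt"))
sample_info = np.atleast_2d(np.loadtxt("sample_info.txt"))

# Pick the saved particle with the highest recorded log likelihood.
i = np.argmax(sample_info[:, 1])
coords = sample[i]
print("Recorded logL:  ", sample_info[i, 1])

# Recompute the log likelihood of that point in isolation and compare.
model = MyModel()
print("Recomputed logL:", model.log_likelihood(coords))
```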
Thanks for the reply!
The model describes calibration of radio interferometric (VLBI) data. It has several parameters that describe a radio source and hundreds of parameters that describe the instrumental factors. The latter are latent variables of Gaussian processes with some kernels (whose parameters are currently fixed). There are some tricks used to identify the instrumental parameters: naively sampling them without any constraints results in degeneracies (here the flexibility of the perturb function helped a lot; a rough sketch of what I mean is below). The most puzzling thing is that a model with the same structure but lower dimension (shorter observing time and thus fewer instrumental parameters) samples just fine.
I printed out best_ever_particle and found that the abrupt change of logL occurred when ~3% of the instrumental parameters changed significantly. A couple of them had quite unexpected values, and I hope this can be resolved with a tighter prior. Nevertheless, is there anything one can do to handle such severe phase transitions besides using a more informative prior?
I was using a model setup that artificially introduced such severe phase transitions. Now everything is fine. Closing the issue.
I'm really glad you solved this. Good luck with your application.
Hi! When I try to sample a high-dimensional model (Ndim ~ 500, fixed), the levels are created as usual (with higher and higher values of the log likelihood). However, at some point a level is created with a much higher log likelihood than expected: the last levels are -349527, -346412, -342993 and then suddenly -284740. After this, no new levels are created - all particles fall into the previous levels. I waited for a long time and no new levels appeared. Here are the last 2 lines from levels.txt, and here are pictures:
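A plot like the attached logL vs. log(X) one can be regenerated from levels.txt roughly as follows (just a sketch; it assumes the usual whitespace-separated DNest4 layout, with log(X) in the first column and the level log likelihood in the second):

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed levels.txt columns: log(X), log likelihood, tiebreaker,
# accepts, tries, exceeds, visits (the '#' header is skipped).
levels = np.loadtxt("levels.txt")

plt.plot(levels[:, 0], levels[:, 1], "o-")
plt.xlabel("log(X)")
plt.ylabel("level log likelihood")
plt.title("logL of the created levels vs. log(X)")
plt.show()
```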
For the same problem with ~100 dimensions the sampling works perfectly and the results are as expected (I use artificially created data with a known model). The issue remains both for the default volume ratio (~2.7) and for a ratio of 10. As far as I understand, this may be some severe phase transition. The logL vs. log(X) picture shows a sudden rise at log(X) = -400, like the one in the 2014 arXiv DNest paper on trans-dimensional models (top panel of Figure 5 there, at log(X) = -10). But it seems that in my case such phase transitions cannot be handled easily. Here is the full picture of the levels vs. iteration dependence. Is this what is expected? What should I try in order to fix it? I really like DNest because of its flexible proposal implementation (it would be non-trivial to implement my model with PolyChord), and our model is inherently trans-dimensional (I saw trans-dimensional material in the PolyChord repo but have never seen a working implementation).
Best Regards, Ilya
P.S. Thank you for such a nice tool!