sblunt / orbitize

Orbit-fitting for directly imaged objects
https://orbitize.info

Added `check_prior_support()` to `sampler.MCMC()` #253

Closed: adj-smith closed this pull request 3 years ago

adj-smith commented 3 years ago

Added a function to ensure that all walker starting positions are valid given the current priors, and added a call to this function in the run_sampler() method.
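For orientation, here is a minimal standalone sketch of the check described above. The merged version is a method on the MCMC sampler and reads the walker positions from the sampler itself; the array layout, helper signature, and error message below are assumptions.

```python
import numpy as np


def check_prior_support(walker_positions, priors_list):
    """
    Standalone sketch: verify that every walker's starting position has
    finite prior probability for every parameter.

    walker_positions : (n_walkers, n_params) array (assumed layout)
    priors_list : list of orbitize prior objects exposing compute_lnprob()
    """
    bad_indices = set()
    for walker in walker_positions:
        for idx, (value, prior) in enumerate(zip(walker, priors_list)):
            lnp = prior.compute_lnprob(np.array([value]))
            if not np.all(np.isfinite(lnp)):
                bad_indices.add(idx)
    if bad_indices:
        raise ValueError(
            "Walker initial positions have zero prior probability for "
            "parameter indices: {}".format(sorted(bad_indices))
        )
```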

semaphoreP commented 3 years ago

Hi Adam, this is great. Could you write a test in our tests folder that checks this function works as expected? (i.e., it raises an exception when you expect it to, and it doesn't when everything is valid). Let me know if that makes sense. Happy to explain in more detail if needed.
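A pytest-style sketch of the kind of test being requested, written against the standalone helper sketched above rather than the actual sampler method; using UniformPrior(0, 1) as a stand-in prior with bounded support is an assumption.

```python
import numpy as np
import pytest

from orbitize import priors


def test_check_prior_support():
    # check_prior_support is the standalone helper from the sketch above;
    # UniformPrior(0, 1) stands in for any prior with bounded support.
    my_priors = [priors.UniformPrior(0.0, 1.0)]

    good_positions = np.array([[0.25], [0.75]])  # both walkers inside support
    bad_positions = np.array([[0.25], [1.50]])   # second walker out of bounds

    # Valid starting positions should pass silently
    check_prior_support(good_positions, my_priors)

    # Invalid starting positions should raise
    with pytest.raises(ValueError):
        check_prior_support(bad_positions, my_priors)
```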

adj-smith commented 3 years ago

Hopefully I handled the procedure correctly here. Still growing my GitHub legs... :) I added another pull request for the test file.

semaphoreP commented 3 years ago

hey @adj-smith, looks like you are missing a `:` and it's causing a syntax error.

adj-smith commented 3 years ago

> hey @adj-smith, looks like you are missing a `:` and it's causing a syntax error.

Thank you, fixed now. Apologies.

semaphoreP commented 3 years ago

hey @adj-smith, sorry, I was just thinking about this more, and I think we should test the values one set of orbital elements at a time, rather than testing all values for a given parameter. This is because we will probably support covariant priors in the future, where the prior probability depends on the joint value of two or more parameters at once. So instead of calling the compute_lnprob function of each individual prior, we should use the priors.all_lnpriors(params, priors) function and check each set of parameters. Let me know if this makes sense. Thanks!
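A sketch of the per-orbit check being suggested, using the priors.all_lnpriors(params, priors) call named above; the helper name and error message are illustrative.

```python
import numpy as np

from orbitize import priors


def check_prior_support_joint(walker_positions, priors_list):
    """
    Sketch: evaluate the joint log prior of each walker's full set of
    orbital elements, so covariant priors are handled correctly.
    """
    for i, walker in enumerate(walker_positions):
        if not np.isfinite(priors.all_lnpriors(walker, priors_list)):
            raise ValueError(
                "Walker {} starts outside the joint prior support.".format(i)
            )
```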

adj-smith commented 3 years ago

Not a problem, I'll try to whip something up.

adj-smith commented 3 years ago

After some thought, I implemented a version that uses both checks in the same function, calling priors.all_lnpriors() after the initial loop. Error checking with all_lnpriors() alone gives the user less detailed information about which parameters are causing issues, since it checks the entire set at once, so it seems valuable to report specifics where possible and only fall back to the generic failure when the problem is due to covariant priors. I don't immediately see a way around this trade-off.

I'm unsure how to modify my test to verify that this works. The existing test still passes with no changes, but I'm not sure how to write a test that checks the covariant case fails as expected. If you have suggestions I can apply them. However, I've officially started classes, so my time is rapidly becoming more limited...
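A compact sketch of the combined approach described in this comment: a per-parameter pass runs first so the error can name the offending parameters, then priors.all_lnpriors() runs as a generic catch-all. The helper name and messages are assumptions.

```python
import numpy as np

from orbitize import priors


def check_prior_support_combined(walker_positions, priors_list):
    """
    Sketch of the combined check: per-parameter pass for informative
    errors, then a joint all_lnpriors() pass as a generic catch-all.
    """
    # Pass 1: per-parameter check, so the error can name the parameters
    bad_indices = set()
    for walker in walker_positions:
        for idx, (value, prior) in enumerate(zip(walker, priors_list)):
            if not np.all(np.isfinite(prior.compute_lnprob(np.array([value])))):
                bad_indices.add(idx)
    if bad_indices:
        raise ValueError(
            "Initial positions violate the priors for parameter indices: "
            "{}".format(sorted(bad_indices))
        )

    # Pass 2: joint check; catches failures that only appear jointly
    # (e.g. future covariant priors) but can only report a generic error
    for i, walker in enumerate(walker_positions):
        if not np.isfinite(priors.all_lnpriors(walker, priors_list)):
            raise ValueError("Walker {} fails the joint prior check.".format(i))
```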

semaphoreP commented 3 years ago

Ah ok, since we typically don't operate with covariant priors, why don't we do it your original way when there are no covariant priors. When there are covariant priors, we have to use priors.all_lnpriors(), because the function to compute the prior probability of a single parameter of a covariant prior is undefined (the probability depends on all of the parameters being passed in together), so your method won't actually work with covariant priors (at least as we implemented them).

I actually just realized that covariant priors have not been merged into main yet but are part of PR #216, so we will need to modify some things there. For now, maybe give the Prior base class an attribute like correlated that defaults to False. All priors that inherit from it will then have that attribute set to False by default, and only the future correlated priors we implement will set it to True (which we can do in #216). Your code can then just loop through the list of priors to check whether any prior has correlated set to True, and fall back to the priors.all_lnpriors() method if that is the case.

This issue is more complicated than I expected!
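A sketch of the dispatch being proposed, assuming a correlated attribute on the prior base class (to be set to True by the covariant priors in #216) and the all_lnpriors() fallback; everything else below is illustrative.

```python
import numpy as np

from orbitize import priors

# Suggested base-class change (in orbitize.priors): the Prior base class
# gains a class attribute `correlated = False`; covariant priors from
# PR #216 would override it with True.


def check_prior_support(walker_positions, priors_list):
    """
    Sketch of the proposed dispatch: fall back to the joint check only
    when a covariant prior (one with correlated=True) is present.
    """
    if any(getattr(p, "correlated", False) for p in priors_list):
        # Covariant priors: only the joint probability is defined, so use
        # all_lnpriors() and accept a less specific error message.
        for i, walker in enumerate(walker_positions):
            if not np.isfinite(priors.all_lnpriors(walker, priors_list)):
                raise ValueError(
                    "Walker {} is outside the prior support.".format(i)
                )
    else:
        # No covariant priors: check parameter by parameter so the error
        # can name the offending parameter indices.
        bad_indices = set()
        for walker in walker_positions:
            for idx, (value, prior) in enumerate(zip(walker, priors_list)):
                if not np.all(np.isfinite(prior.compute_lnprob(np.array([value])))):
                    bad_indices.add(idx)
        if bad_indices:
            raise ValueError(
                "Initial positions violate the priors for parameter "
                "indices: {}".format(sorted(bad_indices))
            )
```

This keeps the informative per-parameter error messages in the common case while remaining correct once covariant priors land.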

adj-smith commented 3 years ago

This should be correct now. I did not update the test file to cover the new functionality; I wasn't sure how to, since there aren't any covariant priors (yet).

semaphoreP commented 3 years ago

I manually ran the tests and they passed. @sblunt, you can merge this when you like.