drbenvincent / darc_toolbox

Run adaptive decision making experiments
MIT License

add Model/Design integration tests #33

Closed: drbenvincent closed this issue 5 years ago

drbenvincent commented 5 years ago

Need to test all combinations of design and model (appropriate for that design) so that we can double-check that everything works together nicely. A sketch of such a parametric test follows the list below.

- test_model_design_integration.py
- Fix model/design problems that come up
- Chase down and fix failing tests which indicate errors in certain model/design combinations
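A minimal sketch of what such a parametric integration test could look like. Every name below (`ToyModel`, `ToyDesignGenerator`, `prior`, `get_next_design`, `update_beliefs`) is a hypothetical stand-in for the real darc_toolbox API; this illustrates the test pattern, not the actual implementation:

```python
import pytest
import scipy.stats as stats

# Placeholder classes standing in for real darc_toolbox models/designs;
# all names here are hypothetical and only illustrate the pattern.
class ToyModel:
    prior = {"logk": stats.norm(-4, 1)}

    def update_beliefs(self, design, response):
        pass  # the real model would update its posterior here

class ToyDesignGenerator:
    def get_next_design(self, model):
        return {"delay": 30}  # the real generator optimises over a design space

MODEL_DESIGN_COMBINATIONS = [(ToyModel, ToyDesignGenerator)]

@pytest.mark.parametrize("model_class, design_class", MODEL_DESIGN_COMBINATIONS)
def test_model_design_integration(model_class, design_class):
    """Mini experiment loop: generate a design, simulate a response,
    update the model, and assert we never run out of designs."""
    model, design_generator = model_class(), design_class()
    true_params = {name: float(d.rvs()) for name, d in model.prior.items()}
    for _ in range(5):
        design = design_generator.get_next_design(model)
        assert design is not None, "design generator returned no design"
        # a real test would derive the response from true_params and the design
        response = bool(stats.bernoulli(0.5).rvs())
        model.update_beliefs(design, response)
```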

drbenvincent commented 5 years ago

This test we've just added is OK, and means we can remove debugging_parameter_recovery.py. However, it is not easy to parameterise over different models because we hand-define the true parameters. I believe I already solved this issue elsewhere, where we generate faux true parameters for any model simply by drawing one sample from the prior over parameters.
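A sketch of that trick, under the (unverified) assumption that each model exposes its priors as a dict of scipy.stats frozen distributions:

```python
import scipy.stats as stats

def faux_true_params(model):
    """Draw one sample from each parameter's prior to serve as the 'true'
    parameters of a simulated participant. Assumes (hypothetically) that
    model.prior maps parameter names to scipy.stats frozen distributions;
    the real attribute in darc_toolbox may differ."""
    return {name: float(dist.rvs()) for name, dist in model.prior.items()}

# Example with a toy prior
class ToyModel:
    prior = {"logk": stats.norm(loc=-4.5, scale=1), "alpha": stats.halfnorm(scale=2)}

print(faux_true_params(ToyModel()))  # e.g. {'logk': -3.9, 'alpha': 1.2}
```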

drbenvincent commented 5 years ago

Failing tests

Notes

  1. Sometimes we are getting no designs. I suspect this is because the true parameters sampled from the prior make most designs highly predictable, so we're pruning too many/all designs. It would be better to pick the median or modal values of the priors rather than a random sample (see the sketch below).
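Under the same assumed prior representation as above, swapping the random draw for a central value is straightforward; scipy frozen distributions expose `.median()` and `.mean()` directly:

```python
def central_true_params(model, how="median"):
    """Pick central values of each prior instead of a random draw, so the
    simulated 'true' participant is not an extreme outlier that makes most
    candidate designs trivially predictable. Same hypothetical model.prior
    representation as before; the mode would need computing separately,
    since scipy frozen distributions don't expose it directly."""
    pick = (lambda d: d.median()) if how == "median" else (lambda d: d.mean())
    return {name: float(pick(dist)) for name, dist in model.prior.items()}
```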
drbenvincent commented 5 years ago

Failing tests

There's a problem in generating the full set of initial designs for the risky models.

drbenvincent commented 5 years ago

I think my work here is done. We now have some nice parametric testing of how models and designs integrate, doing a mini parameter recovery for each combination.

What may change is the API for instantiating the design objects appropriate for each experiment type; see #36.