Closed · drbenvincent closed this 5 years ago
This test we've just added is ok, and means we can remove `debugging_parameter_recovery.py`. However, it is not easy to parameterise for different models because we had to hand-define the true models. I believe I already solved this issue elsewhere: we can generate faux true parameters for any model simply by drawing 1 sample from the prior over parameters.
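That one-draw-from-the-prior idea can be sketched as follows. The `prior` dict and the scipy distributions in it are hypothetical stand-ins (the package's actual model classes may hold their priors differently); the point is only the mechanism of sampling once per parameter:

```python
import numpy as np
import scipy.stats as stats

def sample_true_params(prior, seed=None):
    """Draw one sample from each parameter's prior; the result can serve
    as the faux 'true' parameters in a parameter-recovery test, for any
    model, without hand-defining values per model."""
    rng = np.random.RandomState(seed)
    return {name: float(dist.rvs(random_state=rng))
            for name, dist in prior.items()}

# Hypothetical prior for a hyperbolic discounting model
prior = {
    "logk": stats.norm(loc=-4.5, scale=1.0),   # log discount rate
    "alpha": stats.halfnorm(scale=2.0),        # comparison acuity, always > 0
}

true_params = sample_true_params(prior, seed=42)
```

Because the sample comes from the model's own prior, any model that exposes a prior gets valid faux true parameters for free.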
Failing tests

| Failing test | Notes |
| --- | --- |
| `test_model_design_integration_risky[Hyperbolic]` | no designs |
| `test_model_design_integration_risky[ProportionalDifference]` | no designs |
| `test_model_design_integration_delayed_and_risky[MultiplicativeHyperbolic]` | no designs |
| `test_update_beliefs[ConstantSensitivity]` | this is model specific, due to the exponent `b` parameter having negative values. This is a known problem with this model and I'll get to it at some point. |

There's a problem in generating the full set of initial designs for the risky models.
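For the `b` exponent issue, one plausible fix (an assumption on my part, not what the model currently does) is to give the exponent a prior with strictly positive support, so no draw can produce an invalid negative exponent:

```python
import scipy.stats as stats

# A Normal prior over the exponent can draw b <= 0, which is the kind
# of value that breaks the ConstantSensitivity update_beliefs test.
unsafe_b_prior = stats.norm(loc=1.0, scale=0.5)

# A half-normal (or log-normal) prior has support on (0, inf),
# so every draw of b is a valid exponent.
safe_b_prior = stats.halfnorm(scale=1.0)

draws = safe_b_prior.rvs(size=1_000, random_state=1)
```

The exact distribution and scale here are illustrative; the point is the positive support.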
I think my work here is done. We now have some nice parametric testing of how models and designs integrate together, doing a mini parameter recovery.
What may change is the API for instantiating the design objects appropriate to each experiment type; see #36.
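As a sketch of what "mini parameter recovery" means here (a toy grid-based stand-in, not the package's actual models or design generators): draw a faux true parameter from the prior, run a short simulated experiment, update beliefs after each trial, and check the posterior lands near the true value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hyperbolic discounting: subjective value of a delayed reward.
def discount(delay, k):
    return 1.0 / (1.0 + k * delay)

# Probability of choosing the delayed reward over an immediate one worth
# half as much, via a logistic choice rule (the slope of 10 is arbitrary).
def p_choose_delayed(delay, k):
    return 1.0 / (1.0 + np.exp(-10.0 * (discount(delay, k) - 0.5)))

# 1. Faux true parameter: one draw from a (uniform grid) prior.
k_grid = np.linspace(0.001, 0.5, 500)
prior = np.full(k_grid.size, 1.0 / k_grid.size)
true_k = rng.choice(k_grid, p=prior)

# 2. Mini experiment: present designs (delays), simulate responses,
#    and update beliefs with a grid-based Bayesian update.
posterior = prior.copy()
for delay in rng.uniform(1.0, 100.0, size=40):
    p = p_choose_delayed(delay, true_k)
    chose_delayed = rng.random() < p
    like = p_choose_delayed(delay, k_grid)
    posterior *= like if chose_delayed else 1.0 - like
    posterior /= posterior.sum()

# 3. Recovery check: the posterior mean should sit near the true parameter.
k_hat = float((k_grid * posterior).sum())
```

The parametric tests do the same loop for every model/design pairing, which is why a failure in one combination (like the `no designs` cases above) shows up as a single parametrised test failing.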
Need to test all combinations of design and model (appropriate for that design) so that we can double-check that everything works together nicely.
- Remove the `dev/` folder, which we've made redundant
- Add `test_model_design_integration.py`
- Fix model/design problems that come up
- Chase down and fix failing tests which indicate errors in certain model/design combinations