Closed barkls closed 4 years ago
Hello @barkls! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:
holopy/inference/result.py
:Line 263:80: E501 line too long (98 > 79 characters)
Travis failed on a test that had passed on my own machine and shouldn't be related to any code I touched. I restarted the Travis build and it passed, so something strange is going on that I expect was a pre-existing problem with reproducibility in one of the tests. Unfortunately I think there's now no record of the failing build or which test was problematic.
Ready to merge @briandleahy
Yes, I saw. I believe it was a test on a `BoundedGaussian`, testing that all samples returned were within the bounds. IIRC there was no seed set in the test, and it was only testing 10 samples. (I was planning on discussing it with you at the holopy meeting yesterday, but I forgot that we were not meeting until next week.)
To me this seems like an edge case that happens once in a while (samples are generated outside the bounds).
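To illustrate why seeding matters here, below is a minimal sketch (not holopy's actual `BoundedGaussian` implementation) of a bounded sampler using rejection sampling. With a fixed seed the bounds test becomes reproducible; without one, a rare out-of-bounds draw in a buggy sampler would only fail intermittently:

```python
import numpy as np

def sample_bounded_gaussian(mean, std, lower, upper, n, rng):
    """Draw n samples from a Gaussian truncated to [lower, upper].

    Hypothetical sketch via rejection sampling: redraw any sample
    that lands outside the bounds until all n are inside.
    """
    samples = rng.normal(mean, std, n)
    while True:
        out = (samples < lower) | (samples > upper)
        if not out.any():
            return samples
        samples[out] = rng.normal(mean, std, out.sum())

# Seeding the generator makes the test deterministic across runs.
rng = np.random.default_rng(42)
samples = sample_bounded_gaussian(0.0, 1.0, -2.0, 2.0, 10, rng)
assert ((samples >= -2.0) & (samples <= 2.0)).all()
```

A test written this way fails (or passes) identically on Travis and on a local machine, instead of depending on which 10 samples happen to be drawn.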
I would say "someone" should do the following:
I can take a stab at that when I get a chance.
In the meantime, I'll approve this PR.
Prompted by #354 (Fixes #354). The docs imposed an arbitrary restriction that the `fit` and `sample` high-level functions could only take strategy names as strings, and not actual `Strategy` objects, whereas the `model.fit` and `model.sample` methods could handle either strategy names or objects. I found this confusing and arbitrary, so I taught the high-level functions to use objects as well as names. However, this resulted in there being no difference in functionality between the high-level functions and the model methods. I decided to deprecate the model methods so there's only one way to do things, and it's the way that is most flexible and best documented. Note that these methods were introduced as part of the 3.0 development cycle and aren't described very well in the docs, so we should be able to remove them with the rest of the old fitting infrastructure.