Closed · sabinala closed this 8 months ago
@SamWitty let me know if you'd like to make `solution_mappings` a model fixture, but this way of doing it is working fine for now.
@sabinala, in the future could you request a review from just one person? Thanks! Generally the best practice is to request a review from the person who created the issue you're addressing.
Sorry, that was my bad. I asked her to add me.
All good! Happy to have your eyes on this too, I just didn't want it merged until I had a chance to take a look.
It just occurred to me that the separate tests for `float`, negative `int`, `tensor(int)`, etc. could be merged into a single test that takes a `num_samples` fixture composed of `float`, negative `int`, `tensor(int)`, etc.
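A minimal sketch of the consolidation @djinnome describes, with a simple stand-in validator in place of the real interface (the values and function names below are illustrative, not the repository's actual code):

```python
import pytest
import torch

# Illustrative stand-in for the input validation the real interface performs.
def _validate_num_samples(num_samples):
    if not isinstance(num_samples, int) or num_samples < 1:
        raise ValueError(f"num_samples must be a positive int, got {num_samples!r}")
    return num_samples

# One list of invalid values replaces the separate float/negative-int/tensor tests.
BAD_NUM_SAMPLES = [2.5, -2, 0, torch.tensor(2)]

@pytest.mark.parametrize("num_samples", BAD_NUM_SAMPLES)
def test_rejects_bad_num_samples(num_samples):
    # Every bad value is expected to trigger the same validation error.
    with pytest.raises(ValueError):
        _validate_num_samples(num_samples)
```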
@djinnome how would that work exactly? There is a fixture called `NUM_SAMPLES` that just has `[2]`. Would expanding that to something like `[2, -2, 0, torch.tensor(2), ...]` cause problems elsewhere?
@djinnome or could you do it by making a new fixture called `BAD_NUM_SAMPLES`?
@SamWitty requested changes have been made, this is ready for review (again). I think @djinnome has a good point about consolidating these tests. If adding `BAD_NUM_SAMPLES` and `BAD_NUM_ITERATIONS` fixtures is a good way to go about this, let me know and I'll make a new PR for it.
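A sketch of what those fixtures might look like alongside the existing `NUM_SAMPLES = [2]` mentioned above (the specific bad values, and the assumption that `NUM_ITERATIONS` mirrors `NUM_SAMPLES`, are illustrative):

```python
import torch

# Existing happy-path fixtures, per the discussion above
# (NUM_ITERATIONS assumed to mirror NUM_SAMPLES).
NUM_SAMPLES = [2]
NUM_ITERATIONS = [2]

# Invalid values every interface should reject with a ValueError.
BAD_NUM_SAMPLES = [2.5, -2, 0, torch.tensor(2)]
BAD_NUM_ITERATIONS = [2.5, -2, 0, torch.tensor(2)]
```

Keeping the bad values in separate lists also addresses the earlier question about expanding `NUM_SAMPLES` itself: the happy-path tests that consume it stay untouched.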
I think generating "bad" fixtures is a "good" way of going about consolidating the tests...
By the way, this pattern of:
1. Write a test that fails.
2. Make it pass.
3. Refactor to remove duplication.
is called "test-driven development" and is an effective way to write clean code that works!
I like @djinnome's suggested change, and your (@sabinala's) suggestion for how to implement it. I think given that it's a small change you can go ahead and make that change here in this PR, and then we'll merge it in.
Normally I would suggest having more smaller PRs, but the suggested change would directly overwrite the changes introduced here.
@SamWitty consolidation complete! Question: right now, I'm using `MODEL_URLS` from `fixtures.py` for the sample method tests, but this will have these tests run for every model in that list! Since I only want to test different inputs for `num_samples`, would it be better to just grab the first url in the list? Or just leave it? (This is not as much of an issue for testing `calibrate` since the tests are skipped if there's no data attached...which as of now only includes one model)
Better to just leave it. If the combinatorics get really cumbersome later we can think about how to lighten the testing load, but these tests are very very fast, so it shouldn't be a problem.
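For context on the combinatorics: stacked `pytest.mark.parametrize` decorators (or equivalently parametrized fixtures) run a test once per element of the cartesian product of their value lists, which is why every URL in `MODEL_URLS` gets exercised. An illustrative sketch with stand-in values:

```python
import pytest

MODEL_URLS = ["model_a.json", "model_b.json"]  # stand-ins for the real fixture
BAD_NUM_SAMPLES = [2.5, -2, 0]

# Stacked parametrize decorators take the cartesian product:
# 2 urls x 3 bad values = 6 collected test cases.
@pytest.mark.parametrize("url", MODEL_URLS)
@pytest.mark.parametrize("num_samples", BAD_NUM_SAMPLES)
def test_combinatorics(url, num_samples):
    assert url in MODEL_URLS
    assert num_samples in BAD_NUM_SAMPLES
```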
@SamWitty sounds good, this PR is ready then :)
This PR adds several tests to `test_interfaces.py` to check that `ValueError`s are raised appropriately by `calibrate`, `sample` and `ensemble_sample`. These tests were previously written in separate files using `unittest`. These files have been deleted and the tests are now written with `pytest`.

Closes #464
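For readers unfamiliar with the migration, the before/after pattern is roughly the following; the validator and names are illustrative stand-ins, not the deleted files' actual contents:

```python
import unittest
import pytest

def validate_num_samples(n):
    # Stand-in for the validation the real interfaces perform.
    if not isinstance(n, int) or n < 1:
        raise ValueError("num_samples must be a positive int")

# Before: unittest style, one TestCase class per file.
class TestSample(unittest.TestCase):
    def test_bad_num_samples(self):
        with self.assertRaises(ValueError):
            validate_num_samples(-2)

# After: pytest style, a plain function using pytest.raises,
# easily parametrized over a whole list of bad values.
@pytest.mark.parametrize("n", [2.5, -2, 0])
def test_bad_num_samples(n):
    with pytest.raises(ValueError):
        validate_num_samples(n)
```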