ciemss / pyciemss

Causal and probabilistic reasoning with continuous time dynamical systems

Removing `unittest` in favor of `pytest` #465

Closed · sabinala closed this 8 months ago

sabinala commented 8 months ago

This PR adds several tests to `test_interfaces.py` to check that `ValueError`s are raised appropriately by `calibrate`, `sample`, and `ensemble_sample`. These tests were previously written in separate files using `unittest`; those files have been deleted and the tests are now written with `pytest`.

Closes #464
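As a point of reference, a minimal sketch of this style of check might look like the following; the import path, `sample` call signature, and model URL here are illustrative assumptions rather than the actual fixtures from `test_interfaces.py`:

```python
import pytest

from pyciemss.interfaces import sample  # assumed import path

# Placeholder URL; the real tests draw model URLs from fixtures.py.
MODEL_URL = "https://example.com/petrinet_model.json"


def test_sample_raises_on_negative_num_samples():
    # pytest.raises replaces unittest's assertRaises context manager:
    # the test passes only if the call raises a ValueError.
    with pytest.raises(ValueError):
        sample(MODEL_URL, end_time=10.0, logging_step_size=1.0, num_samples=-2)
```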

sabinala commented 8 months ago

@SamWitty let me know if you'd like to make `solution_mappings` a model fixture, but this way of doing it is working fine for now.

SamWitty commented 8 months ago

@sabinala , in the future could you request a review from just one person? Thanks!

Generally the best practice is to request a review from the person who created the issue you're addressing.

djinnome commented 8 months ago

> @sabinala , in the future could you request a review from just one person? Thanks!
>
> Generally the best practice is to request a review from the person who created the issue you're addressing.

Sorry, that was my bad. I asked her to add me.

SamWitty commented 8 months ago

> > @sabinala , in the future could you request a review from just one person? Thanks! Generally the best practice is to request a review from the person who created the issue you're addressing.
>
> Sorry, that was my bad. I asked her to add me.

All good! Happy to have your eyes on this too, I just didn't want it merged until I had a chance to take a look.

djinnome commented 8 months ago

It just occurred to me that the separate tests for float, negative int, `tensor(int)`, etc. could be merged into a single test that takes a `num_samples` fixture composed of those same values.

sabinala commented 8 months ago

@djinnome how would that work exactly? There is a fixture called `NUM_SAMPLES` that just has `[2]`. Would expanding that to something like `[2, -2, 0, torch.tensor(2), ...]` cause problems elsewhere?

sabinala commented 8 months ago

@djinnome or could you do it by making a new fixture called `BAD_NUM_SAMPLES`?

sabinala commented 8 months ago

@SamWitty the requested changes have been made, so this is ready for review (again). I think @djinnome has a good point about consolidating these tests. If adding `BAD_NUM_SAMPLES` and `BAD_NUM_ITERATIONS` fixtures is a good way to go about this, let me know and I'll make a new PR for it.

djinnome commented 8 months ago

I think generating "bad" fixtures is a "good" way of going about consolidating the tests...
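A minimal sketch of what that consolidation could look like, assuming a parametrized fixture and the same illustrative `sample` call as above (the fixture name, values, and signature are hypothetical, not the code merged in this PR):

```python
import pytest
import torch

from pyciemss.interfaces import sample  # assumed import path

MODEL_URL = "https://example.com/petrinet_model.json"  # placeholder


@pytest.fixture(params=[-2, 0, 2.5, torch.tensor(2)])
def bad_num_samples(request):
    # Each invalid value becomes its own test case via the fixture params.
    return request.param


def test_sample_rejects_bad_num_samples(bad_num_samples):
    # One test body now covers the float, non-positive int, and tensor cases.
    with pytest.raises(ValueError):
        sample(MODEL_URL, end_time=10.0, logging_step_size=1.0,
               num_samples=bad_num_samples)
```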

djinnome commented 8 months ago

By the way, this pattern of:

1. Add a little test.
2. Run all tests and fail.
3. Make a little change.
4. Run the tests and succeed.
5. Refactor to remove duplication.

is called "test-driven development" and is an effective way to write clean code that works!

SamWitty commented 8 months ago

I like @djinnome's suggested change, and your (@sabinala's) suggestion for how to implement it. Given that it's a small change, I think you can go ahead and make it here in this PR, and then we'll merge it in.

Normally I would suggest more, smaller PRs, but the suggested change would directly overwrite the changes introduced here.

sabinala commented 8 months ago

@SamWitty consolidation complete! Question: right now I'm using `MODEL_URLS` from `fixtures.py` for the sample method tests, but this means these tests run for every model in that list. Since I only want to test different inputs for `num_samples`, would it be better to just grab the first URL in the list, or just leave it? (This is not as much of an issue for testing `calibrate`, since those tests are skipped if there's no data attached, and as of now only one model has data.)

SamWitty commented 8 months ago

> @SamWitty consolidation complete! Question: right now I'm using `MODEL_URLS` from `fixtures.py` for the sample method tests, but this means these tests run for every model in that list. Since I only want to test different inputs for `num_samples`, would it be better to just grab the first URL in the list, or just leave it? (This is not as much of an issue for testing `calibrate`, since those tests are skipped if there's no data attached, and as of now only one model has data.)

Better to just leave it. If the combinatorics get really cumbersome later we can think about how to lighten the testing load, but these tests are very very fast, so it shouldn't be a problem.
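To put rough numbers on the combinatorics (using stand-in lists; the real `MODEL_URLS` and invalid values live in `fixtures.py` and in the consolidation above), stacked parametrization multiplies the number of generated cases, but each case only exercises input validation, so the total cost stays small:

```python
import pytest

# Stand-in lists purely for illustration.
MODEL_URLS = ["model_a.json", "model_b.json", "model_c.json"]
BAD_NUM_SAMPLES = [-2, 0, 2.5]


@pytest.mark.parametrize("url", MODEL_URLS)
@pytest.mark.parametrize("num_samples", BAD_NUM_SAMPLES)
def test_every_combination_is_generated(url, num_samples):
    # pytest generates len(MODEL_URLS) * len(BAD_NUM_SAMPLES) = 9 cases,
    # one per (url, value) pair; the real tests would just assert that the
    # interface rejects the value, so no simulation work is done.
    assert num_samples <= 0 or not isinstance(num_samples, int)
```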

sabinala commented 8 months ago

@SamWitty sounds good, this PR is ready then :)