Closed: sbillinge closed this issue 2 weeks ago
@cadenmyers13 @bobleesj please could you help me with this one. Figure out why it is failing and the fix?
Since the voxels at `qmin` and `qmax` are not included anymore, the comparison of `result` with `np.allclose` to `test_sofq_cut_10to40px` and `test_sofq_cut_15to35px` is failing. I'm thinking this is because the test data was created with the inclusion of `qmin` and `qmax`. By changing from `<=`/`>=` to `<`/`>`, we are now excluding `qmin` and `qmax`. We could probably fix this by changing the test data files. Or I can search for another way...
The parameter `atol` in `np.allclose(array1, array2, atol=...)` allows you to set an absolute tolerance. I set the tolerance at 80 and 40 for the two respective tests and they passed. This is probably not how we want to handle this, but it at least confirms that it's the edge cases in the test data that are giving us the error.
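For reference, a made-up illustration of how a large `atol` can absorb a boundary mismatch that the default tolerance would catch (the arrays and values below are invented, not the actual test data):

```python
import numpy as np

expected = np.array([0.0, 35.0, 80.0])  # imagine an edge voxel at the end
result = np.array([0.0, 35.0, 0.0])     # exclusive cut drops that boundary voxel

print(np.allclose(expected, result))           # False: default atol is 1e-8
print(np.allclose(expected, result, atol=80))  # True: the boundary difference is absorbed
```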
Thanks for this @cadenmyers13 . Actually, the problem is my fault because I am not practicing what I teach! Instead of fixing the code to give the right behavior (which caused the test to fail) and then asking you to fix the test, I should have modified the test to express the behavior I wanted (causing it to fail) and then asked you to fix the code. My apologies on that. Please could you pretend that I didn't do that, and come up with the right tests for the behavior we want, to replace the existing tests? It is always important to have tests first fail and then pass (a test that passes from the start may be passing because it is not testing what we intended). So my suggestion is to set the code back to `<=` so the current tests pass again, then write tests that test the right behavior and ensure they fail, then re-fix the code. @bobleesj just tagging you here to see the conversation. I need to do better here, but we want to gently change the testing culture in this direction, and I will do my best to be disciplined here too.
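A sketch of what such a behavior-first test might look like, using a hypothetical stand-in for the real cut function (the actual project API may differ):

```python
import numpy as np

def sofq_cut(q, sofq, qmin, qmax):
    """Hypothetical stand-in for the project's cut function,
    written to express the *desired* exclusive behavior."""
    mask = (q > qmin) & (q < qmax)  # strict inequalities: exclude qmin and qmax
    return q[mask], sofq[mask]

def test_cut_excludes_boundary_voxels():
    q = np.arange(0, 51, dtype=float)
    sofq = np.ones_like(q)
    q_cut, _ = sofq_cut(q, sofq, qmin=10, qmax=40)
    # The voxels at exactly qmin and qmax must be excluded.
    assert q_cut.min() > 10
    assert q_cut.max() < 40
```

Run against the reverted (`<=`) code, this test should fail; once the strict inequalities are restored, it should pass.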
@sbillinge yes will note that
@sbillinge lesson here:
During refactoring, modify the existing tests to reflect the new expected behavior. Expect them to fail initially. Then, update the code to make the tests pass.
@sbillinge got it. @bobleesj will let you know if I need help
@sbillinge a bit more context in the PR title would be helpful too, although the commit msg is clear.
sorry, my fault. I have fixed it.
@sbillinge To me, it seems like we want to change the test data rather than the test itself. The test is functioning how we want (by comparing slices of the array when cutoffs are applied). Since the test data is static, we are comparing against a result we don't want. Below, the top image is what we want; the bottom image is what the test data shows.
@sbillinge Because this test data is used elsewhere in the test file, it's causing problems when I change it... working on a fix
closing. Replaced by #63