Closed: enderdzz closed this issue 2 years ago
Thanks for reporting this! That's indeed unexpected and weird behavior. At the moment I'm a bit confused about why the resulting logits do not match, since the inputs that ultimately get passed to the actual model (i.e. after all of the preprocessing is done) should match exactly.
Thanks for your reply, I've figured out the problem : )
When using a CUDA device, you need to make sure the data is in `torch.float64` precision (i.e. `torch.cuda.DoubleTensor`), so that the accumulated floating-point error stays below the test threshold. On the CPU, by contrast, both `float32` and `float64` pass this test.
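For completeness, a minimal sketch of this workaround (pure PyTorch, not the actual foolbox test; the model choice, input shape, and the 255 rescaling below are only illustrative of what a bounds transform plus preprocessing does):

```python
import torch
import torchvision.models as models

device = torch.device("cuda")

# Cast the model to float64, so its weights become torch.cuda.DoubleTensor.
model = models.shufflenet_v2_x1_0(pretrained=True).eval().to(device).double()

# Inputs in float64 as well.
x = torch.rand(4, 3, 224, 224, dtype=torch.float64, device=device)

with torch.no_grad():
    logits_a = model(x)
    # Mathematically equivalent input path, mimicking a bounds transform
    # (scale to [0, 255], then preprocessing scales back to [0, 1]).
    logits_b = model((x * 255.0) / 255.0)

# In float64 the discrepancy stays well below the test tolerance; with
# float32 on CUDA the same comparison can exceed rtol/atol of 1e-5.
assert torch.allclose(logits_a, logits_b, rtol=1e-5, atol=1e-5)
```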
Alright. As this doesn't seem to indicate any issue with the functionality of the transform, I'll close this issue for now.
Describe the bug
When I run this project's tests with the command:
pytest --pdb --cov=foolbox --cov-append --backend pytorch
I get errors in the `test_transform_bounds[pytorch_shufflenetv2-bounds1]` case:
To Reproduce
Use a GPU: export CUDA_VISIBLE_DEVICES="0"
Minimized script:
Expected behavior
This test case should pass with CUDA.
Or should the test threshold be relaxed a bit, e.g. `rtol=1e-4, atol=1e-4`?
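To illustrate what relaxing the tolerance would do (hypothetical numbers, assuming the test ultimately compares logits with `numpy.allclose`-style semantics):

```python
import numpy as np

# Hypothetical logits differing by ~3e-5, roughly the float32 drift seen on CUDA.
logits_ref = np.array([0.123456, -1.234567, 2.345678], dtype=np.float32)
logits_gpu = logits_ref + np.float32(3e-5)

print(np.allclose(logits_ref, logits_gpu, rtol=1e-5, atol=1e-5))  # False -> test fails
print(np.allclose(logits_ref, logits_gpu, rtol=1e-4, atol=1e-4))  # True  -> test passes
```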
Software (please complete the following information):
Additional context
I also tried `torchvision.models.shufflenet_v2_x1_0` and `torchvision.models.mobilenet_v2`; neither of them passed this test. However, when I set export CUDA_VISIBLE_DEVICES="" (i.e. do not use the GPU), this test case PASSED.
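A rough way to see this device dependence (a hypothetical script, not the foolbox test itself; the rescaling only mimics what a bounds transform plus preprocessing does):

```python
import torch
import torchvision.models as models

def max_logit_drift(model_fn, device, dtype=torch.float32):
    # Maximum absolute difference between logits for an input and for a
    # mathematically equivalent rescaled-then-restored version of it.
    model = model_fn(pretrained=True).eval().to(device=device, dtype=dtype)
    x = torch.rand(4, 3, 224, 224, device=device, dtype=dtype)
    with torch.no_grad():
        a = model(x)
        b = model((x * 255.0) / 255.0)
    return (a - b).abs().max().item()

for model_fn in (models.shufflenet_v2_x1_0, models.mobilenet_v2):
    print(model_fn.__name__)
    print("  cpu  float32:", max_logit_drift(model_fn, torch.device("cpu")))
    print("  cuda float32:", max_logit_drift(model_fn, torch.device("cuda")))
    print("  cuda float64:", max_logit_drift(model_fn, torch.device("cuda"), torch.float64))
```

On setups where the test fails, the CUDA float32 drift typically lands above the 1e-5 tolerance, while the CPU float32 and CUDA float64 drifts stay below it.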