Closed mert-kurttutan closed 2 years ago
Thank you for looking into this! I would really appreciate a PR fixing these.
I think it should be fine to set that half precision test to run on GPU-only by moving the test to gpu_test.py.
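A GPU-only test in gpu_test.py is typically gated with `pytest.mark.skipif`. A minimal sketch of that gating (the helper name, skip reason, and guarded import are my own additions, not torchinfo's actual test code):

```python
import pytest

def cuda_available() -> bool:
    # Guarded import so this sketch also runs where torch is not installed;
    # the real test suite imports torch unconditionally.
    try:
        import torch
        return bool(torch.cuda.is_available())
    except ImportError:
        return False

# Hypothetical GPU-only version of the half-precision test: pytest skips it
# entirely on machines without a CUDA device.
@pytest.mark.skipif(not cuda_available(), reason="half precision requires a CUDA device")
def test_input_size_half_precision() -> None:
    ...  # body would call summary() with dtypes=[torch.float16] on "cuda"
```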
For the other cases, we currently do not run these tests on GPUs (I don't think GitHub Actions supports this yet), but changes that allow these tests to pass on both CPU and GPU are welcome.
I think we should move the first part of test_input_size_half_precision to gpu_test, since that case only exercises the input_size parameter together with half precision.
I am not sure about the second warning case, since it is there to raise a warning when half precision is used on the CPU. Apart from this issue, I am ready to submit a PR.
Yep, that's what I meant
Then I will move only the first warning case to gpu_test and delete the second warning case, since it already raises a runtime error, right?
By second warning case, I mean the following in test_input_size_half_precision():
```python
with pytest.warns(
    UserWarning,
    match=(
        "Half precision is not supported on cpu. Set the `device` field or "
        "pass `input_data` using the correct device."
    ),
):
    summary(
        test,
        dtypes=[torch.float16],
        input_data=torch.randn((10, 2), dtype=torch.float16, device="cpu"),
        device="cpu",
    )
```
Yep, we can remove that test case. We can leave the warning in the code, though, since it will warn users on earlier versions of PyTorch that do not raise a runtime error.
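As a rough illustration of keeping that warning, here is a plain-Python sketch; the function name and signature are hypothetical and are not torchinfo's actual internals:

```python
import warnings

def warn_if_half_on_cpu(device: str, uses_half: bool) -> None:
    # Hypothetical stand-in for the check inside summary(): on newer
    # PyTorch a CPU half-precision forward pass raises a RuntimeError
    # by itself, but on older versions this warning is the only signal
    # the user gets.
    if uses_half and device == "cpu":
        warnings.warn(
            "Half precision is not supported on cpu. Set the `device` field or "
            "pass `input_data` using the correct device.",
            UserWarning,
        )
```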
**Describe the bug**
Using the current main branch (without any changes to the code), several test cases fail.
**To Reproduce**
Steps to reproduce the behavior: run `pytest` from the repository root.
**Expected behavior**
I think it is supposed to produce no failed cases (maybe a few warnings).
**Desktop:**
**More details**
After running pytest, the short summary output is the following, with the detailed session output below it:
```
========== short test summary info ==========
FAILED tests/exceptions_test.py::test_input_size_half_precision - RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to: []
FAILED tests/torchinfo_test.py::test_pack_padded - RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to: [Embedding: 1]
FAILED tests/torchinfo_test.py::test_namedtuple - RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to: []
FAILED tests/torchinfo_xl_test.py::test_eval_order_doesnt_matter - RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN te...
========== 4 failed, 65 passed, 1 skipped, 3 warnings in 8.47s ==========
```
```
========== test session starts ==========
platform linux -- Python 3.9.12, pytest-7.1.2, pluggy-1.0.0
rootdir: /home/mertkurttutan/Desktop/main/software-dev/pytest/torchinfo
plugins: cov-3.0.0
collected 70 items

tests/exceptions_test.py ...F.                                      [  7%]
tests/gpu_test.py ..                                                [ 10%]
tests/half_precision_test.py ...                                    [ 14%]
tests/torchinfo_test.py ............................F......F.......
.......                                                             [ 85%]
tests/torchinfo_xl_test.py ..F...s...                               [100%]

========== FAILURES ==========
__________ test_input_size_half_precision __________

model = Linear(in_features=2, out_features=5, bias=True)
x = [tensor([[0.6099, 0.2002], [0.7334, 0.5176], [0.0652, 0.5923],
     [0.8931, 0.7656], [0.12... [0.9878, 0.7974], [0.8638, 0.2712],
     [0.3899, 0.2676], [0.9009, 0.7832]], dtype=torch.float16)]
batch_dim = None, cache_forward_pass = False, device = 'cpu', mode =
```

The 3 warnings are not that important, since they were just deprecation warnings from torchvision. They were resolved once I used the new input formats.
Regarding failed cases, the first one stems from the following runtime error:
It seems that in PyTorch v1.12, half precision is not supported on the CPU (also see this remark here).
The other 3 failed cases occurred because the input tensors are initialized outside summary with no explicit device, so they were created on the CPU (in my case). But the model created inside summary uses the GPU automatically (unless I make CUDA unavailable), because of the following line in torchinfo.py (in the summary function).
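The device fallback being described can be sketched like this; it is my assumption about the behavior, not the actual line from torchinfo.py, and the function name is hypothetical:

```python
from typing import Optional

def resolve_device(requested: Optional[str], cuda_available: bool) -> str:
    # When the caller gives no device, summary() defaults to CUDA if it is
    # available, while tensors built outside summary() land on the CPU --
    # exactly the device mismatch behind the 3 failing tests.
    if requested is not None:
        return requested
    return "cuda" if cuda_available else "cpu"
```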
Indeed, these 3 cases were resolved once I passed device="cpu" when calling summary.