pytorch / pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration
https://pytorch.org

DISABLED test_graph_grad_scaling_foreach_True_fused_False_SGD_cuda_float32 (__main__.TestCudaOptimsCUDA) #136437

Closed: pytorch-bot[bot] closed this issue 6 days ago

pytorch-bot[bot] commented 3 weeks ago

Platforms: linux

This test was disabled because it is failing in CI. See recent examples and the most recent trunk workflow logs.

Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 18 failures and 6 successes.

Debugging instructions (after clicking on the recent samples link): DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green even when the test fails, but this makes the relevant failures harder to find in the logs. To find relevant log snippets:

  1. Click on the workflow logs linked above
  2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
  3. Grep for `test_graph_grad_scaling_foreach_True_fused_False_SGD_cuda_float32` (a minimal sketch of this step follows the list)
  4. There should be several runs of the test (flaky tests are rerun in CI) whose logs you can study.
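
As a convenience, here is a minimal Python sketch of the grep step, assuming you have already downloaded the raw log of the Test step to a local file (the log filename below is a placeholder):

```python
# Hypothetical helper: scan a downloaded CI log for every line mentioning the
# flaky test, printing line numbers so each rerun's output is easy to locate.
import sys

TEST_NAME = "test_graph_grad_scaling_foreach_True_fused_False_SGD_cuda_float32"

def find_test_runs(log_path: str) -> None:
    """Print each log line that mentions TEST_NAME, prefixed by its line number."""
    with open(log_path, errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            if TEST_NAME in line:
                print(f"{lineno}: {line.rstrip()}")

if __name__ == "__main__":
    # Placeholder filename; pass the real log path as the first argument.
    find_test_runs(sys.argv[1] if len(sys.argv) > 1 else "test_step.log")
```
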
Sample error message:

```
Traceback (most recent call last):
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1232, in not_close_error_metas
    pair.compare()
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 711, in compare
    self._compare_values(actual, expected)
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 841, in _compare_values
    compare_fn(
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1023, in _compare_regular_values_close
    if torch.all(matches):
RuntimeError: Host and device pointer dont match with cudaHostRegister. Please dont use this feature by setting PYTORCH_CUDA_ALLOC_CONF=use_cuda_host_register:False (default)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/var/lib/jenkins/workspace/test/test_cuda.py", line 4783, in test_graph_grad_scaling
    self.assertEqual(weight.grad, torch.full_like(weight.grad, grad_val))
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3850, in assertEqual
    error_metas = not_close_error_metas(
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1239, in not_close_error_metas
    f"Comparing\n\n"
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 378, in __repr__
    body = [
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 379, in <listcomp>
    f" {name}={value!s},"
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 523, in __repr__
    return torch._tensor_str._str(self, tensor_contents=tensor_contents)
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor_str.py", line 708, in _str
    return _str_intern(self, tensor_contents=tensor_contents)
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor_str.py", line 625, in _str_intern
    tensor_str = _tensor_str(self, indent)
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor_str.py", line 357, in _tensor_str
    formatter = _Formatter(get_summarized_data(self) if summarize else self)
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor_str.py", line 145, in __init__
    nonzero_finite_vals = torch.masked_select(
RuntimeError: Host and device pointer dont match with cudaHostRegister. Please dont use this feature by setting PYTORCH_CUDA_ALLOC_CONF=use_cuda_host_register:False (default)

To execute this test, run the following from the base repo dir:
    PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 python test/test_cuda.py TestCudaOptimsCUDA.test_graph_grad_scaling_foreach_True_fused_False_SGD_cuda_float32

This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
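
The RuntimeError above comes from the opt-in `use_cuda_host_register` allocator path. As a hedged sketch (not an official repro recipe), the test can be rerun locally with that setting pinned to its documented default of `False`, combining the command from the log with the workaround the error message suggests; run it from the repo root on a CUDA machine:

```python
# Sketch: rerun the flaky test with the cudaHostRegister path explicitly
# disabled, as suggested by the RuntimeError above. The test command and the
# PYTORCH_TEST_CUDA_MEM_LEAK_CHECK setting are taken verbatim from the log.
import os
import subprocess

env = dict(
    os.environ,
    PYTORCH_CUDA_ALLOC_CONF="use_cuda_host_register:False",  # workaround named in the error
    PYTORCH_TEST_CUDA_MEM_LEAK_CHECK="1",                    # matches the CI run
)
subprocess.run(
    [
        "python",
        "test/test_cuda.py",
        "TestCudaOptimsCUDA.test_graph_grad_scaling_foreach_True_fused_False_SGD_cuda_float32",
    ],
    env=env,
    check=False,  # let the test report its own result
)
```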

Test file path: test_cuda.py
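
For context, the test exercises gradient scaling under CUDA graph capture. Below is a rough, illustrative sketch of that pattern using the public `torch.cuda.graph` and `torch.cuda.amp.GradScaler` APIs; it is not the test's actual code (that lives in `test/test_cuda.py`, around line 4783 per the traceback), and all shapes and values are made up:

```python
# Illustrative sketch: capture a scaled backward pass in a CUDA graph, replay
# it, then step the optimizer outside the graph (the documented AMP+graphs
# pattern).
import torch

weight = torch.ones(4, device="cuda", requires_grad=True)
static_input = torch.ones(4, device="cuda")
opt = torch.optim.SGD([weight], lr=0.1)
scaler = torch.cuda.amp.GradScaler(init_scale=4.0)

# Warm up on a side stream, as required before graph capture.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        opt.zero_grad(set_to_none=True)
        loss = (weight * static_input).sum()
        scaler.scale(loss).backward()
torch.cuda.current_stream().wait_stream(s)

# Capture the scaled forward + backward; grads are allocated from the
# graph's private memory pool because they were set to None beforehand.
g = torch.cuda.CUDAGraph()
opt.zero_grad(set_to_none=True)
with torch.cuda.graph(g):
    static_loss = (weight * static_input).sum()
    scaler.scale(static_loss).backward()

g.replay()        # recompute scaled grads via the captured graph
scaler.step(opt)  # unscale and step outside the graph
scaler.update()
```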

cc @ptrblck @msaroufim @clee2000

pytorch-bot[bot] commented 3 weeks ago

Hello there! From the DISABLED prefix in this issue title, it looks like you are attempting to disable a test in PyTorch CI. The information I have parsed is below:

* Test name: `test_graph_grad_scaling_foreach_True_fused_False_SGD_cuda_float32 (__main__.TestCudaOptimsCUDA)`
* Platforms for which to skip the test: linux
* Disabled by `pytorch-bot[bot]`

Within ~15 minutes, `test_graph_grad_scaling_foreach_True_fused_False_SGD_cuda_float32 (__main__.TestCudaOptimsCUDA)` will be disabled in PyTorch CI for these platforms: linux. Please verify that your test name looks correct, e.g., `test_cuda_assert_async (__main__.TestCuda)`.

To modify the platforms list, include a line like the one below in the issue body. If no platforms list is specified, the default action disables the test for all platforms.

```
Platforms: case-insensitive, list, of, platforms
```

We currently support the following platforms: asan, dynamo, inductor, linux, mac, macos, rocm, slow, win, windows.

### How to re-enable a test

To re-enable the test globally, close the issue.

To re-enable a test for only a subset of platforms, remove the platforms from the list in the issue body. This may take some time to propagate.

To re-enable a test only for a PR, put `Fixes #136437` in the PR body and rerun the test jobs. Note that if a test is flaky, it may be difficult to tell whether the test is still flaky on the PR.

pytorch-bot[bot] commented 2 weeks ago

Another case of trunk flakiness has been found here. The list of platforms [linux] appears to contain all the recently affected platforms [linux]. Either the change didn't propagate fast enough or the disable bot might be broken.

pytorch-bot[bot] commented 6 days ago

Resolving the issue because the test is no longer flaky after 2450 reruns without any failures, and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive.