DISABLED test_ddp_apply_optim_in_backward_ignored_params (__main__.TestDistBackendWithSpawn) #106361

Open pytorch-bot[bot] opened 1 year ago

pytorch-bot[bot] commented 1 year ago

Platforms: linux

This test was disabled because it is failing in CI. See recent examples and the most recent trunk workflow logs.

Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.

Debugging instructions (after clicking on the recent samples link): DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse. To find relevant log snippets:

  1. Click on the workflow logs linked above
  2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
  3. Grep for test_ddp_apply_optim_in_backward_ignored_params (a minimal sketch of this step is shown after the list)
  4. There should be several runs of the test (flaky tests are rerun in CI) from which you can study the logs.
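
If you prefer to work from a downloaded copy of the raw job log, here is a minimal sketch of that grep step in Python; the `job.log` path and the amount of context printed are illustrative, not part of the CI tooling:

```python
# Print every occurrence of the test name plus a few lines of following context
# from a locally downloaded raw job log (the path is hypothetical).
from pathlib import Path

TEST_NAME = "test_ddp_apply_optim_in_backward_ignored_params"
lines = Path("job.log").read_text(errors="replace").splitlines()

for i, line in enumerate(lines):
    if TEST_NAME in line:
        print("\n".join(lines[i : i + 10]))
        print("-" * 60)
```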

Test file path: distributed/test_distributed_spawn.py

cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu

pytorch-bot[bot] commented 1 year ago

Hello there! From the DISABLED prefix in this issue title, it looks like you are attempting to disable a test in PyTorch CI. The information I have parsed is below:

* Test name: `test_ddp_apply_optim_in_backward_ignored_params (__main__.TestDistBackendWithSpawn)`
* Platforms for which to skip the test: linux
* Disabled by `pytorch-bot[bot]`

Within ~15 minutes, `test_ddp_apply_optim_in_backward_ignored_params (__main__.TestDistBackendWithSpawn)` will be disabled in PyTorch CI for these platforms: linux. Please verify that your test name looks correct, e.g., `test_cuda_assert_async (__main__.TestCuda)`.

To modify the platforms list, please include a line in the issue body, like below. The default action will disable the test for all platforms if no platforms list is specified.

```
Platforms: case-insensitive, list, of, platforms
```

We currently support the following platforms: asan, dynamo, inductor, linux, mac, macos, rocm, slow, win, windows.

pytorch-bot[bot] commented 1 year ago

Another case of trunk flakiness has been found here. Please verify that the issue was opened after this instance and that the platforms list includes all of [linux]; otherwise the disable bot might not be working as expected.

wconstab commented 1 year ago

Stack trace from the last trunk failure log:

  distributed/test_distributed_spawn.py::TestDistBackendWithSpawn::test_ddp_apply_optim_in_backward_ignored_params <- ../../../../opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/distributed/distributed_test.py
  INFO:numba.cuda.cudadrv.driver:init
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR] Could not retrieve traceback for timed out process: 0
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR] Process 1 timed out with traceback: 
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR] 
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR] Thread 0x00007fc1e8fbc700 (most recent call first):
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR]   <no Python frame>
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR] 
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR] Thread 0x00007fc2118ba700 (most recent call first):
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR]   <no Python frame>
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR] 
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR] Current thread 0x00007fc2135fe700 (most recent call first):
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 620 in _event_listener
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR]   File "/opt/conda/envs/py_3.10/lib/python3.10/threading.py", line 953 in run
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR]   File "/opt/conda/envs/py_3.10/lib/python3.10/threading.py", line 1016 in _bootstrap_inner
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR]   File "/opt/conda/envs/py_3.10/lib/python3.10/threading.py", line 973 in _bootstrap
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR] 
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR] Thread 0x00007fc2a3c41080 (most recent call first):
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/distributed/utils.py", line 265 in _verify_param_shape_across_processes
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 797 in __init__
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/distributed/distributed_test.py", line 5111 in test_ddp_apply_optim_in_backward_ignored_params
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 174 in wrapper
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2363 in wrapper
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 543 in wrapper
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 657 in run_test
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR]   File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/distributed/distributed_test.py", line 590 in _run
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR]   File "/opt/conda/envs/py_3.10/lib/python3.10/multiprocessing/process.py", line 108 in run
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR]   File "/opt/conda/envs/py_3.10/lib/python3.10/multiprocessing/process.py", line 314 in _bootstrap
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR]   File "/opt/conda/envs/py_3.10/lib/python3.10/multiprocessing/spawn.py", line 129 in _main
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR]   File "/opt/conda/envs/py_3.10/lib/python3.10/multiprocessing/spawn.py", line 116 in spawn_main
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR]   File "<string>", line 1 in <module>
  [8-01 02:19:37,634] torch.testing._internal.common_distributed: [ERROR] 
  ('RERUN', {'yellow': True}) [305.2724s] [100%]
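
For context on where that last thread is stuck: per the trace, `DistributedDataParallel.__init__` calls `_verify_param_shape_across_processes`, a blocking collective, so if the ranks diverge before or during DDP construction the remaining rank hangs there until the test harness kills it at the timeout. Below is a rough sketch of the kind of setup the test exercises (DDP with optimizer-in-backward plus ignored parameters). It is not the actual test body: it relies on private APIs (`_apply_optimizer_in_backward`, `_set_params_and_buffers_to_ignore_for_model`) that may change, and the model, backend, and port are illustrative:

```python
# A minimal sketch (NOT the actual test body) of the code path the trace points at.
# Assumptions: gloo backend on CPU, a toy two-layer model, an arbitrary free port.
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
from torch.distributed.optim import _apply_optimizer_in_backward
from torch.nn.parallel import DistributedDataParallel as DDP


def _worker(rank: int, world_size: int) -> None:
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 4))

    # Register per-parameter optimizers that step during backward (private API).
    _apply_optimizer_in_backward(
        optimizer_class=torch.optim.SGD,
        params=model.parameters(),
        optimizer_kwargs={"lr": 0.01},
    )

    # Mark the second layer's parameters as ignored by DDP (private API).
    ignored = [f"1.{name}" for name, _ in model[1].named_parameters()]
    DDP._set_params_and_buffers_to_ignore_for_model(model, ignored)

    # DDP construction runs _verify_param_shape_across_processes, a collective
    # that blocks until every rank reaches it -- this is where the timed-out
    # process above was stuck when its peer never arrived.
    ddp_model = DDP(model)

    ddp_model(torch.randn(2, 8)).sum().backward()  # optimizers fire in backward
    dist.destroy_process_group()


if __name__ == "__main__":
    mp.spawn(_worker, args=(2,), nprocs=2, join=True)
```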
pytorch-bot[bot] commented 11 months ago

Resolving the issue because the test is not flaky anymore after 10260 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive.

pytorch-bot[bot] commented 11 months ago

Another case of trunk flakiness has been found here. Reopening the issue to disable the test again. Please verify that the platforms list includes all of [linux].