
DISABLED test_scaled_dot_product_attention_4D_input_dim_no_attn_mask_dropout_p_0_0_cuda (__main__.TestTransformersCUDA) #129853

Closed: pytorch-bot[bot] closed this issue 1 month ago

pytorch-bot[bot] commented 5 months ago

Platforms: linux

This test was disabled because it is failing in CI. See recent examples and the most recent trunk workflow logs.

Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 6 failures and 2 successes.

Debugging instructions (after clicking on the recent samples link): DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green, but the logs will be harder to parse. To find relevant log snippets:

  1. Click on the workflow logs linked above
  2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
  3. Grep for test_scaled_dot_product_attention_4D_input_dim_no_attn_mask_dropout_p_0_0_cuda
  4. There should be several runs of the test (flaky tests are rerun in CI) whose logs you can study.
Sample error message:

```
Traceback (most recent call last):
  File "/var/lib/jenkins/workspace/test/test_transformers.py", line 1097, in test_scaled_dot_product_attention
    assert gradcheck(lambda *args, **kwargs:
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4485, in gradcheck
    return torch.autograd.gradcheck(fn, inputs, **kwargs)
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 2053, in gradcheck
    return _gradcheck_helper(**args)
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 2082, in _gradcheck_helper
    _gradcheck_real_imag(
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 1492, in _gradcheck_real_imag
    gradcheck_fn(
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 1922, in _fast_gradcheck
    analytical_vJu = _get_analytical_vJu_backward_mode(
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 805, in _get_analytical_vJu_backward_mode
    all_vJ = _check_analytical_jacobian_attributes(
  File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/gradcheck.py", line 791, in _check_analytical_jacobian_attributes
    raise GradcheckError(
torch.autograd.gradcheck.GradcheckError: Backward is not reentrant, i.e., running backward with same input and grad_output multiple times gives different values, although analytical gradient matches numerical gradient. The tolerance for nondeterminism was 0.0.

NOTE: If your op relies on non-deterministic operations, i.e., it is listed here:
https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html
this failure might be expected.

If you are adding a new operator, please file an issue and then use one of the workarounds.
The workaround depends on how your test invokes gradcheck/gradgradcheck. If the test
- manually invokes gradcheck/gradgradcheck, then call gradcheck/gradgradcheck with `nondet_tol=` as a keyword argument.
- is OpInfo-based (e.g., in test_ops_gradients.py), then modify the OpInfo for the test to have `gradcheck_nondet_tol=`.
- is a Module test (e.g., in common_nn.py), then modify the corresponding module_test entry to have `gradcheck_nondet_tol=`.

To execute this test, run the following from the base repo dir:

    python test/test_transformers.py -k TestTransformersCUDA.test_scaled_dot_product_attention_4D_input_dim_no_attn_mask_dropout_p_0_0_cuda

This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
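Since the traceback shows the test invoking gradcheck directly, the first workaround from the error message (passing `nondet_tol=` as a keyword argument) is the relevant one. Below is a minimal, hypothetical sketch of that workaround for `scaled_dot_product_attention`; the shapes, dtype, tolerance value, and backend selection used by the actual test in test_transformers.py may differ.

```
import torch
import torch.nn.functional as F
from torch.autograd import gradcheck

# Hypothetical repro sketch, not the actual test code: 4D inputs
# (batch, heads, seq_len, head_dim) with dropout_p=0.0 and no attention
# mask, mirroring the test name. Shapes and tolerance are assumptions.
device = "cuda" if torch.cuda.is_available() else "cpu"
shape = (2, 2, 3, 4)
q, k, v = (
    torch.randn(shape, device=device, dtype=torch.float64, requires_grad=True)
    for _ in range(3)
)

# nondet_tol > 0 lets gradcheck tolerate small run-to-run differences in the
# backward pass instead of raising "Backward is not reentrant" as seen above.
gradcheck(
    lambda q, k, v: F.scaled_dot_product_attention(q, k, v, dropout_p=0.0),
    (q, k, v),
    nondet_tol=1e-5,  # assumed value; choose a tolerance appropriate for the op
)
```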

Test file path: test_transformers.py

cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @clee2000

pytorch-bot[bot] commented 5 months ago
Hello there! From the DISABLED prefix in this issue title, it looks like you are attempting to disable a test in PyTorch CI. The information I have parsed is below:

* Test name: `test_scaled_dot_product_attention_4D_input_dim_no_attn_mask_dropout_p_0_0_cuda (__main__.TestTransformersCUDA)`
* Platforms for which to skip the test: linux
* Disabled by `pytorch-bot[bot]`

Within ~15 minutes, `test_scaled_dot_product_attention_4D_input_dim_no_attn_mask_dropout_p_0_0_cuda (__main__.TestTransformersCUDA)` will be disabled in PyTorch CI for these platforms: linux. Please verify that your test name looks correct, e.g., `test_cuda_assert_async (__main__.TestCuda)`.

To modify the platforms list, please include a line in the issue body, like below. The default action will disable the test for all platforms if no platforms list is specified.

```
Platforms: case-insensitive, list, of, platforms
```

We currently support the following platforms: asan, dynamo, inductor, linux, mac, macos, rocm, slow, win, windows.
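For example, a hypothetical issue body that restricts the disable to the Linux and ROCm jobs (both of which are in the supported-platforms list above) would include a line such as:

```
Platforms: linux, rocm
```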
pytorch-bot[bot] commented 4 months ago

Resolving the issue because the test is no longer flaky after 2850 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive.

pytorch-bot[bot] commented 4 months ago

Another case of trunk flakiness has been found here. Reopening issue. The list of platforms [linux] appears to contain all the recently affected platforms [linux].

pytorch-bot[bot] commented 4 months ago

Another case of trunk flakiness has been found here. The list of platforms [linux] appears to contain all the recently affected platforms [linux]. Either the change didn't propagate fast enough or the disable bot might be broken.

pytorch-bot[bot] commented 4 months ago

Another case of trunk flakiness has been found here. The list of platforms [linux] appears to contain all the recently affected platforms [linux]. Either the change didn't propagate fast enough or the disable bot might be broken.

pytorch-bot[bot] commented 4 months ago

Another case of trunk flakiness has been found here. The list of platforms [linux] appears to contain all the recently affected platforms [linux]. Either the change didn't propagate fast enough or the disable bot might be broken.

pytorch-bot[bot] commented 4 months ago

Resolving the issue because the test is no longer flaky after 3000 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive.

pytorch-bot[bot] commented 4 months ago

Another case of trunk flakiness has been found here. Reopening issue. The list of platforms [linux] appears to contain all the recently affected platforms [linux].

pytorch-bot[bot] commented 1 month ago

Resolving the issue because the test is no longer flaky after 2550 reruns without any failures and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive.