pytorch / pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration
https://pytorch.org

DISABLED test_retrace_export_while_loop_simple_cpu_float32 (__main__.TestHOPCPU) #131768

Closed: pytorch-bot[bot] closed this issue 1 month ago

pytorch-bot[bot] commented 2 months ago

Platforms: linux

This test was disabled because it is failing in CI. See recent examples and the most recent trunk workflow logs.

Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.

Debugging instructions (after clicking on the recent samples link): DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will be green even while the test is failing, but the failures are harder to find in the logs. To find relevant log snippets:

  1. Click on the workflow logs linked above
  2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
  3. Grep for test_retrace_export_while_loop_simple_cpu_float32 (a minimal log-search sketch follows this list)
  4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
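
For step 3, here is a minimal log-search sketch in Python, assuming the raw "Test" step log has already been downloaded locally (the file name `ci_test_step.log` is a hypothetical placeholder):

```python
# Minimal sketch: scan a locally downloaded CI "Test" step log for the
# flaky test's name. The log file path below is a placeholder.
test_name = "test_retrace_export_while_loop_simple_cpu_float32"

with open("ci_test_step.log", encoding="utf-8", errors="replace") as log:
    for lineno, line in enumerate(log, start=1):
        if test_name in line:
            # Each match corresponds to one rerun of the flaky test;
            # read the surrounding lines for the actual failure output.
            print(f"{lineno}: {line.rstrip()}")
```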
Sample error message (truncated for length):

```
...
        with disable_above:
            # Wrap inputs into functional wrappers
            f_args = pytree.tree_map(to_fun, args)
            f_tokens = pytree.tree_map(to_fun, tokens)
            ...
            # Run the joint
>           f_outs = fn(*f_args)

aot_config = AOTConfig(fw_compiler=None, bw_compiler=None, partition_fn=None,
                       decompositions={}, num_params_buffers=0, aot_id=10,
                       keep_inference_input_mutations=False, is_export=True,
                       no_tangents=True, dynamic_shapes=True, ...,
                       enable_log=True, pre_dispatch=True, cache_key=None)
args   = (FakeTensor(..., size=(), dtype=torch.int64), FakeTensor(..., size=(2, 3, 4)))
f_args = (FunctionalTensor(_to_functional_tensor(FakeTensor(..., size=(), dtype=torch.int64))),
          FunctionalTensor(_to_functional_tensor(FakeTensor(..., size=(2, 3, 4)))))
trace_joint = False

/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py:390:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    @wraps(fn)
    def inner_fn(*args):
>       outs = fn(*args)

/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py:74:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def flat_fn(*flat_args):
        # The input are flattened tensor args. Prepare the args in the
        # order that original function expects. Add static args as well.
        # They will appear as tensor constants in the traced graph.
        nonlocal out_spec
        args, kwargs = pytree.tree_unflatten(flat_args, tensor_args_spec)
>       tree_out = fn(*args, **kwargs)

/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_functorch/_aot_autograd/utils.py:178:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def functional_call(*args, **kwargs):
        with stateless._reparametrize_module(
            mod, pytree.tree_unflatten(args[:params_len], params_spec)
        ):
            ...
            with torch.autograd.detect_anomaly(check_nan=False):
                detect_fake_mode().epoch += 1
>               out = PropagateUnbackedSymInts(mod).run(
                    *args[params_len:], **kwargs
                )

mod = GraphModule(
  (cond_fn_0): GraphModule()
  (body_fn_0): GraphModule()
)

/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_functorch/_aot_autograd/traced_function_transforms.py:765:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    # torch.fx.Interpreter.run -> run_node (docstring and locals elided)
    ...
            except Exception as e:
                if self.extra_traceback:
                    msg = f"While executing {node.format_node()}"
                    msg = f'{e.args[0]}\n\n{msg}' if e.args else str(msg)
                    msg += f"\nOriginal traceback:\n{node.stack_trace}"
                    e.args = (msg,) + e.args[1:]
                    if isinstance(e, KeyError):
>                       raise RuntimeError(*e.args) from e
E   RuntimeError: _set_grad_enabled
E
E   While executing %while_loop : [num_users=2] = call_function[target=torch.ops.higher_order.while_loop](args = (%cond_fn_0, %body_fn_0, (%l_args_0_, %l_args_1_), ()), kwargs = {})
E   Original traceback:
E     File "export/test_hop.py", line 108, in forward
E       return op.op(*args)
E     File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/hop_db.py", line 148, in simple_while_loop
E       return torch._higher_order_ops.while_loop(cond_fn, body_fn, (iter_t, x))
E     File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/_higher_order_ops/while_loop.py", line 123, in while_loop
E       return while_loop_op(cond_fn, body_fn, carried_inputs, additional_inputs)

/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/fx/interpreter.py:155: RuntimeError

The above exception was the direct cause of the following exception:

self = <__main__.TestHOPCPU testMethod=test_retrace_export_while_loop_simple_cpu_float32>
(intermediate test-harness frames in common_utils.py and common_device_type.py,
plus the full while_loop OpInfo locals, elided)
...
>           raise e_tracked from e
E   Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(), device="cpu", dtype=torch.int64], args=TensorList[Tensor[size=(2, 3, 4), device="cpu", dtype=torch.float32]], kwargs={}, broadcasts_input=False, name='')
E
E   To execute this test, run the following from the base repo dir:
E       PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/export/test_hop.py -k TestHOPCPU.test_retrace_export_while_loop_simple_cpu_float32
E
E   This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0

/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_device_type.py:1139: Exception
```
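
For orientation, the `hop_db.py` frames in the traceback show the op under test is `simple_while_loop`, called with a scalar int64 counter and a `(2, 3, 4)` float32 tensor. The sketch below is a hedged approximation of what the retrace-export test exercises; the module wrapper, the body/cond functions, and the export calls are assumptions reconstructed from the traceback, not the test's exact code:

```python
import torch
from torch._higher_order_ops import while_loop

class SimpleWhileLoopModule(torch.nn.Module):
    # Hypothetical wrapper approximating the while_loop OpInfo sample
    # referenced in hop_db.py (simple_while_loop).
    def forward(self, iter_t, x):
        def cond_fn(iter_t, x):
            # Loop while the scalar iteration counter is positive.
            return iter_t > 0

        def body_fn(iter_t, x):
            # Must return new carried values with the same structure/dtypes.
            return iter_t - 1, x.cos()

        return while_loop(cond_fn, body_fn, (iter_t, x))

iter_t = torch.tensor(3, dtype=torch.int64)  # size=(), int64 -- matches the failing sample
x = torch.rand(2, 3, 4)                      # size=(2, 3, 4), float32

# First export succeeds; per the traceback, the RuntimeError fires while the
# FX interpreter replays the while_loop node during the second ("retrace"),
# pre-dispatch export of the already-exported module.
ep = torch.export.export(SimpleWhileLoopModule(), (iter_t, x))
ep2 = torch.export.export(ep.module(), (iter_t, x))
```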

Test file path: export/test_hop.py

cc @clee2000 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @penguinwu

pytorch-bot[bot] commented 2 months ago
Hello there! From the DISABLED prefix in this issue title, it looks like you are attempting to disable a test in PyTorch CI. The information I have parsed is below:

* Test name: `test_retrace_export_while_loop_simple_cpu_float32 (__main__.TestHOPCPU)`
* Platforms for which to skip the test: linux
* Disabled by `pytorch-bot[bot]`

Within ~15 minutes, `test_retrace_export_while_loop_simple_cpu_float32 (__main__.TestHOPCPU)` will be disabled in PyTorch CI for these platforms: linux. Please verify that your test name looks correct, e.g., `test_cuda_assert_async (__main__.TestCuda)`.

To modify the platforms list, include a line like the one below in the issue body. If no platforms list is specified, the default action disables the test for all platforms.

```
Platforms: case-insensitive, list, of, platforms
```

We currently support the following platforms: asan, dynamo, inductor, linux, mac, macos, rocm, slow, win, windows.

### How to re-enable a test

To re-enable the test globally, close the issue. To re-enable a test for only a subset of platforms, remove the platforms from the list in the issue body. This may take some time to propagate. To re-enable a test only for a PR, put `Fixes #131768` in the PR body and rerun the test jobs. Note that if a test is flaky, it may be difficult to tell whether it is still flaky on the PR.
pytorch-bot[bot] commented 2 months ago

Another case of trunk flakiness has been found here. The list of platforms [linux] appears to contain all the recently affected platforms [linux]. Either the change didn't propagate fast enough or the disable bot might be broken.

pytorch-bot[bot] commented 2 months ago

Another case of trunk flakiness has been found here. The list of platforms [linux] appears to contain all the recently affected platforms [linux]. Either the change didn't propagate fast enough or the disable bot might be broken.

pytorch-bot[bot] commented 1 month ago

Resolving the issue because the test is no longer flaky after 1005 reruns without any failures, and the issue hasn't been updated in 14 days. Please reopen the issue to re-disable the test if you think this is a false positive.