intel / torch-xpu-ops


[E2E] HF amp_fp16 and amp_bf16 training accuracy: 2 new models crash #699

Open chuanqi129 opened 1 month ago

chuanqi129 commented 1 month ago

🐛 Describe the bug

According to the latest weekly tests, 2 models crash in the AMP_FP16 and AMP_BF16 training accuracy tests. Refer to https://github.com/intel/torch-xpu-ops/actions/runs/10278413083/job/28442046973

Model List:

Failures log:

xpu  train BartForConditionalGeneration       
ERROR:common:
Traceback (most recent call last):
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/common.py", line 2846, in check_accuracy
    new_result = optimized_model_iter_fn(model_copy, example_inputs)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 464, in _fn
    return fn(*args, **kwargs)
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/common.py", line 2550, in run_n_iterations
    self.model_iter_fn(mod, inputs, collect_outputs=False)
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/huggingface.py", line 521, in forward_and_backward_pass
    cloned_inputs = clone_inputs(inputs)
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/huggingface.py", line 522, in torch_dynamo_resume_in_forward_and_backward_pass_at_521
    self.optimizer_zero_grad(mod)
  File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/huggingface.py", line 524, in torch_dynamo_resume_in_forward_and_backward_pass_at_522
    pred = mod(**cloned_inputs)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/transformers/models/bart/modeling_bart.py", line 1731, in forward
    outputs = self.model(
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/transformers/models/bart/modeling_bart.py", line 1599, in forward
    encoder_outputs = self.encoder(
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/transformers/models/bart/modeling_bart.py", line 1207, in forward
    layer_outputs = encoder_layer(
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/transformers/models/bart/modeling_bart.py", line 643, in forward
    def forward(
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 631, in _fn
    return fn(*args, **kwargs)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 1048, in forward
    return compiled_fn(full_args)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 316, in runtime_wrapper
    all_outs = call_func_at_runtime_with_args(
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 121, in call_func_at_runtime_with_args
    out = normalize_as_list(f(args))
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 95, in g
    return f(*args)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/autograd/function.py", line 574, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1503, in forward
    fw_outs = call_func_at_runtime_with_args(
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 121, in call_func_at_runtime_with_args
    out = normalize_as_list(f(args))
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 500, in wrapper
    return compiled_fn(runtime_args)
  File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 1412, in __call__
    return self.current_callable(inputs)
  File "/tmp/torchinductor_sdp/fe/cfe4hqo23icogcgdufcrcpz5dmijb5oev2xdml6mhkz5adosc4wo.py", line 679, in call
    extern_kernels.addmm(buf20, reinterpret_tensor(buf18, (1024, 1024), (1024, 1), 0), reinterpret_tensor(buf19, (1024, 1024), (1, 1024), 0), alpha=1, beta=1, out=buf21)
RuntimeError: expected mat1 and mat2 to have the same dtype, but got: float != c10::BFloat16
TorchDynamo optimized model failed to run because of following error
fail_to_run
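
The root failure is a dtype mismatch inside the generated Inductor kernel: addmm requires mat1 and mat2 to share a dtype, but here a float32 activation meets a bfloat16 weight. A minimal sketch of the same failing call pattern in isolation (not the actual generated code; shapes and tensor names are illustrative):

```python
# Minimal sketch of the failing call pattern seen in the log above.
# addmm requires mat1 and mat2 to have the same dtype, so an activation left
# in float32 multiplied against a bfloat16 weight raises the same RuntimeError.
import torch

bias = torch.zeros(1024, dtype=torch.bfloat16)
mat1 = torch.randn(1024, 1024, dtype=torch.float32)   # activation left in fp32
mat2 = torch.randn(1024, 1024, dtype=torch.bfloat16)  # weight cast by autocast

try:
    torch.addmm(bias, mat1, mat2, alpha=1, beta=1)
except RuntimeError as e:
    print(e)  # expected mat1 and mat2 to have the same dtype, but got: float != c10::BFloat16
```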

Versions

Failing on-demand test on 2024-08-07, see: https://github.com/intel/torch-xpu-ops/actions/runs/10278413083

Torch-xpu-ops: fb365ac (main)
PyTorch: 527f104 (HEAD)
Triton: 1b2f158
Transformers: 243e186
Timm: 730b907
Torchbench: 03cde49
Torchvision: d23a6e1
Torchaudio: b3f6f51
Device: pytorch-06
OS: Ubuntu 22.04.2 LTS
GCC: 11
Python: 3.10
Driver (DKMS): 1.23.10.49.231129.50
Bundle (DPCPP): 2024.1.3.20240604
Inputs: huggingface/amp_bf16,amp_fp16/training/accuracy
retonym commented 1 month ago

This issue is related to incorrect datatype inference in the SDP kernel. Will investigate further.

retonym commented 1 month ago

The issue is caused by a regression on PyTorch main. It also occurs on CUDA when forcing the math SDP backend.
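
For reference, a hedged sketch of the reproduction pattern this comment describes: force the math SDP backend and run a compiled attention-plus-projection step under autocast. The module, shapes, and function names below are illustrative, not the benchmark harness; whether it actually crashes depends on the dtype the math SDP path infers.

```python
# Sketch: force the math SDP backend and run attention under autocast, then
# feed the result into a linear projection. If SDPA's output dtype is inferred
# incorrectly (float32 instead of bfloat16), the addmm inside the linear layer
# raises the same "expected mat1 and mat2 to have the same dtype" error.
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

device = "cuda"  # per the comment, the regression also reproduces on CUDA
proj = torch.nn.Linear(64, 64, device=device)
q = k = v = torch.randn(2, 8, 128, 64, device=device)

def attn_then_proj(q, k, v):
    out = F.scaled_dot_product_attention(q, k, v)
    return proj(out)

compiled = torch.compile(attn_then_proj)

with sdpa_kernel(SDPBackend.MATH), torch.autocast(device_type=device, dtype=torch.bfloat16):
    y = compiled(q, k, v)
    print(y.dtype)  # expected torch.bfloat16 when dtype inference is correct
```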

chuanqi129 commented 3 weeks ago

@retonym will create a PyTorch issue to track it.

retonym commented 3 weeks ago

PyTorch issue for this crash: https://github.com/pytorch/pytorch/issues/133974