Open mengfei25 opened 3 months ago
Low priority: this model is not included in the Meta PyTorch dashboard.
On A100, amp and fp32 pass; bf16 and fp16 fail.
The latest failure is caused by the fbgemm component. @weishi-deng, is this model issue related to the model script?
loading model: 0it [00:00, ?it/s]
xpu train torchrec_dlrm
Traceback (most recent call last):
File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/common.py", line 4813, in run
) = runner.load_model(
File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/torchbench.py", line 243, in load_model
module = importlib.import_module(c)
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/sdp/actions-runner/_work/torch-xpu-ops/benchmark/torchbenchmark/canary_models/torchrec_dlrm/__init__.py", line 7, in <module>
from .data.dlrm_dataloader import get_dataloader
File "/home/sdp/actions-runner/_work/torch-xpu-ops/benchmark/torchbenchmark/canary_models/torchrec_dlrm/data/dlrm_dataloader.py", line 13, in <module>
from torchrec.datasets.criteo import (
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torchrec/__init__.py", line 10, in <module>
import torchrec.distributed # noqa
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torchrec/distributed/__init__.py", line 39, in <module>
from torchrec.distributed.train_pipeline import ( # noqa
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torchrec/distributed/train_pipeline/__init__.py", line 11, in <module>
from torchrec.distributed.train_pipeline.train_pipelines import ( # noqa
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torchrec/distributed/train_pipeline/train_pipelines.py", line 78, in <module>
torch.ops.import_module("fbgemm_gpu.sparse_ops")
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_ops.py", line 1329, in import_module
importlib.import_module(module)
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/fbgemm_gpu/sparse_ops.py", line 1167, in <module>
_setup()
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/fbgemm_gpu/sparse_ops.py", line 1039, in _setup
impl_autograd(
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/fbgemm_gpu/sparse_ops.py", line 1036, in impl_autograd
torch.library.register_autograd(op_name, fn, setup_context=setup_context)
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/library.py", line 937, in register_autograd
op = torch._library.utils.lookup_op(qualname)
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_library/utils.py", line 77, in lookup_op
packet = getattr(ns, name)
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_ops.py", line 1232, in __getattr__
raise AttributeError(
AttributeError: '_OpNamespace' 'fbgemm' object has no attribute 'permute_2D_sparse_data'
eager_fail_to_run
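For reference, a minimal diagnostic sketch (assuming the same e2e_ci Python environment) that checks whether fbgemm_gpu imports cleanly and whether the operator reported as missing is actually registered under the torch.ops.fbgemm namespace:

```python
# Minimal check, assuming the failing e2e_ci environment is active.
import torch

try:
    import fbgemm_gpu  # noqa: F401  # importing the extension registers the fbgemm ops
    print("fbgemm_gpu imported OK")
except Exception as exc:
    print(f"fbgemm_gpu import failed: {exc}")

# This mirrors the getattr lookup that raised the AttributeError above.
print("permute_2D_sparse_data registered:",
      hasattr(torch.ops.fbgemm, "permute_2D_sparse_data"))
```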
@retonym This looks similar to this environment issue: https://github.com/pytorch/torchrec/issues/524
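To confirm whether this is the same environment/version mismatch, a quick check of the installed wheels may help (a sketch; the distribution names torchrec and fbgemm-gpu are assumed to match the PyPI packages installed in this environment):

```python
# Print installed versions of the packages involved, since a torchrec /
# fbgemm-gpu build mismatch can leave fbgemm ops unregistered.
import importlib.metadata as md

for dist in ("torch", "torchrec", "fbgemm-gpu"):
    try:
        print(dist, md.version(dist))
    except md.PackageNotFoundError:
        print(dist, "not installed")
```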
🐛 Describe the bug
torchbench_amp_bf16_training xpu train torchrec_dlrm
ERROR:common:
Traceback (most recent call last):
File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/common.py", line 2846, in check_accuracy
new_result = optimized_model_iter_fn(model_copy, example_inputs)
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 464, in _fn
return fn(*args, **kwargs)
File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/common.py", line 2550, in run_n_iterations
self.model_iter_fn(mod, inputs, collect_outputs=False)
File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/torchbench.py", line 442, in forward_and_backward_pass
cloned_inputs = clone_inputs(inputs)
File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/torchbench.py", line 443, in torch_dynamo_resume_in_forward_and_backward_pass_at_442
self.optimizer_zero_grad(mod)
File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/torchbench.py", line 449, in torch_dynamo_resume_in_forward_and_backward_pass_at_443
loss = self.compute_loss(pred)
File "/home/sdp/actions-runner/_work/torch-xpu-ops/pytorch/benchmarks/dynamo/torchbench.py", line 450, in torch_dynamo_resume_in_forward_and_backward_pass_at_449
self.grad_scaler.scale(loss).backward()
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_tensor.py", line 522, in backward
torch.autograd.backward(
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/autograd/__init__.py", line 346, in backward
_engine_run_backward(
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/autograd/graph.py", line 812, in _engine_run_backward
return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/autograd/function.py", line 306, in apply
return user_fn(self, *args)
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 2010, in backward
out = call_compiled_backward()
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1949, in call_compiled_backward
out = call_func_at_runtime_with_args(
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/utils.py", line 121, in call_func_at_runtime_with_args
out = normalize_as_list(f(args))
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 631, in _fn
return fn(*args, **kwargs)
File "/home/sdp/miniforge3/envs/e2e_ci/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 1412, in __call__
return self.current_callable(inputs)
File "/tmp/torchinductor_sdp/a6/ca6sk7xahbchwklbcjwffotjtdv2ybs6rhkftxaupkguciso5cel.py", line 1697, in call
assert_size_stride(getitem_2, (5, ), (1, ))
AssertionError: expected size 4==5, stride 1==1 at dim=0
TorchDynamo optimized model failed to run because of following error
fail_to_run
loading model: 0it [00:00, ?it/s]
loading model: 0it [00:07, ?it/s]
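For context, the failing line comes from the Inductor-generated wrapper, where assert_size_stride compares a runtime tensor against the size and stride recorded at compile time. A minimal sketch of that check follows; the shapes are illustrative and not taken from torchrec_dlrm:

```python
# Illustrative only: the kind of size/stride guard that fails in the generated wrapper.
import torch

# Same alias that Inductor codegen uses in its generated output files.
assert_size_stride = torch._C._dynamo.guards.assert_size_stride

t = torch.empty(4)                 # runtime tensor has size (4,)
assert_size_stride(t, (5,), (1,))  # compiled code expected (5,) -> AssertionError
```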
Versions
torch-xpu-ops: https://github.com/intel/torch-xpu-ops/commit/1d70431c072db889d9a47ea4956049fe340a426d
pytorch: d224857b3af5c9d5a3c7a48401475c09d90db296
device: PVC 1100
bundle: 0.5.3
driver: 803.61