intel / intel-extension-for-pytorch

A Python package for extending the official PyTorch that makes it easy to obtain extra performance on Intel platforms.

NotImplementedError: Could not run 'aten::empty_strided' #619

Closed: Delaunay closed this issue 3 months ago

Delaunay commented 4 months ago

Describe the bug

Missing aten operator ('aten::empty_strided') on the XPU backend when moving a model to the device for multi-GPU training.

from argparse import Namespace
from timm.models import create_model
from timm import utils

model = create_model(
    "resnet152",
    pretrained=False,
    in_chans=3,
    num_classes=100,
    drop_rate=0.0,
    drop_path_rate=None,
    drop_block_rate=None,
    global_pool=None,
    bn_momentum=None,
    bn_eps=None,
    scriptable=False,
    checkpoint_path='',
)

args = Namespace(device='xpu', distributed=True, world_size=16, rank=0, local_rank=0)
device = utils.init_distributed_device(args)
model.to(device=device)

Exception

davit_large-multi.0 [stderr] Traceback (most recent call last):
davit_large-multi.0 [stderr]   File "/home/sdp/results/venv/torch/bin/voir", line 8, in <module>
davit_large-multi.0 [stderr]     sys.exit(main())
davit_large-multi.0 [stderr]   File "/home/sdp/voir/voir/cli.py", line 124, in main
davit_large-multi.0 [stderr]     ov(sys.argv[1:] if argv is None else argv)
davit_large-multi.0 [stderr]   File "/home/sdp/voir/voir/phase.py", line 334, in __call__
davit_large-multi.0 [stderr]     self._run(*args, **kwargs)
davit_large-multi.0 [stderr]   File "/home/sdp/voir/voir/overseer.py", line 242, in _run
davit_large-multi.0 [stderr]     set_value(func())
davit_large-multi.0 [stderr]   File "/home/sdp/voir/voir/scriptutils.py", line 37, in <lambda>
davit_large-multi.0 [stderr]     return lambda: exec(mainsection, glb, glb)
davit_large-multi.0 [stderr]   File "/home/sdp/milabench/benchmarks/timm/pytorch-image-models/train.py", line 1179, in <module>
davit_large-multi.0 [stderr]     main()
davit_large-multi.0 [stderr]   File "/home/sdp/milabench/benchmarks/timm/pytorch-image-models/train.py", line 518, in main
davit_large-multi.0 [stderr]     model.to(device=device)
davit_large-multi.0 [stderr]   File "/home/sdp/results/venv/torch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1160, in to
davit_large-multi.0 [stderr]     return self._apply(convert)
davit_large-multi.0 [stderr]   File "/home/sdp/results/venv/torch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 810, in _apply
davit_large-multi.0 [stderr]     module._apply(fn)
davit_large-multi.0 [stderr]   File "/home/sdp/results/venv/torch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 810, in _apply
davit_large-multi.0 [stderr]     module._apply(fn)
davit_large-multi.0 [stderr]   File "/home/sdp/results/venv/torch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 833, in _apply
davit_large-multi.0 [stderr]     param_applied = fn(param)
davit_large-multi.0 [stderr]   File "/home/sdp/results/venv/torch/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1158, in convert
davit_large-multi.0 [stderr]     return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
davit_large-multi.0 [stderr] NotImplementedError: Could not run 'aten::empty_strided' with arguments from the 'XPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty_strided' is only available for these backends: [CPU, Meta, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

Versions

Unfortunately, I no longer have access to the machine, so I cannot collect the version information.
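
For reference, here is a minimal sketch of the check that would normally confirm the XPU backend is registered before calling model.to, assuming an IPEX XPU build where importing intel_extension_for_pytorch is what registers the backend and its kernels with PyTorch's dispatcher (exact behavior may differ across torch/ipex versions):

import torch
import intel_extension_for_pytorch  # noqa: F401  -- importing IPEX registers the XPU backend

# If registration succeeded, at least one XPU device should be visible.
print(torch.xpu.is_available())
print(torch.xpu.device_count())

# Moving a tensor to the device exercises the same kind of XPU dispatch
# that fails in the traceback above.
x = torch.empty(2, 3).to("xpu")
print(x.device)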

Delaunay commented 4 months ago

It seems the issue might be fixed as per https://github.com/intel/intel-extension-for-pytorch/issues/352

feng-intel commented 4 months ago

Could you try our latest XPU release to check whether this still reproduces, update here with the result, and share your platform info and torch/ipex versions? Thanks.
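
For example, something like the following would give the relevant details (a sketch assuming torch and intel_extension_for_pytorch are importable on the machine; running python -m torch.utils.collect_env also prints a useful platform summary):

import torch
import intel_extension_for_pytorch as ipex

# Versions of the two packages involved.
print("torch:", torch.__version__)
print("ipex :", ipex.__version__)

# Basic XPU device information, if the backend is available.
if torch.xpu.is_available():
    for i in range(torch.xpu.device_count()):
        print(torch.xpu.get_device_name(i))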

feng-intel commented 3 months ago

No response for 2 weeks. Closing the issue.