pytorch / executorch

On-device AI across mobile, embedded and edge for PyTorch
https://pytorch.org/executorch/

Error when "call_function:aten.copy.default" cannot be lowered to the XNNPACK delegate #4475

Open TaylorYangX opened 1 month ago

TaylorYangX commented 1 month ago

🐛 Describe the bug

When I use ExecuTorch to lower my transformer-based model to the XNNPACK backend, I hit the following error:

   INFO:executorch.backends.xnnpack.partition.xnnpack_partitioner:Found 35 subgraphs to be partitioned.
   Traceback (most recent call last):
     File "/home/xnntest/secondtest.py", line 33, in <module>
       edge_manager = edge_manager.to_backend(XnnpackPartitioner())
     File "/home/xnntest/executorch/exir/program/_program.py", line 1166, in to_backend
       new_edge_programs[name] = to_backend(program, partitioner)
     File "/home/miniconda3/envs/xnn/lib/python3.10/functools.py", line 878, in wrapper
       return dispatch(args[0].__class__)(*args, **kw)
     File "/home/xnntest/executorch/exir/backend/backend_api.py", line 384, in _
       tagged_graph_module = _partition_and_lower(
     File "/home/xnntest/executorch/exir/backend/backend_api.py", line 299, in _partition_and_lower
       partitioned_module = _partition_and_lower_one_graph_module(
     File "/home/xnntest/executorch/exir/backend/backend_api.py", line 230, in _partition_and_lower_one_graph_module
       lowered_submodule = to_backend(
     File "/home/miniconda3/envs/xnn/lib/python3.10/functools.py", line 878, in wrapper
       return dispatch(args[0].__class__)(*args, **kw)
     File "/home/xnntest/executorch/exir/backend/backend_api.py", line 114, in _
       preprocess_result: PreprocessResult = cls.preprocess(
     File "/home/xnntest/executorch/backends/xnnpack/xnnpack_preprocess.py", line 155, in preprocess
       raise RuntimeError(
   RuntimeError: For aten_copy_default, call_function:aten.copy.default is not supported in XNNPACK Delegate

Below is my source code for the conversion.

I am converting this model (https://github.com/thuml/Anomaly-Transformer/tree/main/model) to ExecuTorch. I have changed the .cuda() calls to .cpu(), so CUDA should not be the problem.

   import torch
   from torch.export import export
   from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
   from torch.ao.quantization.quantizer.xnnpack_quantizer import (
       get_symmetric_quantization_config,
       XNNPACKQuantizer,
   )
   from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
   from executorch.exir import to_edge, EdgeCompileConfig
   from model.AnomalyTransformer import AnomalyTransformer

   model = AnomalyTransformer(win_size=100, enc_in=51, c_out=51, e_layers=3)
   model.load_state_dict(torch.load('2SWaT_checkpoint.pth', weights_only=True))#map_location=torch.device('cpu')
   model.eval()
   sample_inputs = (torch.randn(2,100,51),)
   def quantize(model, example_inputs):
       """This is the official recommended flow for quantization in pytorch 2.0 export"""
       print(f"Original model: {model}")
       quantizer = XNNPACKQuantizer()
       operator_config = get_symmetric_quantization_config(is_per_channel=False)
       quantizer.set_global(operator_config)
       m = prepare_pt2e(model, quantizer)
       m(*example_inputs)
       m = convert_pt2e(m)
       print(f"Quantized model: {m}")
       return m

   quan_model = quantize(model, sample_inputs)

   # Continued from earlier...
   edge = to_edge(export(quan_model, sample_inputs), compile_config=EdgeCompileConfig(_check_ir_validity=False))

   edge = edge.to_backend(XnnpackPartitioner())

   exec_prog = edge.to_executorch()

   with open("20swat.pte", "wb") as file:
    exec_prog.write_to_file(file)

Could you please help me?

Versions

PyTorch version: 2.4.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Red Hat Enterprise Linux 9.2 (Plow) (x86_64)
GCC version: (GCC) 13.1.0
Clang version: Could not collect
CMake version: version 3.30.1
Libc version: glibc-2.34

Python version: 3.10.0 | packaged by conda-forge | (default, Nov 20 2021, 02:24:10) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.14.0-284.11.1.el9_2.x86_64-x86_64-with-glibc2.34
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 6
BogoMIPS: 4800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
L1d cache: 1.5 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 40 MiB (32 instances)
L3 cache: 48 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] executorch==0.3.0a0+7d77d78
[pip3] numpy==2.0.1
[pip3] torch==2.4.0+cpu
[pip3] torchaudio==2.4.0+cpu
[pip3] torchsr==1.0.4
[pip3] torchvision==0.19.0+cpu
[conda] executorch 0.3.0a0+7d77d78 pypi_0 pypi
[conda] numpy 2.0.1 pypi_0 pypi
[conda] torch 2.4.0+cpu pypi_0 pypi
[conda] torchaudio 2.4.0+cpu pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchvision 0.19.0+cpu pypi_0 pypi

sqxsss commented 1 month ago

Hi, I also have a similar problem when I try to use .to_backend(XnnpackPartitioner()) to lower my model. It actually works if I don't call the to_backend API. I'm sorry if my question looks too naive since I'm new to executorch! Hope you can help me! Thank you so much!
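
For reference, this is roughly the flow I mean (just a sketch; `my_model` and `sample_inputs` stand in for my actual model and example inputs). Without the to_backend call, nothing is delegated and the program runs on the portable operators:

   from executorch.exir import to_edge
   from torch.export import export

   # my_model / sample_inputs are placeholders for the actual model and inputs.
   edge = to_edge(export(my_model, sample_inputs))

   # Skipping edge.to_backend(XnnpackPartitioner()) here, so nothing is
   # delegated and the program falls back to the portable operators.
   exec_prog = edge.to_executorch()

   with open("model_portable.pte", "wb") as f:
       exec_prog.write_to_file(f)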

mcr229 commented 1 month ago

Hi, yes, it looks like there is an issue with the partitioner at the moment. Does your model contain SDPA or upsample_bilinear2d? I suspect this is likely a problem with us trying to recompose these ops.

I have a new partitioner in the works that should hopefully resolve such issues in the future; I am aiming to land it soon.
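
In the meantime, a rough way to check is to dump the call_function targets in the exported graph. A sketch along these lines, where `exported` stands for the ExportedProgram returned by torch.export.export for your model:

   # Sketch: list the ATen targets in the exported graph and flag the
   # ops the partitioner currently tries to recompose.
   suspects = ("scaled_dot_product_attention", "upsample_bilinear2d")
   targets = sorted(
       {str(node.target) for node in exported.graph.nodes if node.op == "call_function"}
   )
   for t in targets:
       print(t)
   print("suspect ops present:", [s for s in suspects if any(s in t for t in targets)])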

TaylorYangX commented 1 month ago

@mcr229 thank you for your reply! Here is the link to the model: https://github.com/thuml/Anomaly-Transformer/tree/main/model. I don't think the model uses SDPA or upsample_bilinear2d.

TaylorYangX commented 1 month ago

"aten.copy_.default" seems to be a plain tensor copy. When I tried adding the copy operation to a simple model, XNNPACK was able to handle it.

import torch
from torch._export import capture_pre_autograd_graph
from torch.export import export
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge, EdgeCompileConfig

# Start with a toy PyTorch model whose forward copies a constant tensor
# (the inputs x and y are not actually used)
class Add(torch.nn.Module):
    def __init__(self):
        super(Add, self).__init__()

    def forward(self, x: torch.Tensor, y: torch.Tensor):
        a = torch.tensor([1, 2, 3])
        b = torch.empty_like(a)
        b.copy_(a)

        return b

sample_inputs = (torch.ones(1), torch.ones(1))

mobilenet_v2 = capture_pre_autograd_graph(Add(), sample_inputs)  # 2-stage export for quantization path

from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    get_symmetric_quantization_config,
    XNNPACKQuantizer,
)

def quantize(model, example_inputs):
    """This is the official recommended flow for quantization in pytorch 2.0 export"""
    print(f"Original model: {model}")
    quantizer = XNNPACKQuantizer()
    # if we set is_per_channel to True, we also need to add out_variant of quantize_per_channel/dequantize_per_channel
    operator_config = get_symmetric_quantization_config(is_per_channel=False)
    quantizer.set_global(operator_config)
    m = prepare_pt2e(model, quantizer)
    # calibration
    m(*example_inputs)
    m = convert_pt2e(m)
    print(f"Quantized model: {m}")
    # make sure we can export to flat buffer
    return m

quantized_mobilenetv2 = quantize(mobilenet_v2, sample_inputs)

# Continued from earlier...
edge = to_edge(export(quantized_mobilenetv2, sample_inputs), compile_config=EdgeCompileConfig(_check_ir_validity=False))

edge = edge.to_backend(XnnpackPartitioner())

exec_prog = edge.to_executorch()

with open("22test.pte", "wb") as file:
    exec_prog.write_to_file(file)

The result of running it suggests that XNNPACK supports this operation:

Original model: GraphModule()

def forward(self, x, y):
    arg0, arg1, = fx_pytree.tree_flatten_spec(([x, y], {}), self._in_spec)
    _tensor_constant0 = self._tensor_constant0
    lift_fresh_copy = torch.ops.aten.lift_fresh_copy.default(_tensor_constant0);  _tensor_constant0 = None
    detach_ = torch.ops.aten.detach_.default(lift_fresh_copy);  lift_fresh_copy = None
    empty_like = torch.ops.aten.empty_like.default(detach_, pin_memory = False)
    copy_ = torch.ops.aten.copy_.default(empty_like, detach_);  empty_like = detach_ = None
    return pytree.tree_unflatten([copy_], self._out_spec)

# To see more debug info, please use `graph_module.print_readable()`
Quantized model: GraphModule()

def forward(self, x, y):
    arg0, arg1, = fx_pytree.tree_flatten_spec(([x, y], {}), self._in_spec)
    _tensor_constant0 = self._tensor_constant0
    lift_fresh_copy = torch.ops.aten.lift_fresh_copy.default(_tensor_constant0);  _tensor_constant0 = None
    detach_ = torch.ops.aten.detach_.default(lift_fresh_copy);  lift_fresh_copy = None
    empty_like = torch.ops.aten.empty_like.default(detach_, pin_memory = False)
    copy_ = torch.ops.aten.copy_.default(empty_like, detach_);  empty_like = detach_ = None
    return pytree.tree_unflatten([copy_], self._out_spec)

# To see more debug info, please use `graph_module.print_readable()`
WARNING:executorch.backends.xnnpack.partition.xnnpack_partitioner:Nothing can be partitioned!

Is this something that needs to be changed in the model definition? @mcr229

mcr229 commented 1 month ago

Nothing in this model is being delegated: the XNNPACK delegate doesn't support any of the ops listed (XNNPACK generally targets compute-intensive operations), so nothing gets handed off.

The copy you are seeing in your original model is likely coming from a module, i.e. something like Linear, Bilinear, or SDPA being broken down into smaller operations which may contain copy. Internally, there may be an issue with recomposing these ops back into the original, which is why we error out with an unsupported op.
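
For contrast, here is a minimal sketch (hypothetical module and shapes, using the same public APIs as the snippets above) of something XNNPACK will happily pick up: a single Linear layer. After to_backend, the delegated subgraph shows up as an executorch_call_delegate call instead of the raw aten ops:

   import torch
   from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
   from executorch.exir import to_edge
   from torch.export import export

   class TinyLinear(torch.nn.Module):
       def __init__(self):
           super().__init__()
           self.linear = torch.nn.Linear(51, 51)

       def forward(self, x):
           return self.linear(x)

   # Export, lower to the edge dialect, then hand the compute-heavy op to XNNPACK.
   edge = to_edge(export(TinyLinear().eval(), (torch.randn(2, 100, 51),)))
   edge = edge.to_backend(XnnpackPartitioner())

   # Delegated subgraphs appear as executorch_call_delegate nodes.
   for node in edge.exported_program().graph.nodes:
       if node.op == "call_function":
           print(node.target)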