intel / torch-xpu-ops


[To Evaluate] Gaps for test_torch.py #302

Closed. daisyden closed this issue 2 months ago.

daisyden commented 4 months ago

🐛 Describe the bug

  1. To enable the memory check in the test framework, we need counterparts of two CUDA functions, torch.cuda.memory_allocated() and torch.cuda.mem_get_info(); see the sketch after this list.

  2. To enable CudaSyncGuard in the test framework, we depend on a counterpart of torch.cuda.set_sync_debug_mode.

  3. To enable largeTensorTest in the test framework, we depend on a counterpart of torch.cuda.memory.mem_get_info.

  4. To run test_storage_meta_errors(), we need torch.TypedStorage.xpu support.

  5. To run test_dtypetensor_warnings, we need counterparts of torch.cuda.FloatTensor and torch.cuda.DoubleTensor in the xpu backend.

  6. Tensors (e.g. the result of float()) need an is_xpu attribute as the counterpart of is_cuda; test_broadcast() contains:

    small = torch.randn(*dims_small, device=device).float()
    if small.is_cuda and fn in ['map', 'map2']:
      # map and map2 are not implemented on CUDA tensors
      return
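
A minimal sketch of the memory check that items 1-3 would enable, assuming torch.xpu gains memory_allocated() and mem_get_info() with the same semantics as the torch.cuda versions (these names are assumptions, not existing APIs):

    import torch

    def assert_no_memory_leak(fn):
        # Assumed counterparts of torch.cuda.memory_allocated()/mem_get_info();
        # torch.xpu.synchronize() already exists.
        torch.xpu.synchronize()
        before = torch.xpu.memory_allocated()
        fn()
        torch.xpu.synchronize()
        leaked = torch.xpu.memory_allocated() - before
        assert leaked == 0, f"test leaked {leaked} bytes of XPU memory"
        free, total = torch.xpu.mem_get_info()  # also assumed; largeTensorTest needs it
        assert free <= total

An assumed torch.xpu.set_sync_debug_mode would likewise let CudaSyncGuard be generalized to XPU.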


  7. xpu is missing from torch.backends (see https://pytorch.org/docs/stable/backends.html), so we cannot write a counterpart of: @unittest.skipIf(torch.backends.cuda.is_built() or IS_SANDCASTLE, "CUDA is built, can't test CUDA not built error")
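
    For reference, the counterpart we would want to write, assuming a torch.backends.xpu.is_built() is added, is:

        @unittest.skipIf(torch.backends.xpu.is_built() or IS_SANDCASTLE, "XPU is built, can't test XPU not built error")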

  8. AttributeError: module 'torch.xpu' has no attribute 'FloatTensor'
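
    A sketch of the failing pattern (torch.xpu.FloatTensor here is an assumption, mirroring the legacy torch.cuda.FloatTensor constructor):

        t = torch.cuda.FloatTensor([0])  # exists on CUDA; emits a deprecation warning
        t = torch.xpu.FloatTensor([0])   # AttributeError today; counterpart is missing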


  9. Error-message related: AssertionError: RuntimeError not raised by ...

  10. Floating point exception in index_add (test_dim_function_empty_xpu).


  11. AssertionError: 'XPU.used\t\t true' not found in '[TORCH_VITAL] Dataloader.enabled\t\t True\n[TORCH_VITAL] Dataloader.basic_unit_test\t\t TEST_VALUE_STRING\n[TORCH_VITAL] CUDA.used\t\t False\n'
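
    Roughly, the check is the following (assuming the test asserts on torch.read_vitals() output and that an XPU vital gets registered; today only CUDA.used is reported):

        # expected once XPU vitals are wired up:
        self.assertIn('XPU.used\t\t true', torch.read_vitals())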

The following issues are in the TestTorchDeviceType class:

Error #0 (11 in total): RuntimeError: expected scalar type Long but found Int

Error #5 (2 in total): AssertionError: Scalars are not equal!

Error #7 (1 in total): TypeError: map2_ is only implemented on CPU tensors

Error #8 (1 in total): TypeError: map_ is only implemented on CPU tensors

Error #10 (1 in total): AssertionError: True is not false

Error #11 (1 in total): AssertionError: tensor(False, device='xpu:0') is not true

Error #12 (2 in total): AttributeError: module 'torch.xpu' has no attribute 'amp'

Error #13 (3 in total): NotImplementedError: Could not run 'aten::_copy_from_and_resize' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_copy_from_and_resize' is only available for these backends: [XPU, Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastXPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

Error #14 (2 in total): NotImplementedError: Could not run 'aten::_sparse_coo_tensor_with_dims_and_tensors' with arguments from the 'SparseXPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_sparse_coo_tensor_with_dims_and_tensors' is only available for these backends: [XPU, Meta, SparseCPU, SparseMeta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastXPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].


Error #15 (2 in total): AssertionError: Tensor-likes are not close!

Error #16 (1 in total): RuntimeError: unsupported operation: more than one element of the written-to tensor refers to a single memory location. Please clone() the tensor before performing the operation.

Error #17 (2 in total): AssertionError: False is not true

Error #18 (2 in total): AssertionError: Torch not compiled with CUDA enabled

Error #19 (1 in total): RuntimeError: _sharefd: only available on CPU

Error #20 (3 in total): RuntimeError: Expected a 'cpu' device type for generator but found 'xpu'
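
A hypothetical repro for Error #20 (the exact failing ops vary; any op that accepts a device-local generator can hit this path):

    import torch
    g = torch.Generator(device="xpu")
    # RuntimeError: Expected a 'cpu' device type for generator but found 'xpu'
    torch.randperm(10, device="xpu", generator=g)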


Error #22 (1 in total): AssertionError: "Expected all tensors to be on the same device" does not match "multinomial expects Long tensor out, got: Float"

Error #23 (26 in total): AssertionError: RuntimeError not raised : expected a non-deterministic error, but it was not raised
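
Error #23 comes from the non-deterministic alert checks. Sketched, the test pattern is the following (the op is illustrative; kthvalue is documented to raise in deterministic mode on CUDA):

    import torch
    torch.use_deterministic_algorithms(True)
    x = torch.randn(4, 4, device="xpu")
    # Ops without a deterministic implementation should raise a RuntimeError
    # in this mode; on XPU the error is currently not raised.
    torch.kthvalue(x, 2)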


Error #24 (1 in total): AttributeError: 'TestTorchDeviceTypeXPU' object has no attribute 'check_device_nondeterministic_alert'


Error #25 (2 in total): RuntimeError: "max_unpool2d" not implemented for 'Half'


Error #26 (1 in total): RuntimeError: "max_unpool3d" not implemented for 'Half'

Error #27 (1 in total): AssertionError: RuntimeError not raised


Error #29 (1 in total): AssertionError: "unsupported operation" does not match ""lshift_cpu" not implemented for 'Float'"

Missing attributes:

Error #30 (2 in total): AttributeError: module 'torch.xpu' has no attribute 'BoolStorage'
Error #31 (2 in total): AttributeError: module 'torch.xpu' has no attribute 'ComplexDoubleStorage'
Error #32 (2 in total): AttributeError: module 'torch.xpu' has no attribute 'ComplexFloatStorage'
Error #33 (2 in total): AttributeError: module 'torch.xpu' has no attribute 'DoubleStorage'
Error #34 (2 in total): AttributeError: module 'torch.xpu' has no attribute 'ShortStorage'
Error #35 (2 in total): AttributeError: module 'torch.xpu' has no attribute 'IntStorage'
Error #36 (2 in total): AttributeError: module 'torch.xpu' has no attribute 'LongStorage'
Error #37 (2 in total): AttributeError: module 'torch.xpu' has no attribute 'CharStorage'
Error #38 (1 in total): AttributeError: module 'torch.xpu' has no attribute 'BFloat16Storage'
Error #39 (1 in total): AttributeError: module 'torch.xpu' has no attribute 'HalfStorage'


----- End of Summary -----

Versions

Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35

Python version: 3.12.2 | packaged by Anaconda, Inc. | (main, Feb 27 2024, 17:35:02) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-106-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] No relevant packages
[conda] No relevant packages

daisyden commented 3 months ago

Regarding item 10: index_add does not handle index.numel() == 0; further investigation is in progress.
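
A minimal repro sketch, assuming the crash in test_dim_function_empty_xpu comes from an empty index tensor:

    import torch
    x = torch.randn(5, 3, device="xpu")
    idx = torch.empty(0, dtype=torch.long, device="xpu")
    src = torch.empty(0, 3, device="xpu")
    x.index_add_(0, idx, src)  # floating point exception on XPU; a no-op on CPU/CUDA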

daisyden commented 3 months ago

test_tensor_set_errors now passes.

daisyden commented 3 months ago

test_torch_xpu.py::TestTorch::test_index_add fails with RuntimeError: expected scalar type Long but found Int when the index dtype is int32, because the getTensorInfo template assumes int64.

The TensorInfo construction triggers the error here: https://github.com/intel/torch-xpu-ops/blob/main/src/aten/sycl/Indexing.cpp#L371 and https://github.com/intel/torch-xpu-ops/blob/main/src/comm/TensorInfo.h#L193
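
A repro sketch for the dtype gap (index_add accepts an int32 index on CPU and CUDA, so the test exercises it):

    import torch
    dest = torch.randn(5, 3, device="xpu")
    src = torch.randn(2, 3, device="xpu")
    idx = torch.tensor([0, 2], dtype=torch.int32, device="xpu")
    # getTensorInfo<int64_t> rejects the int32 index:
    dest.index_add_(0, idx, src)  # RuntimeError: expected scalar type Long but found Int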