csarofeen / pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration
http://pytorch.org

Segmentation fault when compiling the following FusionDefinition #2582

Open ftxj opened 1 year ago

ftxj commented 1 year ago

🐛 Describe the bug

This FusionDefinition was produced when I ran the DGL/GAT model. Note that I am using the master branch of this repo. The reproducer is below; a sketch of how it might be executed and an eager-mode reference of the same computation follow the definition.

        inputs = [
            torch.randn(5, 5, device='cuda').unsqueeze(1).expand((5, 3, 5)),
            torch.randn(3, 5, device='cuda').unsqueeze(0).expand((5, 3, 5)),
            torch.randn(5, 5, device='cuda').unsqueeze(1).expand((5, 3, 5)),
            torch.randn(5, 5, device='cuda').unsqueeze(1).expand((5, 3, 5)),
            torch.randn(3, 5, device='cuda').unsqueeze(0).expand((5, 3, 5)),
            torch.randn(5, 15, device='cuda')
        ]

        def fusion_func(fd : FusionDefinition) -> None :
            T0 = fd.define_tensor(symbolic_sizes=[-1, -1, -1], contiguous=[True, None, True], dtype=DataType.Float, is_cpu=False)
            T1 = fd.define_tensor(symbolic_sizes=[-1, -1, -1], contiguous=[None, True, True], dtype=DataType.Float, is_cpu=False)
            T2 = fd.define_tensor(symbolic_sizes=[-1, -1, -1], contiguous=[True, True, True], dtype=DataType.Float, is_cpu=False)
            T3 = fd.define_tensor(symbolic_sizes=[-1, -1, -1], contiguous=[True, None, True], dtype=DataType.Float, is_cpu=False)
            T4 = fd.define_tensor(symbolic_sizes=[-1, -1, -1], contiguous=[None, True, True], dtype=DataType.Float, is_cpu=False)
            T5 = fd.define_tensor(symbolic_sizes=[-1, -1], contiguous=[True, True], dtype=DataType.Float, is_cpu=False)

            T6 = fd.ops.mul(T0, T1)
            T8 = fd.ops.mul(T3, T4)
            T9 = fd.ops.mul(T3, T2)

            T11 = fd.ops.add(T6, T8)

            T12 = fd.ops.sum(T9, axes=[0], keepdim=False, dtype=DataType.Null)
            T13 = fd.ops.reshape(T11, original_shape=[5, 3, 5], new_shape=[5, 15])

            T14 = fd.ops.add(T5, T13)

            fd.add_output(T12)
            fd.add_output(T14)
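
To actually trigger the crash, the definition above has to be built and executed. A minimal sketch is below; it assumes the nvFuser python frontend bundled with this branch (the FusionDefinition and DataType names used above), and the import path and execute call shown are assumptions that may need adjusting.

    # Sketch only: the import path and execution API are assumptions about
    # this branch's nvFuser python frontend and may need adjusting.
    import torch
    from torch._C._nvfuser import FusionDefinition, DataType  # assumed path

    with FusionDefinition() as fd:
        fusion_func(fd)  # record the definition shown above

    # Compiling/executing the fusion with the expanded inputs is where the
    # segmentation fault is observed.
    outputs = fd.execute(inputs)

For a cross-check of what the fusion computes, the same math can be written in eager PyTorch. This is only a reference, assuming inputs[0]..inputs[5] correspond to T0..T5 in definition order:

    # Eager-mode reference of the fused computation (assumes inputs[i] maps to Ti).
    t6 = inputs[0] * inputs[1]
    t8 = inputs[3] * inputs[4]
    t9 = inputs[3] * inputs[2]
    t11 = t6 + t8
    t12 = t9.sum(dim=0)        # matches fd.ops.sum over axis 0
    t13 = t11.reshape(5, 15)   # matches fd.ops.reshape to [5, 15]
    t14 = inputs[5] + t13
    # expected fusion outputs: (t12, t14)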

Versions

Collecting environment information...

PyTorch version: 1.12.0a0+2c916ef
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04) 9.4.0
Clang version: Could not collect
CMake version: version 3.22.3
Libc version: glibc-2.31

Python version: 3.8.12 | packaged by conda-forge | (default, Jan 30 2022, 23:42:07) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-4.15.0-201-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.6.112
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080 Ti
Nvidia driver version: 525.85.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.3.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.3.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD Ryzen Threadripper 3960X 24-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 2192.588
CPU max MHz: 3800.0000
CPU min MHz: 2200.0000
BogoMIPS: 7585.58
Virtualization: AMD-V
L1d cache: 768 KiB
L1i cache: 768 KiB
L2 cache: 12 MiB
L3 cache: 128 MiB
NUMA node0 CPU(s): 0-47
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate sme ssbd ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca

Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] pytorch-quantization==2.1.2
[pip3] torch==1.12.0a0+2c916ef
[pip3] torch-tensorrt==1.1.0a0
[pip3] torchtext==0.12.0a0
[pip3] torchvision==0.13.0a0
[conda] magma-cuda110 2.5.2 5 local
[conda] mkl 2019.5 281 conda-forge
[conda] mkl-include 2019.5 281 conda-forge
[conda] numpy 1.22.3 py38h05e7239_0 conda-forge
[conda] pytorch-quantization 2.1.2 pypi_0 pypi
[conda] torch 1.12.0a0+2c916ef pypi_0 pypi
[conda] torch-tensorrt 1.1.0a0 pypi_0 pypi
[conda] torchtext 0.12.0a0 pypi_0 pypi
[conda] torchvision 0.13.0a0 pypi_0 pypi

naoyam commented 1 year ago

Can you please post this issue to the new repo? (and close this one here)

csarofeen commented 1 year ago

https://github.com/nvidia/fuser