
[Inductor] Different results with Conv2d and BN2d not in `eval mode` #141317

Open shaoyuyoung opened 2 days ago

shaoyuyoung commented 2 days ago

🐛 Describe the bug

If I use Inductor to compile a model that uses Conv2d and BN2d without switching to `eval` mode, the results are inconsistent with eager mode. The problem seems to be with BN2d.

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=2, dilation=2, groups=1)
        self.bn = nn.BatchNorm2d(16)
        self.global_avg_pool = nn.AdaptiveAvgPool2d((1, 1))

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.global_avg_pool(x)
        return x

m = Model()

x = torch.randn(1, 3, 128, 128)  # the error becomes more obvious as the height and width increase

output = m(x)
compiled_model = torch.compile(m)

c_output = compiled_model(x)

print(output)
print(c_output)
res = torch.allclose(output, c_output)
print(res)
The err log (eager output, compiled output, allclose result):

```
tensor([[[[-8.6147e-09]], [[ 9.8953e-10]], [[-3.4925e-10]], [[ 4.0745e-09]],
         [[-3.9581e-09]], [[-1.3970e-09]], [[ 1.2107e-08]], [[ 4.6566e-10]],
         [[-2.2119e-09]], [[ 5.5879e-09]], [[ 9.5170e-09]], [[-6.9849e-10]],
         [[ 1.1525e-08]], [[-1.6298e-09]], [[-2.1537e-09]], [[ 5.1223e-09]]]],
       grad_fn=<MeanBackward1>)
```

```
tensor([[[[ 2.1268e-08]], [[ 5.0095e-08]], [[ 4.3936e-08]], [[-1.0681e-07]],
         [[-5.3694e-07]], [[-1.0622e-07]], [[-1.0726e-06]], [[-9.5417e-08]],
         [[ 4.2513e-07]], [[-8.6298e-08]], [[ 2.0579e-07]], [[ 2.8708e-08]],
         [[ 1.4902e-07]], [[-5.8193e-08]], [[-1.7331e-07]], [[-2.9289e-08]]]],
       grad_fn=<CompiledFunctionBackward>)
```

```
False
```
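To make the comment in the snippet above concrete (the error growing with height and width), here is a small sketch that is not part of the original report; `max_abs_gap` is a hypothetical helper that reuses the `Model` class defined above and reports the largest absolute eager-vs-compiled difference for a few input sizes:

```python
import torch

# Hypothetical helper (not from the original report): rebuild and recompile
# the Model defined above for each size, so every measurement starts from
# freshly initialized BatchNorm statistics, then report the largest absolute
# difference between the eager and Inductor outputs.
def max_abs_gap(hw: int) -> float:
    torch.manual_seed(0)
    m = Model()
    cm = torch.compile(m)
    x = torch.randn(1, 3, hw, hw)
    return (m(x) - cm(x)).abs().max().item()

for hw in (16, 64, 128, 256):
    print(hw, max_abs_gap(hw))
```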

If there is a problem with my usage, feel free to let me know :)

Versions

Collecting environment information...
PyTorch version: 2.6.0.dev20241115+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 16.0.1
CMake version: version 3.26.0
Libc version: glibc-2.31

Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:27:36) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-200-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB

Nvidia driver version: 560.35.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 20
On-line CPU(s) list: 0-19
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2499.994
BogoMIPS: 4999.98
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 640 KiB
L1i cache: 640 KiB
L2 cache: 80 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-19
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch topoext cpuid_fault invpcid_single pti ssbd ibrs ibpb fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.13.1
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241115+cu124
[pip3] torchaudio==2.5.0.dev20241115+cu124
[pip3] torchvision==0.20.0.dev20241115+cu124
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.13.1 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0.dev20241115+cu124 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241115+cu124 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241115+cu124 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
ZailiWang commented 2 days ago

I can reproduce the issue, and it is NOT device-specific: it can be reproduced on both CUDA and CPU devices.
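For reference, here is a minimal sketch of the GPU variant of the check (not taken from the original comments); it assumes a CUDA device is available and that the `Model` class from the report above is in scope, while the CPU case is simply the original script unchanged:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

m_dev = Model().to(device)                  # Model as defined in the report above
x_dev = torch.randn(1, 3, 128, 128, device=device)

out = m_dev(x_dev)                          # eager
c_out = torch.compile(m_dev)(x_dev)         # Inductor
print(device, torch.allclose(out, c_out))
```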

shaoyuyoung commented 1 day ago

Here is the minified repro (note that the output may be different each time):

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.bn = nn.BatchNorm2d(1)
        self.global_avg_pool = nn.AdaptiveAvgPool2d((1, 1))

    def forward(self, x):
        x = self.bn(x)
        x = self.global_avg_pool(x)
        return x

m = Model()

x = torch.randn(1, 1, 2, 2)

output = m(x)
compiled_model = torch.compile(m)

c_output = compiled_model(x)

print(output)
print(c_output)
res = torch.allclose(output, c_output)
print(res)

"""
tensor([[[[-2.2352e-08]]]], grad_fn=<MeanBackward1>)
tensor([[[[1.8626e-08]]]], grad_fn=<CompiledFunctionBackward>)
False
"""