pytorch / vision

Datasets, Transforms and Models specific to Computer Vision
https://pytorch.org/vision
BSD 3-Clause "New" or "Revised" License

ViT models are not traceable in eval mode #7517

Closed · hassonofer closed this issue 1 year ago

hassonofer commented 1 year ago

🐛 Describe the bug

Attempting to jit-trace the ViT model in eval mode raises an exception: TracingCheckError: Tracing failed sanity checks! ERROR: Graphs differed across invocations!

Minimal code to reproduce

import torch
import torchvision

# Build a ViT-B/32, switch it to eval mode, then attempt to trace it
net = torchvision.models.vit_b_32()
net.eval()
torch.jit.trace(net, torch.rand((1, 3, 224, 224)))  # raises TracingCheckError

The same snippet passes tracing when the model is left in train mode, as the comparison sketch below shows.
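A minimal comparison sketch (the only difference from the snippet above is that the model stays in its default train mode):

import torch
import torchvision

# Same trace call, but with the model left in train mode: this completes
net = torchvision.models.vit_b_32()
torch.jit.trace(net, torch.rand((1, 3, 224, 224)))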

This affects the ability to use add_graph of TensorBoard's SummaryWriter, as sketched below.
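For reference, a minimal sketch of the affected TensorBoard usage (assuming the tensorboard package is installed); add_graph traces the model internally, so it fails the same way in eval mode:

import torch
import torchvision
from torch.utils.tensorboard import SummaryWriter

net = torchvision.models.vit_b_32()
net.eval()

# add_graph traces the model internally, so it raises the same TracingCheckError here
writer = SummaryWriter()
writer.add_graph(net, torch.rand((1, 3, 224, 224)))
writer.close()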

Versions

Collecting environment information...
PyTorch version: 2.0.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A

OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.26.1
Libc version: glibc-2.31

Python version: 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] (64-bit runtime)
Python platform: Linux-6.1.0-0.deb11.5-amd64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A5000
GPU 1: NVIDIA RTX A5000

Nvidia driver version: 530.30.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 97
Model name: AMD Ryzen 9 7950X3D 16-Core Processor
Stepping: 2
Frequency boost: enabled
CPU MHz: 4682.728
CPU max MHz: 5758.5928
CPU min MHz: 3000.0000
BogoMIPS: 8399.56
Virtualization: AMD-V
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 16 MiB
L3 cache: 192 MiB
NUMA node0 CPU(s): 0-31
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d

Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] flake8-pep585==0.1.7
[pip3] mypy==1.2.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] torch==2.0.0
[pip3] torch-model-archiver==0.7.1
[pip3] torch-workflow-archiver==0.2.7
[pip3] torchaudio==2.0.1
[pip3] torchinfo==1.7.2
[pip3] torchmetrics==0.11.4
[pip3] torchserve==0.7.1
[pip3] torchvision==0.15.1
[pip3] triton==2.0.0
[conda] Could not collect

vfdev-5 commented 1 year ago

Complete error message: https://gist.github.com/vfdev-5/d988dde79ebd89a621fdab73711165ee

vfdev-5 commented 1 year ago

The following works:

import torch
import torchvision

net = torchvision.models.vit_b_32()
net.eval()
# Tracing inside torch.no_grad() passes the tracer's sanity checks
with torch.no_grad():
    tnet = torch.jit.trace(net, torch.rand((1, 3, 224, 224)))

I do not know exactly why the snippet in the description errors. Perhaps, because gradient tracking is enabled for the model parameters, the trace takes different code paths across invocations...
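One way to check that hypothesis would be an untested sketch along these lines: freeze the parameters instead of wrapping the trace in torch.no_grad(); if the eval-mode trace then also passes, gradient-dependent code paths are the likely cause.

import torch
import torchvision

# Untested sketch: disable gradient tracking on the parameters themselves
# instead of wrapping the trace in torch.no_grad()
net = torchvision.models.vit_b_32()
net.eval()
for p in net.parameters():
    p.requires_grad_(False)

# If this succeeds, the original failure is likely tied to gradient-dependent code paths
traced = torch.jit.trace(net, torch.rand((1, 3, 224, 224)))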

hassonofer commented 1 year ago

Ohh... that is a much nicer workaround for me, great catch, thanks!

Quite harmless, to be honest:

with torch.no_grad():
    summary_writer.add_graph(net_without_ddp, torch.rand(sample_shape, device=device))

NicolasHug commented 1 year ago

Sounds like this can be closed (thanks @vfdev-5 for looking into it!). Feel free to re-open if needed.