pytorch / pytorch


RuntimeError: "grid_sampler_2d_cuda" not implemented for 'BFloat16' #112575

Open andife opened 11 months ago

andife commented 11 months ago

🐛 Describe the bug

Executing a pytorch-lightning model results in the following error:

" File "/local_data/user1/miniforge3/envs/lightning_py310/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 294, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/local_data/user1/miniforge3/envs/lightning_py310/lib/python3.10/site-packages/lightning/pytorch/strategies/strategy.py", line 380, in training_step
    return self.model.training_step(*args, **kwargs)
  File "/local_data/user1/SPACE/Spherinator/models/rotational_variational_autoencoder_power.py", line 145, in training_step
    rotate = functional.rotate(images, 360.0 / self.rotations * i, expand=False)
  File "/local_data/user1/miniforge3/envs/lightning_py310/lib/python3.10/site-packages/torchvision/transforms/functional.py", line 1131, in rotate
    return F_t.rotate(img, matrix=matrix, interpolation=interpolation.value, expand=expand, fill=fill)
  File "/local_data/user1/miniforge3/envs/lightning_py310/lib/python3.10/site-packages/torchvision/transforms/_functional_tensor.py", line 667, in rotate
    return _apply_grid_transform(img, grid, interpolation, fill=fill)
  File "/local_data/user1/miniforge3/envs/lightning_py310/lib/python3.10/site-packages/torchvision/transforms/_functional_tensor.py", line 558, in _apply_grid_transform
    img = grid_sample(img, grid, mode=mode, padding_mode="zeros", align_corners=False)
  File "/local_data/user1/miniforge3/envs/lightning_py310/lib/python3.10/site-packages/torch/nn/functional.py", line 4304, in grid_sample
    return torch.grid_sampler(input, grid, mode_enum, padding_mode_enum, align_corners)
RuntimeError: "grid_sampler_2d_cuda" not implemented for 'BFloat16'
STAGE:2023-11-01 10:19:28 832709:832709 ActivityProfilerController.cpp:312] Completed Stage: Warm Up
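
For reference, the failure should be reproducible outside Lightning with a minimal sketch along these lines (untested as written; shapes chosen arbitrarily):

    import torch
    import torch.nn.functional as F

    # torchvision's rotate() ultimately calls grid_sample(); with a bfloat16
    # CUDA tensor this hits the missing "grid_sampler_2d_cuda" kernel.
    img = torch.rand(1, 3, 32, 32, device="cuda", dtype=torch.bfloat16)
    # Sampling grid in [-1, 1], same dtype and device as the input.
    grid = torch.rand(1, 32, 32, 2, device="cuda", dtype=torch.bfloat16) * 2 - 1
    F.grid_sample(img, grid, mode="bilinear", padding_mode="zeros", align_corners=False)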

Is there a way to use 16-bit precision without running into this error? Could this part of the computation be excluded from the reduced-precision autocast (see the sketch below the configuration)? The configuration is:

trainer:
  max_epochs: 51
  accelerator: gpu
  devices: [3]
  precision: "bf16-mixed"
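
Would something along these lines be the right way to keep that step out of bf16? (A sketch, untested; the helper name rotate_bf16_safe is just made up here.)

    import torch
    from torchvision.transforms import functional as TF

    def rotate_bf16_safe(images: torch.Tensor, angle: float) -> torch.Tensor:
        # Disable autocast locally and upcast, so grid_sample runs its
        # float32 CUDA kernel instead of the missing bfloat16 one.
        with torch.autocast(device_type="cuda", enabled=False):
            out = TF.rotate(images.float(), angle, expand=False)
        # Cast back so the rest of the bf16-mixed step is unaffected.
        return out.to(images.dtype)

    images = torch.rand(4, 3, 64, 64, device="cuda", dtype=torch.bfloat16)
    rotated = rotate_bf16_safe(images, 10.0)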

Thank you

Versions

Collecting environment information...
PyTorch version: 2.2.0.dev20231006+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.5
Libc version: glibc-2.35

Python version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.2.0-35-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.128
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A40
GPU 1: NVIDIA A40
GPU 2: NVIDIA A40
GPU 3: NVIDIA A40

Nvidia driver version: 535.86.10
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 112
On-line CPU(s) list: 0-111
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
Stepping: 6
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 2.6 MiB (56 instances)
L1i cache: 1.8 MiB (56 instances)
L2 cache: 70 MiB (56 instances)
L3 cache: 84 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79,81,83,85,87,89,91,93,95,97,99,101,103,105,107,109,111
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] mypy==1.5.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] onnx==1.14.1
[pip3] onnxscript==0.1.0.dev20231006
[pip3] pytorch-lightning==2.0.9
[pip3] pytorch-triton==2.1.0+6e4932cda8
[pip3] torch==2.2.0.dev20231006+cu121
[pip3] torchaudio==2.2.0.dev20231006+cpu
[pip3] torchmetrics==1.2.0
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.17.0.dev20231006+cpu
[pip3] triton==2.0.0
[conda] numpy 1.26.0 pypi_0 pypi
[conda] pytorch-lightning 2.0.9 pypi_0 pypi
[conda] pytorch-triton 2.1.0+6e4932cda8 pypi_0 pypi
[conda] torch 2.2.0.dev20231006+cu121 pypi_0 pypi
[conda] torchaudio 2.2.0.dev20231006+cpu pypi_0 pypi
[conda] torchmetrics 1.2.0 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.17.0.dev20231006+cpu pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi

cc @ptrblck

patrick-tssn commented 6 months ago

I've encountered a recurring error that I suspect is related to the missing bfloat16 support in grid_sample, similar to what is described in this issue. Upgrading PyTorch to version 2.2.1 might resolve the problem, but that version appears to be incompatible with flash_attention_2, which makes it hard for me to test the fix myself. @andife, have you tried upgrading PyTorch? And do you know whether there are plans to add bfloat16 support to grid_sample? Thanks a lot.
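
A quick probe like the following (my own sketch, not verified) should tell whether a given build ships the bfloat16 kernel:

    import torch
    import torch.nn.functional as F

    # Probe whether grid_sample accepts bfloat16 on the installed build.
    x = torch.rand(1, 1, 8, 8, device="cuda", dtype=torch.bfloat16)
    grid = torch.rand(1, 8, 8, 2, device="cuda", dtype=torch.bfloat16) * 2 - 1
    try:
        F.grid_sample(x, grid, align_corners=False)
        print("grid_sample supports bfloat16 on torch", torch.__version__)
    except RuntimeError as e:
        print("still unsupported:", e)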

andife commented 6 months ago

@patrick-tssn Not yet. I used the latest dev version available at the time of the evaluation, but haven't looked at it again since posting. I hope to do so shortly.