pytorch / vision

Datasets, Transforms and Models specific to Computer Vision
https://pytorch.org/vision
BSD 3-Clause "New" or "Revised" License

"How to write your own v2 transforms" example does not work #8515

Open TonyCongqianWang opened 2 weeks ago

TonyCongqianWang commented 2 weeks ago

πŸ› Describe the bug

I copy-pasted the custom transform from your tutorial page and inserted it into the transform pipeline in your reference/detection/presets.py script. When I try to run it, I get the following error.

```
File "site-packages/torchvision/transforms/v2/_container.py", line 51, in forward
    outputs = transform(*inputs)
File "site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "site-packages/torch/nn/modules/module.py", line 1538, in _call_impl
    if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
File "site-packages/torch/nn/modules/module.py", line 1709, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'MyCustomTransform' object has no attribute '_backward_hooks'
```

Versions

```
Collecting environment information...
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31

Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:12:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-113-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB

Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.1.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn.so.8.5.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.5.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.5.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.5.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.5.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.5.0
/usr/local/cuda-11.6/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7702P 64-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 1540.122
CPU max MHz: 2183,5930
CPU min MHz: 1500,0000
BogoMIPS: 3992.22
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es

Versions of relevant libraries:
[pip3] efficientnet_pytorch==0.7.1
[pip3] numpy==1.26.4
[pip3] onnx==1.16.1
[pip3] onnxruntime==1.18.1
[pip3] torch==2.3.1
[pip3] torchstat==0.0.7
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.18.1
[conda] efficientnet-pytorch 0.7.1 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.3.1 pypi_0 pypi
[conda] torchstat 0.0.7 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.18.1 pypi_0 pypi
```

NicolasHug commented 1 week ago

Hi @TonyCongqianWang ,

> I copy pasted the custom transform from your tutorial page and inserted it into the transform pipeline in your reference/detection/presets.py script.

Can you share a minimal reproducing example? It is otherwise impossible to help.

TonyCongqianWang commented 1 week ago

It is a bit difficult for me to share a minimal reproducing example, since I lack experience with torch and have no idea what causes the error. I added

```python
class MyCustomTransform(torch.nn.Module):
    def forward(self, img, bboxes, label):  # we assume inputs are always structured like this
        print(
            f"I'm transforming an image of shape {img.shape} "
            f"with bboxes = {bboxes}\n{label = }"
        )
        # Do some transformations. Here, we're just passing through the input
        return img, bboxes, label
```

to the file and added the transform to the list of transforms. When I run the recipe for SSD (using v2 transforms), I get the error mentioned before.
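For context (not necessarily the cause in this report): this exact AttributeError is what `nn.Module.__getattr__` raises when the module's internal hook dictionaries were never initialized, most commonly because a subclass defines `__init__` without calling `super().__init__()`, or because the instance was constructed in some unusual way. The mechanism can be sketched in plain Python; the `Module` class below is a simplified, hypothetical stand-in, not the real `torch.nn.Module`:

```python
class Module:
    """Simplified stand-in for torch.nn.Module: __init__ creates the hook
    dict that __call__ later reads, and __getattr__ raises the same
    AttributeError that nn.Module raises for unknown attributes."""

    def __init__(self):
        self._backward_hooks = {}

    def __getattr__(self, name):
        # Only reached when normal attribute lookup fails, i.e. when
        # __init__ above never ran for this instance.
        raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")

    def __call__(self, *args, **kwargs):
        # Same kind of hook check that nn.Module._call_impl performs.
        if not self._backward_hooks:
            return self.forward(*args, **kwargs)


class FineTransform(Module):
    # No custom __init__, so Module.__init__ runs and sets _backward_hooks.
    def forward(self, img):
        return img


class BrokenTransform(Module):
    def __init__(self):  # skips super().__init__(): _backward_hooks never set
        pass

    def forward(self, img):
        return img


assert FineTransform()("img") == "img"
try:
    BrokenTransform()("img")
except AttributeError as e:
    print(e)  # 'BrokenTransform' object has no attribute '_backward_hooks'
```

Checking that every `__init__` in the pipeline (including any base classes the transform inherits from) chains up to `nn.Module.__init__` is a cheap first thing to rule out.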