microsoft / Swin-Transformer

This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
https://arxiv.org/abs/2103.14030

Argument interpolation should be of type InterpolationMode instead of int. #122

Closed. zhaowei0315 closed this issue 2 years ago.

zhaowei0315 commented 3 years ago

Error

Argument interpolation should be of type InterpolationMode instead of int.

[2021-09-10 00:19:44 swin_small_patch4_window7_224](main.py 91): INFO number of params: 49606258
[2021-09-10 00:19:44 swin_small_patch4_window7_224](main.py 94): INFO number of GFLOPs: 8.746520064
All checkpoints founded in /home/zfe5szh/SwinTransformer_main/output/swin_small_patch4_window7_224/5_Traffic_Sign_Classification: []
[2021-09-10 00:19:44 swin_small_patch4_window7_224](main.py 118): INFO no checkpoint found in /home/zfe5szh/SwinTransformer_main/output/swin_small_patch4_window7_224/5_Traffic_Sign_Classification, ignoring auto resume
[2021-09-10 00:19:44 swin_small_patch4_window7_224](utils.py 20): INFO ==============> Resuming form /home/zfe5szh/SwinTransformer/checkpoints/swin_small_patch4_window7_224.pth....................
[2021-09-10 00:19:44 swin_small_patch4_window7_224](utils.py 27): INFO
[2021-09-10 00:19:46 swin_small_patch4_window7_224](main.py 268): INFO Test: [0/613] Time 2.285 (2.285) Loss 8.5132 (8.5132) Acc@1 0.000 (0.000) Acc@5 0.000 (0.000) Mem 1596MB
[2021-09-10 00:20:21 swin_small_patch4_window7_224](main.py 268): INFO Test: [100/613] Time 0.349 (0.370) Loss 8.3673 (8.3677) Acc@1 0.000 (0.000) Acc@5 0.000 (0.000) Mem 1696MB
[2021-09-10 00:20:57 swin_small_patch4_window7_224](main.py 268): INFO Test: [200/613] Time 0.346 (0.362) Loss 8.3129 (8.3787) Acc@1 0.000 (0.000) Acc@5 0.000 (0.016) Mem 1696MB
[2021-09-10 00:21:32 swin_small_patch4_window7_224](main.py 268): INFO Test: [300/613] Time 0.357 (0.360) Loss 8.2449 (8.3805) Acc@1 0.000 (0.000) Acc@5 0.000 (0.016) Mem 1696MB
[2021-09-10 00:22:08 swin_small_patch4_window7_224](main.py 268): INFO Test: [400/613] Time 0.354 (0.358) Loss 8.4108 (8.3801) Acc@1 0.000 (0.000) Acc@5 0.000 (0.016) Mem 1696MB
[2021-09-10 00:22:43 swin_small_patch4_window7_224](main.py 268): INFO Test: [500/613] Time 0.354 (0.357) Loss 8.3380 (8.3798) Acc@1 0.000 (0.000) Acc@5 0.000 (0.016) Mem 1696MB
[2021-09-10 00:23:19 swin_small_patch4_window7_224](main.py 268): INFO Test: [600/613] Time 0.347 (0.357) Loss 8.3023 (8.3819) Acc@1 0.000 (0.000) Acc@5 0.000 (0.021) Mem 1696MB
[2021-09-10 00:23:23 swin_small_patch4_window7_224](main.py 274): INFO * Acc@1 0.000 Acc@5 0.026
[2021-09-10 00:23:23 swin_small_patch4_window7_224](main.py 123): INFO Accuracy of the network on the 39209 test images: 0.0%
[2021-09-10 00:23:23 swin_small_patch4_window7_224](main.py 131): INFO Start training
[W pthreadpool-cpp.cc:90] Warning: Leaking Caffe2 thread-pool after fork. (function pthreadpool)
[the warning above is repeated 8 times]
/home/zfe5szh/.conda/envs/felix/lib/python3.7/site-packages/torchvision/transforms/functional.py:387: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum. "Argument interpolation should be of type InterpolationMode instead of int. "
[the warning above is repeated 8 times]
/home/zfe5szh/SwinTransformer_main/main.py:197: FutureWarning: Non-finite norm encountered in torch.nn.utils.clip_grad_norm_; continuing anyway. Note that the default behavior will change in a future release to error out if a non-finite total norm is encountered. At that point, setting error_if_nonfinite=false will be required to retain the old behavior. grad_norm = torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), config.TRAIN.CLIP_GRAD)
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 32768.0
[2021-09-10 00:23:26 swin_small_patch4_window7_224](main.py 221): INFO Train: [0/100][0/612] eta 0:35:40 lr 0.000000 time 3.4983 (3.4983) loss 8.0928 (8.0928) grad_norm inf (inf) mem 9364MB
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0
Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0
[2021-09-10 00:25:04 swin_small_patch4_window7_224](main.py 221): INFO Train: [0/100][100/612] eta 0:08:32 lr 0.000001 time 0.9778 (1.0000) loss 7.3623 (8.0173) grad_norm 21.0039 (nan) mem 9959MB

Xuguozi commented 3 years ago

I'm running into the same problem. Did you solve it?

zeliu98 commented 2 years ago

This has been fixed: https://github.com/microsoft/Swin-Transformer/pull/152#issue-1079386484
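For anyone still on an older commit, a compatibility shim along these lines silences the warning while keeping support for older torchvision releases; the helper name and mapping below are illustrative only, not necessarily what the linked pull request uses:

```python
import torchvision.transforms as T

try:
    # torchvision >= 0.9 exposes the InterpolationMode enum.
    from torchvision.transforms import InterpolationMode

    def str_to_interp_mode(name: str):
        # Map config strings to the enum expected by newer torchvision.
        return {
            'bicubic': InterpolationMode.BICUBIC,
            'bilinear': InterpolationMode.BILINEAR,
            'nearest': InterpolationMode.NEAREST,
        }[name]
except ImportError:
    from PIL import Image

    def str_to_interp_mode(name: str):
        # Older torchvision still takes PIL's integer constants.
        return {
            'bicubic': Image.BICUBIC,
            'bilinear': Image.BILINEAR,
            'nearest': Image.NEAREST,
        }[name]

# Usage: pass the mapped value instead of a raw int.
resize = T.Resize((224, 224), interpolation=str_to_interp_mode('bicubic'))
```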