czczup / ViT-Adapter

[ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions
https://arxiv.org/abs/2205.08534
Apache License 2.0

TypeError #96

Open Khinchine opened 1 year ago

Khinchine commented 1 year ago

True check_forward_equal_with_pytorch_float: max_abs_err 4.66e-10 max_rel_err 1.13e-07
Traceback (most recent call last):
  File "test.py", line 109, in <module>
    check_gradient_numerical(channels, True, True, True)
  File "test.py", line 96, in check_gradient_numerical
    gradok = gradcheck(
  File "/home/ccq/anaconda3/envs/ViT-Adapter-main/lib/python3.8/site-packages/torch/autograd/gradcheck.py", line 1245, in gradcheck
    return _gradcheck_helper(*args)
  File "/home/ccq/anaconda3/envs/ViT-Adapter-main/lib/python3.8/site-packages/torch/autograd/gradcheck.py", line 1258, in _gradcheck_helper
    _gradcheck_real_imag(gradcheck_fn, func, func_out, tupled_inputs, outputs, eps,
  File "/home/ccq/anaconda3/envs/ViT-Adapter-main/lib/python3.8/site-packages/torch/autograd/gradcheck.py", line 930, in _gradcheck_real_imag
    gradcheck_fn(func, func_out, tupled_inputs, outputs, eps,
  File "/home/ccq/anaconda3/envs/ViT-Adapter-main/lib/python3.8/site-packages/torch/autograd/gradcheck.py", line 974, in _slow_gradcheck
    analytical = _check_analytical_jacobian_attributes(tupled_inputs, o, nondet_tol, check_grad_dtypes)
  File "/home/ccq/anaconda3/envs/ViT-Adapter-main/lib/python3.8/site-packages/torch/autograd/gradcheck.py", line 516, in _check_analytical_jacobian_attributes
    vjps1 = _compute_analytical_jacobian_rows(vjp_fn, output.clone())
  File "/home/ccq/anaconda3/envs/ViT-Adapter-main/lib/python3.8/site-packages/torch/autograd/gradcheck.py", line 608, in _compute_analytical_jacobian_rows
    grad_inputs = vjp_fn(grad_out_base)
  File "/home/ccq/anaconda3/envs/ViT-Adapter-main/lib/python3.8/site-packages/torch/autograd/gradcheck.py", line 509, in vjp_fn
    return torch.autograd.grad(output, diff_input_list, grad_output,
  File "/home/ccq/anaconda3/envs/ViT-Adapter-main/lib/python3.8/site-packages/torch/autograd/__init__.py", line 226, in grad
    return Variable._execution_engine.run_backward(
  File "/home/ccq/anaconda3/envs/ViT-Adapter-main/lib/python3.8/site-packages/torch/autograd/function.py", line 87, in apply
    return self._forward_cls.backward(self, *args)  # type: ignore[attr-defined]
  File "/home/ccq/anaconda3/envs/ViT-Adapter-main/lib/python3.8/site-packages/torch/autograd/function.py", line 204, in wrapper
    outputs = fn(ctx, *args)
  File "/home/ccq/anaconda3/envs/ViT-Adapter-main/lib/python3.8/site-packages/torch/cuda/amp/autocast_mode.py", line 236, in decorate_bwd
    return bwd(*args, **kwargs)
  File "/home/ccq/Data/ViT-Adapter-main/detection/ops/functions/ms_deform_attn_func.py", line 43, in backward
    MSDA.ms_deform_attn_backward(
TypeError: ms_deform_attn_backward(): incompatible function arguments. The following argument types are supported:
    (value: at::Tensor, value_spatial_shapes: at::Tensor, value_level_start_index: at::Tensor, sampling_locations: at::Tensor, attention_weights: at::Tensor, grad_output: at::Tensor, grad_value: at::Tensor, grad_sampling_loc: at::Tensor, grad_attn_weight: at::Tensor, im2col_step: int) -> None

Invoked with: tensor([[[[2.8306e-03, 9.4753e-03, 9.3084e-03, ..., 6.8304e-03, 8.3203e-03, 1.7387e-03], [8.2308e-03, 3.5872e-03, 3.3102e-03, ..., 5.9733e-03, 9.4355e-03, 4.3745e-03]], ...
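For context, a pybind11 "incompatible function arguments" error is raised before any CUDA code runs: the binding rejects the call because the Python type of at least one argument does not match the declared C++ type. Below is a minimal pure-Python analogue (the stub name and the im2col_step example are illustrative, not the actual binding) of how such a strict-signature check behaves:

```python
def ms_deform_attn_backward_stub(value, value_spatial_shapes, value_level_start_index,
                                 sampling_locations, attention_weights,
                                 grad_output, grad_value, grad_sampling_loc,
                                 grad_attn_weight, im2col_step):
    # pybind11 enforces the declared C++ types at call time; this stub
    # mimics that by rejecting any im2col_step that is not a plain int.
    if not isinstance(im2col_step, int):
        raise TypeError(
            "ms_deform_attn_backward(): incompatible function arguments. "
            "im2col_step must be an int, got %s" % type(im2col_step).__name__)
    return None

# Passing a float (or a 0-dim tensor) where the binding expects a plain
# int reproduces this class of failure:
try:
    ms_deform_attn_backward_stub(*(["tensor"] * 9), im2col_step=64.0)
except TypeError as exc:
    print("rejected:", exc)
```

The real fix is therefore to compare the types actually passed at ms_deform_attn_func.py line 43 against the signature printed in the error, not to debug the CUDA kernel itself.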

First of all, thank you for your contribution and for open-sourcing the code. However, after running ln -s ../detection/ops ./ and cd ops & sh make.sh (to compile deformable attention), running python test.py produced the error above. What could be the cause? Many thanks.
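Before digging into the traceback, one quick sanity check is to confirm that the freshly compiled extension is the one Python actually imports: if make.sh failed silently, or a stale build elsewhere on sys.path shadows the new one, the binding signature may not match what ms_deform_attn_func.py expects. A small sketch, assuming the extension is named MultiScaleDeformableAttention as in the ops package (adjust the name if your build differs):

```python
import importlib.util

def find_extension(name="MultiScaleDeformableAttention"):
    """Return the path of the compiled extension Python would import, or None."""
    spec = importlib.util.find_spec(name)
    if spec is None:
        return None  # not on sys.path: the compile step likely failed
    return spec.origin  # path of the .so that will actually be loaded

path = find_extension()
print("extension:", path if path else "not found")
```

If the printed path points at an old site-packages install rather than the ops directory you just built, removing or rebuilding that copy is the first thing to try.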

windygoo commented 1 year ago

I am running into the same problem. Have you solved it?

yuecao0119 commented 1 year ago

Thank you for your feedback. Please first check carefully that the compile script make.sh ran to completion without reporting any errors. Alternatively, could you share your environment information and dependency versions?
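To gather the environment details requested above, a small hypothetical helper like the following can be pasted into the issue (it degrades gracefully when PyTorch is not importable; torch.utils.collect_env offers a more complete official report):

```python
import platform
import sys

def collect_env():
    """Collect basic environment info for a bug report (illustrative helper)."""
    info = {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
    }
    try:
        import torch  # only reported if PyTorch is installed
        info["torch"] = torch.__version__
        info["cuda"] = torch.version.cuda
        info["cuda_available"] = torch.cuda.is_available()
    except ImportError:
        info["torch"] = "not installed"
    return info

for key, value in collect_env().items():
    print(f"{key}: {value}")
```

Mismatched PyTorch/CUDA versions between compile time and run time are a common source of extension-binding errors, so these fields are the first thing worth comparing.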