I wrote a simple loss function in {eval_rcnn.py → eval_one_epoch_joint() → for loop (for data in dataloader: ...)}. I enabled gradient propagation by replacing "torch.no_grad()" with "torch.set_grad_enabled(True)" and setting "inputs.requires_grad = True". Here is how I use my loss:
```
Traceback (most recent call last):
  File "/home/jqwu/Codes/PointRCNN/tools/simple_attack.py", line 974, in <module>
    eval_single_ckpt(root_result_dir)
  File "/home/jqwu/Codes/PointRCNN/tools/simple_attack.py", line 832, in eval_single_ckpt
    eval_one_epoch(model, test_loader, epoch_id, root_result_dir, logger)
  File "/home/jqwu/Codes/PointRCNN/tools/simple_attack.py", line 759, in eval_one_epoch
    ret_dict = eval_one_epoch_joint(model, dataloader, epoch_id, result_dir, logger)
  File "/home/jqwu/Codes/PointRCNN/tools/simple_attack.py", line 580, in eval_one_epoch_joint
    loss_adv_cls.backward()
  File "/home/jqwu/.conda/envs/PointRCNN-py37/lib/python3.7/site-packages/torch/tensor.py", line 195, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/jqwu/.conda/envs/PointRCNN-py37/lib/python3.7/site-packages/torch/autograd/__init__.py", line 99, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Expected isFloatingType(grads[i].type().scalarType()) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```
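For context, the gradient-enabling setup described above can be sketched minimally like this (the model, input shapes, and loss are illustrative stand-ins, not the real PointRCNN code). In this reduced form, where the loss is built only from floating-point tensors, ".backward()" succeeds:

```python
import torch

# Minimal sketch of the described setup: gradients enabled inside an
# evaluation loop, with a floating-point input marked as requiring grad.
model = torch.nn.Linear(3, 2)
model.eval()

with torch.set_grad_enabled(True):       # replaces the original torch.no_grad()
    inputs = torch.randn(4, 3)
    inputs.requires_grad = True          # only valid on floating-point leaf tensors
    scores = model(inputs)
    loss_adv_cls = scores.mean()         # stand-in for the adversarial loss
    loss_adv_cls.backward()

print(inputs.grad.shape)                 # torch.Size([4, 3])
```

Since this minimal version works, the non-floating gradient type presumably enters somewhere inside the real forward pass or loss computation.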
The ".backward()" call ends with the RuntimeError shown above. I have tried the following solutions, but they didn't work in my case:
I am using CUDA 10.1, torch 1.4, and Python 3.7. I have spent a lot of time on this. Has anyone solved a similar problem? 😿
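One debugging sketch that may help narrow this down (the helper and tensor names here are my own illustrations, not part of the original code): since the error complains that a gradient has a non-floating scalar type, checking the dtype of the loss and of every tensor that feeds it, right before calling ".backward()", can locate where an integer-typed tensor enters the graph:

```python
import torch

# Hypothetical dtype check: fail early, before backward(), if any tensor
# involved in the loss is not floating-point.
def assert_float(name, t):
    if not t.is_floating_point():
        raise TypeError(f"{name} has non-floating dtype {t.dtype}")

inputs = torch.randn(4, 3, requires_grad=True)
loss_adv_cls = (inputs ** 2).mean()        # stand-in for the real loss

assert_float("inputs", inputs)
assert_float("loss_adv_cls", loss_adv_cls)
loss_adv_cls.backward()                    # succeeds once all dtypes are float
```

If such a check fires on an intermediate tensor, casting it with ".float()" before it enters the loss (rather than after) is one thing worth trying.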