Open woodszp opened 1 year ago
If you are using a Swin Transformer, you may need to modify the `forward` method by appending the following code at the end:
```python
def forward(self, input_tensor, targets, eigen_smooth=False):
    ...
    outputs = self.activations_and_grads(input_tensor)
    # Swin returns token features of shape [B, L, C]; apply the model's own
    # norm, pooling, and classification head to turn them into logits.
    out_norm = self.model.norm(outputs)
    out_pool = torch.flatten(self.model.avgpool(out_norm.transpose(1, 2)), 1)
    outputs = self.model.head(out_pool)
```
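As an alternative to patching the library, pytorch-grad-cam also accepts a `reshape_transform` callable that converts Swin's `[B, L, C]` token output into the CNN-style `[B, C, H, W]` map Grad-CAM expects. A minimal sketch; the 7×7 grid assumes a 224×224 input, so adjust `height`/`width` for other resolutions:

```python
import torch

def reshape_transform(tensor, height=7, width=7):
    # Swin emits [B, L, C] tokens; fold L back into an H x W spatial grid.
    result = tensor.reshape(tensor.size(0), height, width, tensor.size(2))
    # Move channels to the second dimension: [B, C, H, W].
    return result.permute(0, 3, 1, 2)

# Example: 49 tokens with 768 channels -> [1, 768, 7, 7]
tokens = torch.randn(1, 49, 768)
print(reshape_transform(tokens).shape)  # torch.Size([1, 768, 7, 7])
```

You would then pass it when constructing the CAM object, e.g. `GradCAM(model=model, target_layers=target_layers, reshape_transform=reshape_transform)`; the exact `target_layers` choice depends on your Swin variant.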
Thank you for sharing the code. Indeed, the output that has not yet been flattened must not be ignored — without this step I hit the error below:
```
  File "/home/xxx/Project/latested/camvisual.py", line 227, in <module>
    grayscale_cam = cam(input_tensor=img_tensor, targets=target_category)
  File "/home/xxx/miniconda3/lib/python3.10/site-packages/pytorch_grad_cam/base_cam.py", line 188, in __call__
    return self.forward(input_tensor,
  File "/home/xxx/miniconda3/lib/python3.10/site-packages/pytorch_grad_cam/base_cam.py", line 84, in forward
    loss.backward(retain_graph=True)
  File "/home/xxx/miniconda3/lib/python3.10/site-packages/torch/_tensor.py", line 488, in backward
    torch.autograd.backward(
  File "/home/xxx/miniconda3/lib/python3.10/site-packages/torch/autograd/__init__.py", line 190, in backward
    grad_tensors_ = _make_grads(tensors, grad_tensors_, is_grads_batched=False)
  File "/home/xxx/miniconda3/lib/python3.10/site-packages/torch/autograd/__init__.py", line 85, in _make_grads
    raise RuntimeError("grad can be implicitly created only for scalar outputs")
RuntimeError: grad can be implicitly created only for scalar outputs
```
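This RuntimeError is PyTorch's general rule: calling `.backward()` with no argument only works on a scalar. If the output reaching the loss is still an unreduced token tensor (because the norm/pool/head step was skipped), the backward call fails. A minimal, model-free reproduction of both the error and the fix:

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x * 2                  # non-scalar output, shape [3]
try:
    y.backward()           # implicit grad is only allowed for scalars
except RuntimeError as e:
    print(e)               # grad can be implicitly created only for scalar outputs

loss = y.sum()             # reduce to a scalar first
loss.backward()
print(x.grad)              # tensor([2., 2., 2.])
```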