xjoshramos opened 4 years ago
feature shape: torch.Size([1, 256, 100, 136])
feature shape: torch.Size([1, 256, 50, 68])
feature shape: torch.Size([1, 256, 25, 34])
feature shape: torch.Size([1, 256, 13, 17])
feature shape: torch.Size([1, 256, 7, 9])
Traceback (most recent call last):
  File "detection/demo_retinanet.py", line 185, in <module>
    main(arguments)
  File "detection/demo_retinanet.py", line 147, in main
    mask, box, class_id = grad_cam(inputs)  # cam mask
  File "detectors/Grad-CAM.pytorch/detection/grad_cam_retinanet.py", line 61, in __call__
    output = self.net.predict([inputs])
  File "detectors/detectron2/detectron2/modeling/meta_arch/retinanet.py", line 473, in predict
    results = self.inference(box_cls, box_delta, anchors, images.image_sizes)
  File "detectors/detectron2/detectron2/modeling/meta_arch/retinanet.py", line 316, in inference
    anchors, pred_logits_per_image, deltas_per_image, tuple(image_size)
  File "detectors/detectron2/detectron2/modeling/meta_arch/retinanet.py", line 428, in inference_single_image
    predicted_boxes = self.box2box_transform.apply_deltas(box_reg_i, anchors_i.tensor)
  File "detectors/detectron2/detectron2/modeling/box_regression.py", line 100, in apply_deltas
    pred_ctr_x = dx * widths[:, None] + ctr_x[:, None]
RuntimeError: The size of tensor a (25) must match the size of tensor b (0) at non-singleton dimension 1
    gradient = self.gradient[feature_level][0].cpu().data.numpy()  # [C,H,W]
IndexError: list index out of range
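The IndexError suggests that the gradient list collected by the backward hooks has fewer entries than the `feature_level` index being looked up. A minimal sketch of the same failure (the variable names here are illustrative, not the project's actual attributes):

```python
# Hypothetical: if the backward hooks never fired (e.g. the forward pass
# errored out first, as in the RuntimeError above), the gradient list
# stays empty, and indexing it by feature level raises IndexError.
gradients = []      # stand-in for self.gradient; no hooks fired
feature_level = 2   # illustrative FPN level index

try:
    g = gradients[feature_level][0]
except IndexError as e:
    print("index error:", e)   # list index out of range
```

This is consistent with the two errors being related: once `predict` fails, no backward pass runs, so the hook storage is never populated.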
@xjoshramos Hi, I cannot reproduce this on my side. Are you seeing the error on the sample images shipped with the project, or have you modified the code?
feature shape: torch.Size([1, 256, 100, 136])
feature shape: torch.Size([1, 256, 50, 68])
feature shape: torch.Size([1, 256, 25, 34])
feature shape: torch.Size([1, 256, 13, 17])
feature shape: torch.Size([1, 256, 7, 9])
Traceback (most recent call last):
  File "detection/demo_retinanet.py", line 185, in <module>
    main(arguments)
  File "detection/demo_retinanet.py", line 147, in main
    mask, box, class_id = grad_cam(inputs)  # cam mask
  File "detectors/Grad-CAM.pytorch/detection/grad_cam_retinanet.py", line 61, in __call__
    output = self.net.predict([inputs])
  File "detectors/detectron2/detectron2/modeling/meta_arch/retinanet.py", line 473, in predict
    results = self.inference(box_cls, box_delta, anchors, images.image_sizes)
  File "detectors/detectron2/detectron2/modeling/meta_arch/retinanet.py", line 316, in inference
    anchors, pred_logits_per_image, deltas_per_image, tuple(image_size)
  File "detectors/detectron2/detectron2/modeling/meta_arch/retinanet.py", line 428, in inference_single_image
    predicted_boxes = self.box2box_transform.apply_deltas(box_reg_i, anchors_i.tensor)
  File "detectors/detectron2/detectron2/modeling/box_regression.py", line 100, in apply_deltas
    pred_ctr_x = dx * widths[:, None] + ctr_x[:, None]
RuntimeError: The size of tensor a (25) must match the size of tensor b (0) at non-singleton dimension 1
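The RuntimeError says one operand in `dx * widths[:, None]` has 25 elements while the other has 0, which points at `anchors_i.tensor` being empty when `apply_deltas` is called. A minimal NumPy sketch of the same broadcasting failure (the shapes are assumptions inferred from the error message, not taken from detectron2 internals):

```python
import numpy as np

# Hypothetical shapes: 25 regression deltas per level, but the anchor
# tensor for this image ended up empty (0 anchors), so the element-wise
# product in apply_deltas cannot broadcast.
dx = np.zeros((25, 1))    # per-box x-deltas
widths = np.zeros((0,))   # stand-in for an empty anchors_i.tensor
ctr_x = np.zeros((0,))

try:
    pred_ctr_x = dx * widths[:, None] + ctr_x[:, None]
except ValueError as e:
    print("broadcast error:", e)
```

If that diagnosis is right, the thing to check is why anchor generation produced no anchors for these feature maps (e.g. a mismatch between the configured anchor generator and the feature levels being fed to it).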