Open pikouli opened 3 years ago
Not sure if this applies to your case, but in my case, this is because the width and height of the predicted box are negative, causing the box in XYXY format to degenerate.
In the original code, the predicted box size is normalized by a sigmoid, so this error does not occur. If your code skips that step, you need to ensure the box is valid before computing the GIoU.
My current fix is to take width = exp(width), which guarantees a positive value and prevents degeneration.
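A minimal sketch of that idea, assuming DETR's (cx, cy, w, h) box convention and PyTorch. The helper name is hypothetical; it forces width/height positive (via clamping here, `exp` would work the same way) so the converted xyxy box can never trip the `boxes1[:, 2:] >= boxes1[:, :2]` assert:

```python
import torch

def box_cxcywh_to_xyxy_safe(boxes):
    """Convert (cx, cy, w, h) boxes to (x0, y0, x1, y1), forcing w, h > 0.

    Hypothetical helper: clamps width/height to a small positive value so the
    resulting xyxy box never degenerates (i.e. x1 >= x0 and y1 >= y0 always hold).
    """
    cx, cy, w, h = boxes.unbind(-1)
    w = w.clamp(min=1e-6)  # alternative: w = w.exp() if the net predicts log-widths
    h = h.clamp(min=1e-6)
    return torch.stack([cx - 0.5 * w, cy - 0.5 * h,
                        cx + 0.5 * w, cy + 0.5 * h], dim=-1)

# A negative predicted width would otherwise produce x1 < x0:
pred = torch.tensor([[0.5, 0.5, -0.2, 0.3]])
xyxy = box_cxcywh_to_xyxy_safe(pred)
assert (xyxy[:, 2:] >= xyxy[:, :2]).all()  # now valid input for generalized_box_iou
```

Clamping only masks the symptom, though; if the model routinely predicts negative sizes, constraining the output (sigmoid, as in the original code) is the cleaner fix.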
Did you find a solution? I'm facing the same problem while trying to do instance segmentation.
Hi, did you fix the problem? I'm getting the same error with a custom dataset in COCO format.
```
File "/home/cqupt/fxh/detr-main/models/matcher.py", line 74, in forward
    cost_giou = -generalized_box_iou(box_cxcywh_to_xyxy(out_bbox), box_cxcywh_to_xyxy(tgt_bbox))
File "/home/cqupt/fxh/detr-main/util/box_ops.py", line 51, in generalized_box_iou
    assert (boxes1[:, 2:] >= boxes1[:, :2]).all()
RuntimeError: CUDA error: device-side assert triggered
terminate called after throwing an instance of 'std::runtime_error'
  what(): NCCL error in: /pytorch/torch/lib/c10d/../c10d/NCCLUtils.hpp:136, unhandled cuda error, NCCL version 2.7.8
```
I ran into this error as well. Can you tell me how to solve it? Thanks a lot.
Hi, I am trying to use a custom dataset but the following error gets triggered:
It seems to come from my dataset, as boxes2 holds the target values. When printing the tgt_box used in the matcher, I get the following tensor, which does not look right:
I have tried simplifying the dataset and annotations. I am now using a dataset with two images and the following instance annotations:
and these panoptic annotations:
I am using torchvision 0.6.0 and torch 1.5.0.
I can't get my head around what the problem could be. The image sizes correspond to the real image sizes, and the bounding boxes seem non-degenerate and do not exceed the image bounds. I am not sure how to compute the area, though. Do you have any clue what is triggering the error?
For instance, must there be at least one pixel of the category's color inside each bounding box? What are the constraints on the dataset?