Closed · makangzhe closed this issue 1 year ago
Well, please give me the values of `boxes` & `boxes_scaled` for the specific failure case at this line (https://github.com/zhiyuanyou/SAFECount/blob/main/models/utils.py#L87).
`boxes`:

```
tensor([[825.3125, 233.9375, 856.1875, 262.4375],
        [312.3125, 165.0625, 377.6250, 203.0625],
        [837.1875, 233.9375, 868.0625, 245.8125]], device='cuda:0', dtype=torch.float64)
```

`boxes_scaled`:

```
tensor([[206.3281,  58.4844, 214.0469,  65.6094],
        [ 78.0781,  41.2656,  94.4062,  50.7656],
        [209.2969,  58.4844, 217.0156,  61.4531]], device='cuda:0', dtype=torch.float64)
```
For your data, I do not think anything is wrong. What are h, w (https://github.com/zhiyuanyou/SAFECount/blob/main/models/utils.py#L86)?
The h, w of feat are (152, 152).
https://github.com/zhiyuanyou/SAFECount/blob/de067f9f1ca2caea432dd4c2e6d9ec9b2a169ebf/models/utils.py#L93C1-L93C1
When the code reaches this line, the value of `boxes_scaled` is

```
tensor([[206.,  58., 152.,  66.],
        [ 78.,  41.,  95.,  51.],
        [209.,  58., 152.,  62.]], device='cuda:0', dtype=torch.float64)
```
After clamping, the value in column 2 (`y_br`, capped at h = 152) is smaller than the value in column 0 (`y_tl`) for rows 0 and 2, so the tensor gets sliced to size 0 along dim 2.
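The empty slice can be reproduced with plain NumPy standing in for the torch feature map (the box values are taken from the debug output above; the channel count 64 is just an arbitrary choice for the sketch):

```python
import numpy as np

# Stand-in for the 1 x c x h x w feature map with h = w = 152.
feat = np.zeros((1, 64, 152, 152))

# Failing row of boxes_scaled: only the bottom-right side was clamped,
# so y_br = 152 ended up smaller than y_tl = 206.
y_tl, x_tl, y_br, x_br = 206, 58, 152, 66

# start (206) > stop (153) along dim 2 -> an empty slice, not an error.
feat_box = feat[:, :, y_tl:(y_br + 1), x_tl:(x_br + 1)]
print(feat_box.shape)  # (1, 64, 0, 9)
```

Python slicing silently returns an empty range when start exceeds stop, which is why the code only notices via the `feat_box.shape[2] == 0` check rather than raising an exception.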
This is not a bug in the code. It happens because your annotation is outside the image.
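To catch such annotations before training, a simple sanity check over the boxes can help. This is a hypothetical helper (not part of the SAFECount repo), assuming the same `[y_tl, x_tl, y_br, x_br]` box order that `crop_roi_feat` expects:

```python
def check_boxes(boxes, img_h, img_w):
    """Return the indices of boxes that fall outside an img_h x img_w frame
    or are degenerate (top-left not strictly above/left of bottom-right).

    boxes: iterable of [y_tl, x_tl, y_br, x_br] in pixel coordinates.
    """
    bad = []
    for i, (y_tl, x_tl, y_br, x_br) in enumerate(boxes):
        if (y_tl < 0 or x_tl < 0 or y_br > img_h or x_br > img_w
                or y_tl >= y_br or x_tl >= x_br):
            bad.append(i)
    return bad

# Checking the scaled boxes from the debug output against the
# 152 x 152 feature map flags exactly the two rows that later
# produce empty crops.
boxes_scaled = [
    [206.3281, 58.4844, 214.0469, 65.6094],
    [78.0781, 41.2656, 94.4062, 50.7656],
    [209.2969, 58.4844, 217.0156, 61.4531],
]
print(check_boxes(boxes_scaled, img_h=152, img_w=152))  # [0, 2]
```

Running the same check on the raw annotations against the true input size fed to the network (after any resize) would reveal whether the coordinates and the image dimensions are mismatched.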
Thanks for your reply. I solved this problem by adding these two lines of code here https://github.com/zhiyuanyou/SAFECount/blob/de067f9f1ca2caea432dd4c2e6d9ec9b2a169ebf/models/utils.py#L92 . May I ask whether these lines will affect the training results much?
```python
boxes_scaled[:, 0] = torch.clamp_max(boxes_scaled[:, 0], h - 1)
boxes_scaled[:, 1] = torch.clamp_max(boxes_scaled[:, 1], w - 1)
```
I think it is right for training.
Thanks for your amazing work! I tried to train this code on my own dataset. I generated train.json and test.json and have checked that the bboxes and points are correct. My original image size is (1920, 1080). When I train this code, I get this error:
I debugged this code and found the error happens in the function `crop_roi_feat` in models/utils:
```python
def crop_roi_feat(feat, boxes, out_stride):
    """
    feat: 1 x c x h x w
    boxes: m x 4, 4: [y_tl, x_tl, y_br, x_br]
    """
    _, _, h, w = feat.shape
    boxes_scaled = boxes / out_stride
    boxes_scaled[:, :2] = torch.floor(boxes_scaled[:, :2])  # y_tl, x_tl: floor
    boxes_scaled[:, 2:] = torch.ceil(boxes_scaled[:, 2:])  # y_br, x_br: ceil
    boxes_scaled[:, :2] = torch.clamp_min(boxes_scaled[:, :2], 0)
    boxes_scaled[:, 2] = torch.clamp_max(boxes_scaled[:, 2], h)
    boxes_scaled[:, 3] = torch.clamp_max(boxes_scaled[:, 3], w)
    feat_boxes = []
    for idx_box in range(0, boxes.shape[0]):
        y_tl, x_tl, y_br, x_br = boxes_scaled[idx_box]
        y_tl, x_tl, y_br, x_br = int(y_tl), int(x_tl), int(y_br), int(x_br)
        feat_box = feat[:, :, y_tl : (y_br + 1), x_tl : (x_br + 1)]
        if feat_box.shape[2] == 0:
            continue
        feat_boxes.append(feat_box)
    return feat_boxes
```
In this function, `y_tl >= (y_br + 1)` is the reason `feat_box.shape[2] == 0`. How can I solve this? Please help me!