Closed ch00486259 closed 3 years ago
Have you tried this project? I tried it several days ago but failed. I think some of the code is wrong, though maybe it is my mistake.
I did suspect that there might be something wrong with the `preprocess_true_boxes` function when I was working on it. I tried overfitting the model on a dataset of just two images. After 1500 epochs, I passed in one of the training images, which contains only 2 objects, but the model output multiple misplaced boxes. Increasing the detection IoU threshold does not seem to be the relevant fix here.
What did you do to fix this scaling-by-stride issue, and did it work?
Thanks
Sorry, I am facing some problems with this project myself, and I have no idea what the cause is...
I am also facing this issue. The thing is, the dimension of `self.anchors` is (2, 3, 2) for tiny YOLOv3 and (3, 2, 3) for YOLOv3. So for the tiny model the first dimension is 2, but the loop still runs for three iterations. I don't know what the fix is. Is there a fix for this issue?
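For illustration only (the shapes here are the ones reported above, not verified against the repo): if the tiny model's anchors only cover 2 output scales while the loop is hard-coded to 3 iterations, indexing fails on the last iteration:

```python
import numpy as np

# Shapes as reported above (assumption): tiny-YOLO anchors cover 2 output
# scales, while the loop in preprocess_true_boxes iterates over 3.
anchors = np.zeros((2, 3, 2))   # (num_scales, anchors_per_scale, w/h)

for i in range(3):              # loop fixed at 3 scales
    try:
        _ = anchors[i]
    except IndexError:
        print(f"IndexError at scale i={i}")  # fails at i=2
```

So for the tiny model, either the anchors array or the loop bound would need to agree on the number of scales.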
@ch00486259 Were you able to find a fix?
In `dataset.py`, in the function `preprocess_true_boxes`, the lines

```python
anchors_xywh[:, 2:4] = self.anchors[i]
iou_scale = utils.bbox_iou(bbox_xywh_scaled[i][np.newaxis, :], anchors_xywh)
```

have an issue: `bbox_xywh_scaled` is scaled by the stride, but `anchors_xywh[:, 2:4]` holds the anchor's actual (pixel) size, so the result of `utils.bbox_iou` must be wrong.
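The effect of this mismatch can be sketched with a minimal standalone example. The `bbox_iou` below is a simplified re-implementation standing in for `utils.bbox_iou` (an assumption, not the repo's exact code); the numbers are made up:

```python
import numpy as np

# Minimal (x, y, w, h) IoU, standing in for utils.bbox_iou (assumption).
def bbox_iou(boxes1, boxes2):
    area1 = boxes1[..., 2] * boxes1[..., 3]
    area2 = boxes2[..., 2] * boxes2[..., 3]
    lt = np.maximum(boxes1[..., :2] - boxes1[..., 2:] * 0.5,
                    boxes2[..., :2] - boxes2[..., 2:] * 0.5)
    rb = np.minimum(boxes1[..., :2] + boxes1[..., 2:] * 0.5,
                    boxes2[..., :2] + boxes2[..., 2:] * 0.5)
    inter = np.prod(np.clip(rb - lt, 0.0, None), axis=-1)
    return inter / (area1 + area2 - inter)

stride = 8.0
gt_pixels = np.array([100.0, 100.0, 40.0, 40.0])  # ground truth in pixels
gt_scaled = gt_pixels / stride                    # what bbox_xywh_scaled holds

# Anchor centered on the same grid cell, but width/height still in pixels,
# which is exactly the mismatch described above.
anchor_mixed = np.array([12.5, 12.5, 40.0, 40.0])
print(bbox_iou(gt_scaled, anchor_mixed))          # tiny IoU: the scales disagree

anchor_scaled = anchor_mixed.copy()
anchor_scaled[2:4] /= stride                      # anchor w/h in grid units
print(bbox_iou(gt_scaled, anchor_scaled))         # 1.0 for this perfect match
```

With mixed scales the anchor dwarfs the grid-unit box and the IoU collapses, so anchor assignment during training is effectively arbitrary.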
I agree that the IoU should be computed between the original bbox and the anchors. I have seen other repos that do not use this scaled bbox when computing the IoU.
`utils.bbox_iou` calculates the IoU from the scaled values, and the anchor's position is already scaled (it is based on `bbox_xywh_scaled`), so we just need to scale the anchor's width/height as well.
I think the line below
https://github.com/hunglc007/tensorflow-yolov4-tflite/blob/9f16748aa3f45ff240608da4bd9b1216a29127f5/core/dataset.py#L335
should be modified as follows:

```python
anchors_xywh[:, 2:4] = self.anchors[i] / self.strides[i]
```
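One way to sanity-check this fix (a standalone sketch with made-up numbers, not code from the repo): IoU is scale-invariant, so matching a stride-scaled box against a stride-scaled anchor gives the same score as matching both in pixel space, which is what anchor assignment is supposed to measure.

```python
def iou_wh(w1, h1, w2, h2):
    # IoU of two boxes sharing the same center, given only width/height
    # (in preprocess_true_boxes both boxes sit on the same grid cell).
    inter = min(w1, w2) * min(h1, h2)
    return inter / (w1 * h1 + w2 * h2 - inter)

stride = 16.0
box_w, box_h = 60.0, 30.0   # hypothetical ground-truth size in pixels
anc_w, anc_h = 62.0, 45.0   # hypothetical anchor size in pixels

# Dividing BOTH the box and the anchor by the stride preserves the IoU:
iou_grid = iou_wh(box_w / stride, box_h / stride, anc_w / stride, anc_h / stride)
iou_pixel = iou_wh(box_w, box_h, anc_w, anc_h)
print(abs(iou_grid - iou_pixel) < 1e-12)  # True
```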
I'm fine-tuning yolov4-416 to detect a single class, and I'll comment on whether the modification works well once the results come in.
@ryj0902 Hi, have you solved the problem by applying the modification you gave?
@yl1994yl Well, the results were slightly different from what I expected, but I don't know whether that is because my solution is wrong. Specifically, I trained to detect faces on the WIDER FACE dataset; it detects large faces satisfactorily, but for small faces the bounding box is stretched horizontally. The effect becomes clearer when comparing against a run without the above modification applied.