bfreskura opened this issue 7 years ago
@barty777 Did you verify the same result for the crop parameter (since presumably that also induces an offset in the bounding box)? I would guess that in the case of crops the shift is small enough (32px by default) that an unadjusted bounding box would still mostly contain the object.
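To put a rough number on that guess, here is a quick back-of-the-envelope check (plain Python; the function is mine, purely illustrative) of how much of an object its stale, unshifted box still covers after the content moves by the default 32px:

```python
def shifted_overlap_fraction(box_w, box_h, dx, dy):
    """Fraction of an object still covered by its original (unshifted)
    bounding box after the image content moves by (dx, dy) pixels."""
    overlap_w = max(0, box_w - abs(dx))
    overlap_h = max(0, box_h - abs(dy))
    return (overlap_w * overlap_h) / float(box_w * box_h)

# A 100x100 px object with the default 32 px horizontal shift:
print(shifted_overlap_fraction(100, 100, 32, 0))  # -> 0.68
```

So a medium-sized object would still be roughly two-thirds covered, while anything narrower than 32px could fall outside its box entirely.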
However, this is the augmentation config provided with the example KITTI data, and that doesn't seem to be adversely affected - it still trains well. Unless I'm missing something?
This would certainly corroborate reports from people on here who have said that manually augmenting their data produced better results. Otherwise, if you could rely on the DetectNet augmentations, there should be no need to manually generate datasets with flips etc.
Let's say that I have the augmentation layer written as below:
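Something along these lines, modeled on the augmentation block in the stock detectnet_network.prototxt that ships with DIGITS (the values shown are the shipped defaults as far as I can tell):

```
layer {
  name: "train_transform"
  type: "DetectNetTransformation"
  bottom: "data"
  bottom: "label"
  top: "transformed_data"
  top: "transformed_label"
  transform_param {
    mean_value: 127
  }
  detectnet_augmentation_param {
    crop_prob: 1.0
    shift_x: 32        # the 32px default crop shift mentioned above
    shift_y: 32
    scale_prob: 0.4
    scale_min: 0.8
    scale_max: 1.2
    flip_prob: 0.5
    rotation_prob: 0.0
    max_rotate_degree: 5.0
    hue_rotation_prob: 0.8
    hue_rotation: 30.0
    desaturation_prob: 0.8
    desaturation_max: 0.8
  }
  include { phase: TRAIN }
}
```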
I'm doing object detection and I'm not sure whether the corresponding transformations are also applied to the bounding boxes. I'm referring to scale, rotation, and flip in particular, because they change the object's location in the image. I ran some experiments and it seems that the bounding boxes are not affected by these transformations, which leads to poor results during training. When I disabled the above-mentioned transformations, the results improved significantly.
Can someone explain this in more detail?
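For reference, this is the kind of label update I would expect a flip augmentation to perform (a minimal sketch in plain Python; the function name and box format are mine, not from the DetectNet code):

```python
def flip_bbox_horizontal(bbox, image_width):
    """Mirror a bounding box to match a horizontally flipped image.
    Boxes are (x_min, y_min, x_max, y_max) in continuous pixel coordinates."""
    x_min, y_min, x_max, y_max = bbox
    return (image_width - x_max, y_min, image_width - x_min, y_max)

# Example: a box hugging the left edge of a 1248px-wide frame
# must end up hugging the right edge after the flip.
print(flip_bbox_horizontal((10, 50, 110, 150), 1248))  # -> (1138, 50, 1238, 150)
```

If the transform layer flips only the pixels and leaves the label coordinates alone, the network is trained against boxes that point at the mirror position of each object, which would explain the degraded results I saw.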