Open luoluo123123123123 opened 1 year ago
In the annotations, we use relative coordinates, so when you want to use an annotation, you can multiply it by the width and height of the image. Of course, if we train on 512×512, DG-YOLO may get worse on 416×416, so we adopt multiscale training to alleviate the problem. This has nothing to do with the domain.
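A minimal sketch of the conversion described above, assuming the usual YOLO-style relative box format `(cx, cy, w, h)` in `[0, 1]` (the exact field order in this repo's annotation files is an assumption):

```python
def rel_to_abs(box, img_w, img_h):
    """Convert a relative YOLO box (cx, cy, w, h) in [0, 1]
    to absolute pixel corners (x1, y1, x2, y2) by multiplying
    by the image width and height."""
    cx, cy, w, h = box
    x1 = (cx - w / 2) * img_w
    y1 = (cy - h / 2) * img_h
    x2 = (cx + w / 2) * img_w
    y2 = (cy + h / 2) * img_h
    return x1, y1, x2, y2

# A centered box covering half of each side of a 512x512 image:
print(rel_to_abs((0.5, 0.5, 0.5, 0.5), 512, 512))  # (128.0, 128.0, 384.0, 384.0)
```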
Thank you! Does that mean that if the target images are much bigger than the source images, we can do the same?
Yes, but you had better resize the target images to the same scale as the source images.
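One way to sketch that resize, assuming "the same scale" means matching the longer side to the source training resolution (512 here is an assumption based on the sizes discussed above). Aspect-ratio-preserving resizing keeps relative YOLO annotations valid, since the boxes and the image scale together:

```python
def resize_to_source_scale(size, target_long_side=512):
    """Given (width, height) of a target-domain image, return the new
    size whose longer side equals target_long_side, preserving the
    aspect ratio. Relative annotations stay correct under this resize
    because both the image and the boxes scale by the same factor."""
    w, h = size
    scale = target_long_side / max(w, h)
    return round(w * scale), round(h * scale)

print(resize_to_source_scale((2048, 1024)))  # (512, 256)
```

The actual pixel resampling can then be done with any image library (e.g. Pillow's `Image.resize`) using the size this returns.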
If we adopt WQT to create val_8 (which does not appear in the training set):
1. How do we keep the annotations correct when we create val_8? If we set the WQT output size to 512, the output is a square, but the original picture may be 512×1024, a rectangle, so we cannot get the correct annotations.
2. Is there any requirement on the size of the target-domain pictures? (e.g. if we use 512×512, may DG-YOLO get worse than with 416×416?)
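On question 1, if the rectangular picture is letterboxed into a square (scaled to fit, then padded), the relative annotations can be corrected for the padding. This is only a sketch of that adjustment under the letterbox assumption; whether WQT pads or stretches is not stated in the thread:

```python
def letterbox_adjust(box, img_w, img_h, out_size=512):
    """Adjust a relative YOLO box (cx, cy, w, h) when an (img_w, img_h)
    image is letterboxed into an out_size x out_size square: the image
    is scaled so its longer side fits, then padded symmetrically."""
    scale = out_size / max(img_w, img_h)
    new_w, new_h = img_w * scale, img_h * scale
    pad_x = (out_size - new_w) / 2
    pad_y = (out_size - new_h) / 2
    cx, cy, w, h = box
    # Map relative coords on the original image to relative coords
    # on the padded square.
    cx = (cx * new_w + pad_x) / out_size
    cy = (cy * new_h + pad_y) / out_size
    w = w * new_w / out_size
    h = h * new_h / out_size
    return cx, cy, w, h

# A 1024x512 picture letterboxed to 512x512: the box height halves
# relative to the padded square, the width fraction is unchanged.
print(letterbox_adjust((0.5, 0.5, 0.5, 0.5), 1024, 512))
# (0.5, 0.5, 0.5, 0.25)
```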