Closed by geoexploring 1 year ago
Hi, if the visualization result is correct but the IoU metric looks strange, the format of the GT or the prediction at test time may be incorrect.
You can check the inputs to the metric when testing. For example, we compute IoU in this line: `masks_hq`
has a range of (-inf, +inf), while `labels_ori`
has a range of [0, 255]. You can check the ranges of these variables, or visualize some intermediate results, to determine whether there is a formatting problem.
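As a sanity check, the IoU computation between the two formats above can be sketched as follows. This is a minimal illustration, not the repository's exact code: it assumes the logits are binarized at 0 and the 0–255 GT labels at 128, which is the usual convention for SAM-style outputs.

```python
import torch

def mask_iou(masks_hq: torch.Tensor, labels_ori: torch.Tensor) -> torch.Tensor:
    """IoU between predicted mask logits and a ground-truth mask.

    masks_hq: raw logits in (-inf, +inf); binarized by thresholding at 0.
    labels_ori: GT values in [0, 255]; binarized by thresholding at 128.
    (Thresholds are assumptions for illustration, not HQ-SAM's exact code.)
    """
    pred = masks_hq > 0          # logits -> boolean mask
    gt = labels_ori > 128        # 0-255 labels -> boolean mask
    inter = (pred & gt).sum(dim=(-2, -1)).float()
    union = (pred | gt).sum(dim=(-2, -1)).float()
    return inter / union.clamp(min=1)  # avoid division by zero
```

If either input is already in a different range (e.g. probabilities in [0, 1], or labels already binarized to {0, 1}), the thresholds above silently produce a wrong IoU, which is exactly the kind of format mismatch worth checking first.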
@ymq2017, thanks.
I found that the key is in that line, which requires the input image dimensions to be [1024, 1024]. If I change `input_size` to any other value, the IoU is incorrect.
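For reference, the fixed 1024x1024 input can be satisfied by resizing the longer side to 1024 and zero-padding to a square, which is the SAM-style preprocessing convention. This is a hedged sketch of that convention, not the repository's exact preprocessing function:

```python
import torch
import torch.nn.functional as F

def preprocess_to_1024(image: torch.Tensor, input_size: int = 1024) -> torch.Tensor:
    """Resize the longer side of a (C, H, W) image to `input_size`,
    then zero-pad the short side so the output is input_size x input_size.

    Sketch of SAM-style preprocessing; the actual HQ-SAM code may differ
    in normalization and padding details.
    """
    c, h, w = image.shape
    scale = input_size / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    resized = F.interpolate(image[None], size=(new_h, new_w),
                            mode="bilinear", align_corners=False)[0]
    padded = torch.zeros(c, input_size, input_size, dtype=resized.dtype)
    padded[:, :new_h, :new_w] = resized  # pad bottom/right with zeros
    return padded
```

Feeding the encoder any other spatial size breaks the assumptions baked into the pretrained image encoder (e.g. its positional embeddings), which would explain the incorrect IoU when `input_size` is changed.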
Thank you for your excellent work. I have gained a lot of inspiration.
After fine-tuning HQ-SAM on my downstream-task data, its accuracy has improved significantly.
However, the `val_iou_0` accuracy during training is very low (each training image contains only one object), even when I increase the number of training epochs to 120. What could be the reason for this? Below is the detailed training log:
Thanks.