Closed: fzualan2023 closed this issue 2 months ago
A1: We have revised the comments of `LESPS_Tloss` to the default value of 10; we are very sorry for the typo. The rationale for `LESPS_Tloss` is explained in the last paragraph of Section 4 of our paper, and the experiments determining the optimal `LESPS_Tloss` are presented in Section 5.2.2: "To determine the exact epoch number to start label evolution, we employ a network-independent loss-based threshold $T_{loss}$ instead of a network-dependent epoch-based threshold to start label evolution. That is, we start the label evolution when the loss of the last epoch is lower than $T_{loss}$, and then perform label update every $f$ epochs until the end of training. Note that, our loss-based threshold is a coarse threshold to ensure that networks attend to targets instead of background clutter."
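The quoted rule can be expressed as a small scheduling function. This is an illustrative sketch, not the repository's actual training loop; the function name and the example values of `T_loss` and `f` are assumptions for demonstration:

```python
def label_evolution_schedule(losses, T_loss=10.0, f=5):
    """Return the epochs at which labels would be updated.

    Label evolution starts once the previous epoch's loss drops below
    T_loss, then a label update is performed every f epochs until the
    end of training. Illustrative sketch only, not the released code.
    """
    update_epochs, start = [], None
    for epoch in range(1, len(losses)):
        # Trigger: last epoch's loss fell below the coarse threshold.
        if start is None and losses[epoch - 1] < T_loss:
            start = epoch
        # Once triggered, update every f epochs.
        if start is not None and (epoch - start) % f == 0:
            update_epochs.append(epoch)
    return update_epochs

# Toy loss curve: the loss first drops below T_loss=10 at epoch index 2,
# so evolution starts at epoch 3 and repeats every 2 epochs.
print(label_evolution_schedule([12, 11, 9, 8, 7, 6, 5, 4, 3, 2],
                               T_loss=10.0, f=2))
```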
A2: In the training code, we calculate and print the pixel accuracy (PA) and intersection over union (IoU) between the evolved mask and the GT mask as follows:
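The metric computation can be reproduced with a short stand-alone function. This is an illustrative sketch (not the repository's exact code), assuming binary NumPy masks:

```python
import numpy as np

def pixel_accuracy_and_iou(pred, gt):
    """Compute pixel accuracy (PA) and intersection over union (IoU)
    between a binary predicted mask and a binary ground-truth mask.
    Illustrative re-implementation, not the repository's exact code."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    pa = (pred == gt).mean()                   # fraction of matching pixels
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union if union > 0 else 1.0  # two empty masks match fully
    return pa, iou

# Toy example: two 4x4 masks that overlap in one of three foreground pixels.
pred = np.zeros((4, 4)); pred[0, 0] = pred[0, 1] = 1
gt = np.zeros((4, 4)); gt[0, 1] = gt[0, 2] = 1
pa, iou = pixel_accuracy_and_iou(pred, gt)   # pa = 14/16, iou = 1/3
```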
To skip this check, you can copy the point annotations (`masks_centroid` / `masks_coarse`) into the mask annotation folder (`masks`) and ignore the intermediate PA and IoU results.
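On a Unix-like system this copy is a single command. The dataset layout below is illustrative; substitute your actual dataset root and file names:

```shell
# Illustrative layout: replace "SIRST3" with your dataset root.
mkdir -p SIRST3/masks_centroid
touch SIRST3/masks_centroid/000001.png   # stand-in for a point annotation

# Reuse the centroid annotations as the mask annotations.
cp -r SIRST3/masks_centroid SIRST3/masks

ls SIRST3/masks
```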
Dear LESPS Developers,
First of all, thank you for your excellent work on the LESPS framework. I have been working with it on the ACM and SIRST3 datasets and have a couple of questions that I hope you can help clarify.
**LESPS_Tloss Setting During Training**

In the training process of LESPS, there seems to be some confusion about the setting of `LESPS_Tloss`. The comments suggest a default value of 0.5, but the initial value in the source code is 10. Moreover, in my experiments on ACM and SIRST3 with centroid annotations, it took around 300 epochs for the loss to drop to 1. Could you please explain the rationale behind the setting of `LESPS_Tloss`? Should we wait for the loss to reach a specific low value before starting label evolution, or is there another criterion we should follow?
**Missing Masks During SIRST3 Data Construction**

Based on my understanding, after sufficient training with point annotations, the framework evolves pseudo masks for further refinement, which suggests that full masks might not be necessary when centroid annotations are provided. However, when constructing the SIRST3 dataset I did not include full masks (only centroid annotations), which led to the following error:
Could you please clarify if full masks are mandatory even in the centroid-only training process, or if there's a recommended workaround to proceed without them?
Thank you for your time and assistance.