Closed tim0dd closed 1 year ago
Hi, thank you so much for this issue. I followed the setup from the previous methods: the two popular image sizes they used are 256 or 352, so I decided to build the model based on those sizes. I know that if I had kept the same size as the original, the results could be lower, but not by much. I also mention in the implementation details of my publication that I used size 256 and followed the setup of the previous baselines, but I will take note of this and may include it in this repo for better understanding. Once again, thank you so much for your support :smiling_face_with_three_hearts:
In your paper you compare against FCB SwinV2, which uses a resolution of 384x384, and you use this comparison to claim state-of-the-art performance. That claim is simply not accurate.
Hi, I see this issue. I will take a look, run the benchmark again, and let you know. Thank you so much!
Thanks a lot, I would be very interested in the results! An easy way might be to apply an upsampling algorithm from a library to your segmentation results and measure the metrics on the upscaled output. Of course, this may produce worse metrics than retraining the model. I hope you understand I don't mean to belittle your work; it otherwise seems great.
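A minimal sketch of that evaluation idea, assuming binary masks stored as NumPy boolean arrays. Note that 256 to 384 is a factor of 1.5, so a real re-evaluation would need a library resizer (e.g. nearest-neighbor interpolation from Pillow or OpenCV); the integer-factor upsampler here is only meant to illustrate how the upscaled prediction would be scored against the full-resolution ground truth:

```python
import numpy as np

def upsample_mask(mask: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbor upsampling of a binary mask by an integer factor.
    For non-integer factors (e.g. 256 -> 384), use a library resizer instead."""
    return mask.repeat(factor, axis=0).repeat(factor, axis=1)

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2 * inter / total if total > 0 else 1.0

# Evaluate a low-resolution prediction against full-resolution ground truth:
# pred_full = upsample_mask(pred_lowres, factor)
# score = iou(pred_full, gt_fullres)
```

Scoring the upscaled prediction against the original-resolution annotation penalizes the boundary detail lost at 256x256, which is exactly the effect the lower evaluation resolution hides.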
Hi, I have tested again, and the results only drop slightly, by 0.005-0.008 in IoU score. I will take note of this and try to update the benchmark in my later works. Anyway, thank you so much for raising this issue; it was my miss, and I will post more updates.
That's impressive, thanks!
You seem to be resizing images, as well as segmentation annotations, to a much lower resolution for your evaluation than other state-of-the-art models. This makes it a lot easier to achieve high IoU and Dice metrics. The 256x256 resolution should at the very least be mentioned in the publication.