Closed — suyanzhou626 closed this issue 3 years ago
Can you tell me your device setup? Also, please try our checkpoint from this URL.
PyTorch: 1.8, CUDA: 11.0, GPU: RTX 3090
I don't think it's a device problem, but I also don't have much experience with the RTX 3090. I've personally seen poor results occasionally, though not often, so you could try another experiment with the same settings. Please reopen this issue if you still have problems.
I ran the command:
python Expr.py --config configs/UACANet-L.yaml
However, my results turned out poorly. For reference, the official UACANet-L results from the README are:
| dataset | meanDic | meanIoU | wFm | Sm | meanEm | mae | maxEm | maxDic | maxIoU | meanSen | maxSen | meanSpe | maxSpe |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CVC-300 | 0.910 | 0.849 | 0.901 | 0.937 | 0.977 | 0.005 | 0.980 | 0.913 | 0.853 | 0.940 | 1.000 | 0.993 | 0.997 |
| CVC-ClinicDB | 0.926 | 0.880 | 0.928 | 0.943 | 0.974 | 0.006 | 0.976 | 0.929 | 0.883 | 0.943 | 1.000 | 0.992 | 0.996 |
| Kvasir | 0.912 | 0.859 | 0.902 | 0.917 | 0.955 | 0.025 | 0.958 | 0.915 | 0.862 | 0.923 | 1.000 | 0.983 | 0.987 |
| CVC-ColonDB | 0.751 | 0.678 | 0.746 | 0.835 | 0.875 | 0.039 | 0.878 | 0.753 | 0.680 | 0.754 | 1.000 | 0.953 | 0.957 |
| ETIS-LaribPolypDB | 0.766 | 0.689 | 0.740 | 0.859 | 0.903 | 0.012 | 0.905 | 0.769 | 0.691 | 0.813 | 1.000 | 0.932 | 0.936 |
Why are my results so bad? I didn't change any configuration file.
Hello, sorry to bother you. I just ran the code as described in the README, but my result is also not that good. The only thing I changed is the batch size, which is 8 in my run. The evaluation result is below, so I don't know where the problem is.
Hi, thank you for your further questions about our work. After receiving your email, I ran it on my own machine with a TITAN RTX (24 GB) and got these results.
```
Expr - UACANet-L 100%|██████████| 5/5 [49:08<00:00, 589.76s/it]
```

| dataset | meanDic | meanIoU | wFm | Sm | meanEm | mae | maxEm | maxDic | maxIoU | meanSen | maxSen | meanSpe | maxSpe |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CVC-300 | 0.902 | 0.835 | 0.884 | 0.933 | 0.972 | 0.006 | 0.975 | 0.906 | 0.838 | 0.962 | 1.000 | 0.992 | 0.995 |
| CVC-ClinicDB | 0.932 | 0.884 | 0.931 | 0.945 | 0.979 | 0.007 | 0.982 | 0.935 | 0.888 | 0.946 | 1.000 | 0.992 | 0.996 |
| Kvasir | 0.905 | 0.851 | 0.896 | 0.915 | 0.947 | 0.025 | 0.950 | 0.907 | 0.853 | 0.909 | 1.000 | 0.985 | 0.989 |
| CVC-ColonDB | 0.762 | 0.690 | 0.755 | 0.842 | 0.874 | 0.034 | 0.876 | 0.765 | 0.692 | 0.771 | 1.000 | 0.931 | 0.934 |
| ETIS-LaribPolypDB | 0.714 | 0.645 | 0.691 | 0.829 | 0.845 | 0.016 | 0.847 | 0.717 | 0.647 | 0.760 | 1.000 | 0.905 | 0.909 |
These results show some differences from the numbers in my paper, but overall they are plausible, while yours aren't. Here's what I think: the polyp segmentation task has only a small number of training images, as noted in PraNet, the work ours builds on, so results can vary quite a lot. I personally ran almost 10 experiments with the same settings to obtain the best results reported in the paper.
However, since you ran my exact settings twice and got similar results that differ noticeably from both the paper and my recent experiment shown above, I suspect the batch size. I recommend increasing the batch size to at least 16. I will also run more experiments to confirm that the problem comes from the batch size and let you know the results.
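As an aside, the run-to-run variance described above can be reduced by fixing the random seeds before training. This is a minimal sketch, not part of the original `Expr.py`, assuming a standard PyTorch setup:

```python
import random

import numpy as np
import torch

def set_seed(seed: int = 0) -> None:
    # Fix every RNG source so repeated runs start from identical state.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade some speed for deterministic cuDNN convolution kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(0)
```

Note that even with fixed seeds some CUDA ops remain nondeterministic, so this narrows the variance rather than eliminating it.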
Hi, I ran two additional experiments with batch sizes 16 and 8, and here are the results.
[batchsize 16]
[batchsize 8]
I never knew that a small batch size would affect the ETIS dataset this much, but it turns out it does. Let me know if you still have problems after increasing the batch size. Until then, I'll reopen this issue for convenience.
Thank you a lot for the help. I've only recently started working in this area. Due to the limitations of my device, I can only run the code with a batch size of 8; with a batch size of 16 I get a CUDA out-of-memory error. I will try running the code again with batch size 8 to check it again under the same conditions. Again, thanks for your help.
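When GPU memory caps the batch size at 8, gradient accumulation can approximate a larger effective batch without extra memory. The sketch below is hypothetical: the linear model, loss, and data are stand-ins, not UACANet's actual training loop, and `accum_steps` would need to match the desired effective batch size.

```python
import torch
import torch.nn as nn

# Stand-in model and optimizer; in practice these would be UACANet-L
# and its configured optimizer.
model = nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

accum_steps = 2  # two micro-batches of 8 ~ one effective batch of 16
micro_batches = [torch.randn(8, 4) for _ in range(accum_steps)]

# Snapshot the weights to confirm an update actually happens.
w_before = model.weight.detach().clone()

opt.zero_grad()
for i, x in enumerate(micro_batches):
    loss = model(x).pow(2).mean()
    # Scale the loss so accumulated gradients average over the
    # effective batch rather than summing micro-batch gradients.
    (loss / accum_steps).backward()
    if (i + 1) % accum_steps == 0:
        opt.step()       # one optimizer update per effective batch
        opt.zero_grad()
```

One caveat: accumulation is not identical to a true larger batch for layers with batch-dependent statistics (e.g. BatchNorm), which still see micro-batches of 8.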
Taehun Kim @.***> wrote on Friday, September 24, 2021, 10:08 AM:
Reopened #2 https://github.com/plemeri/UACANet/issues/2.
Hello, I ran it again with the batch size set to 8. Here is the result; it seems to have improved somewhat. Thanks again.
Hi, instead of email I am writing my comment here (so others can also benefit), as I am facing the same problem described in the comments above. I am now running three experiments: (1) batch size 32, 24 GB RAM, TITAN RTX, total nodes = 3; (2) batch size 32, 24 GB RAM, TITAN RTX, total nodes = 1; (3) batch size 32, 24 GB RAM, TITAN X, total nodes = 2. I will update you on the results soon.
Following are the results:

| dataset | meanDic | meanIoU | wFm | Sm | meanEm | mae | maxEm | maxDic | maxIoU | meanSen | maxSen | meanSpe | maxSpe |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CVC-300 | 0.909 | 0.846 | 0.895 | 0.937 | 0.977 | 0.005 | 0.980 | 0.913 | 0.850 | 0.960 | 1.000 | 0.992 | 0.996 |
| CVC-ClinicDB | 0.923 | 0.878 | 0.926 | 0.938 | 0.971 | 0.007 | 0.974 | 0.927 | 0.881 | 0.931 | 1.000 | 0.993 | 0.997 |
| Kvasir | 0.898 | 0.844 | 0.891 | 0.910 | 0.941 | 0.028 | 0.944 | 0.901 | 0.847 | 0.899 | 1.000 | 0.974 | 0.978 |
| CVC-ColonDB | 0.741 | 0.672 | 0.734 | 0.829 | 0.856 | 0.038 | 0.859 | 0.743 | 0.674 | 0.759 | 1.000 | 0.914 | 0.918 |
| ETIS-LaribPolypDB | 0.684 | 0.617 | 0.659 | 0.812 | 0.863 | 0.019 | 0.865 | 0.686 | 0.619 | 0.733 | 1.000 | 0.849 | 0.853 |
| dataset | meanDic | meanIoU | wFm | Sm | meanEm | mae | maxEm | maxDic | maxIoU | meanSen | maxSen | meanSpe | maxSpe |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CVC-300 | 0.909 | 0.846 | 0.895 | 0.937 | 0.976 | 0.005 | 0.979 | 0.912 | 0.849 | 0.958 | 1.000 | 0.992 | 0.996 |
| CVC-ClinicDB | 0.933 | 0.886 | 0.933 | 0.943 | 0.981 | 0.006 | 0.984 | 0.936 | 0.889 | 0.946 | 1.000 | 0.992 | 0.996 |
| Kvasir | 0.901 | 0.847 | 0.890 | 0.910 | 0.947 | 0.028 | 0.950 | 0.904 | 0.850 | 0.907 | 1.000 | 0.981 | 0.985 |
| CVC-ColonDB | 0.756 | 0.687 | 0.749 | 0.837 | 0.872 | 0.036 | 0.875 | 0.758 | 0.689 | 0.766 | 1.000 | 0.911 | 0.914 |
| ETIS-LaribPolypDB | 0.713 | 0.641 | 0.690 | 0.829 | 0.857 | 0.012 | 0.859 | 0.715 | 0.644 | 0.751 | 1.000 | 0.888 | 0.891 |
These are still not as high as claimed in the paper (though there is an improvement with batch size 32).
I'm closing this issue since there are no other updates.