Closed elephanttt closed 2 years ago
Hi, I ran the code on the camelyon16 dataset, but the final results are far from what the article describes. What are the possible problems?
hi, for the learning rate: the classification loss is multiplied by 10, which effectively scales its learning rate. it is hard-coded here https://github.com/sbelharbi/deep-wsl-histo-min-max-uncertainty/blob/29464285eabfa0914ff0bd79f2b6123c5dba7e1a/deepmil/criteria.py#L186
among other things, this code is not up to date: it does not directly support camelyon16, whose negative samples need to be handled differently since they don't have an ROI.
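to illustrate why multiplying the loss by 10 is equivalent to a 10x learning rate for that term (for plain SGD, without momentum or weight decay), here is a minimal sketch with a softmax classifier; the function names are illustrative, not from the repository:

```python
import math

def softmax(z):
    # numerically stable softmax over a list of logits
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def cross_entropy_grad(logits, target):
    # gradient of the cross-entropy loss w.r.t. the logits:
    # softmax(z) - one_hot(target)
    p = softmax(logits)
    return [pi - (1.0 if i == target else 0.0) for i, pi in enumerate(p)]

def scaled_cross_entropy_grad(logits, target, k=10.0):
    # multiplying the loss by k multiplies every gradient component by k,
    # so an SGD step with rate lr on k * loss equals a step with rate
    # k * lr on the unscaled loss
    return [k * g for g in cross_entropy_grad(logits, target)]
```

this is why the factor appears in the loss code rather than in the optimizer configuration.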
i'll try to fix this this weekend; busy with a deadline. thanks
i'll work on this after the 17th. thanks
hi, sorry for the delay. i updated the master branch to support camelyon16; see the readme for the commands. i didn't test this separate code, so please let me know if you face any errors.
thanks
hi, i have a question: for the camelyon dataset, it seems that the max-entropy loss (SEM/EEM) is not used.
hi, we can't maximize entropy over these samples: negative regions still belong to one of the classes, so only the classification and size terms are used. max-entropy can be applied only when negative regions are true background, such as in glas.
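the distinction above can be sketched as follows; this is a hedged illustration, not the repository's code, and names like `negatives_are_background` are assumptions:

```python
import math

def entropy(p):
    # Shannon entropy of a class posterior (a list of probabilities)
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def total_loss(cls_loss, size_loss, region_posteriors, negatives_are_background):
    loss = cls_loss + size_loss
    if negatives_are_background:
        # glas-style datasets: negative regions carry no class label,
        # so push their posteriors toward uniform by *maximizing*
        # entropy (hence the minus sign on a minimized loss)
        loss -= sum(entropy(p) for p in region_posteriors)
    # camelyon16-style datasets: negative samples still belong to one
    # of the classes, so no entropy term is added
    return loss
```

with `negatives_are_background=False` the entropy term simply drops out, which matches using only the classification and size terms on camelyon16.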
thanks
thanks for your answer. (^▽^)
Hi, I noticed that in the paper you mention that on the camelyon16 dataset the learning rate of the classification part is multiplied by 10 until reaching 0.01. Where is this reflected in the code?