srinidhiPY / SSL_CR_Histo

Official code for "Self-Supervised driven Consistency Training for Annotation Efficient Histopathology Image Analysis" Published in Medical Image Analysis (MedIA) Journal, Oct, 2021.
https://doi.org/10.1016/j.media.2021.102256
MIT License

Some questions about the experimental results of the CRC dataset #6

Closed junjianli106 closed 2 years ago

junjianli106 commented 2 years ago

Thank you for your excellent paper and open source code. I have some questions about the experimental results of NCT-CRC.

  1. The MoCo + CR approach obtains a new state-of-the-art result with an Acc of 0.990, a weighted F-1 score of 0.953, and a macro AUC of 0.997, compared to the previous method (Kather et al., 2019), which obtained an Acc of 0.943. However, Table 5 shows that random initialization reaches 97.2% Acc with only 10% of the training data, which is also much higher than the 0.943 of (Kather et al., 2019). Since random initialization can also reach a high Acc, did I miss something?

Table 5 presents the overall Acc and weighted F1 score (F1) for classification of 9 colorectal tissue classes using different methodologies. On this dataset, the MoCo + CR approach obtains a new state-of-the-art result with an Acc of 0.990, a weighted F-1 score of 0.953, and a macro AUC of 0.997, compared to the previous method (Kather et al., 2019), which obtained an Acc of 0.943.

  2. When I train on the CRC dataset, the gap between my weighted F1 and Acc is not as large as yours (Acc: 0.990, weighted F-1: 0.953). For example, I get Acc: 0.9400, weighted F1: 0.9399. Did I miss something?
srinidhiPY commented 2 years ago

Hi Junjian Li,

Thank you for showing interest in our work.

Here is the answer to your questions:

1) Yes, as per our experiments, random initialization also achieves a higher accuracy than Kather et al. Since this task is one of the easiest, owing to its well-curated dataset, most methods perform equally well on it.

2) Did you use the same hyper-parameters as reported in our paper? You can reproduce the results with our code; let me know if you have any questions.
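For anyone comparing these metrics when reproducing the results, here is a minimal sketch of how Acc, weighted F1, and macro AUC are conventionally computed with scikit-learn. The toy labels and probabilities below are invented for illustration (the paper uses 9 CRC tissue classes); note that on a roughly class-balanced test set, weighted F1 typically lands close to Acc, as in the 0.9400 vs. 0.9399 observation above.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# Toy 3-class example (hypothetical, not the NCT-CRC data)
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 1, 2, 2, 2])  # one sample of class 1 misclassified as 2

# Per-class probability scores (rows sum to 1), needed for multi-class AUC
y_prob = np.array([
    [0.8, 0.1, 0.1],
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.3, 0.5],
    [0.1, 0.2, 0.7],
    [0.1, 0.1, 0.8],
])

acc = accuracy_score(y_true, y_pred)                      # fraction of correct predictions
f1w = f1_score(y_true, y_pred, average="weighted")        # per-class F1, weighted by support
auc = roc_auc_score(y_true, y_prob,
                    multi_class="ovr", average="macro")   # one-vs-rest macro AUC

print(f"Acc: {acc:.4f}  weighted F1: {f1w:.4f}  macro AUC: {auc:.4f}")
```

Because AUC is computed from the probability rankings rather than the hard predictions, it can stay high (here 1.0, since every class's positives outrank its negatives) even when Acc drops, which is one reason the three numbers need not move together.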
