nyukat / GMIC

An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization
https://doi.org/10.1016/j.media.2020.101908
GNU Affero General Public License v3.0

Fine Tuning #4

Closed joaco18 closed 4 years ago

joaco18 commented 4 years ago

Hi again! To fine-tune your model on my own images, I would like to know the exact hyperparameter combination used for each of the five models whose weights you are sharing. In the paper you give the ranges of values explored in the random search and say you kept the five best-performing models. Could you share those hyperparameters? Thanks!

seyiqi commented 4 years ago

Hi, all five of these models use ResNet-22 for the global network and ResNet-18 for the local network.

Here are the hyper-parameters:

- model #1: beta = 3.259162430057801e-06, percent_t = 0.02, learning_rate = 4.134478662168656e-06
- model #2: beta = 0.00022798001830417919, percent_t = 0.03, learning_rate = 1.16455071000344e-05
- model #3: beta = 0.000141576231515925, percent_t = 0.03, learning_rate = 3.2241692967582674e-05
- model #4: beta = 9.407165831071028e-06, percent_t = 0.05, learning_rate = 1.4603871086020163e-05
- model #5: beta = 4.277941478680878e-05, percent_t = 0.1, learning_rate = 3.525084901697994e-06
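For reference, below is a minimal sketch of how these values could plug into a fine-tuning loop. Only beta, percent_t, and the learning rates come from this thread; the tiny stand-in network, the dummy batch, and the loss wiring are illustrative assumptions (the real model should be built from this repo's GMIC code and loaded with the shared weights), and the top-t% pooling follows the aggregation described in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hyper-parameters from this thread (model #1); swap in any of the five sets above.
BETA = 3.259162430057801e-06           # weight of the L1 penalty on the saliency map
PERCENT_T = 0.02                       # fraction of saliency-map pixels kept by top t% pooling
LEARNING_RATE = 4.134478662168656e-06


def top_t_percent_pooling(saliency, percent_t):
    """Average the top t% of saliency values per image and class."""
    b, c, h, w = saliency.shape
    k = max(1, int(round(percent_t * h * w)))
    top_vals, _ = saliency.view(b, c, h * w).topk(k, dim=2)
    return top_vals.mean(dim=2)


# Stand-in for the real network (ResNet-22 global + ResNet-18 local module);
# replace with the actual GMIC model from this repo loaded with the shared weights.
class GlobalModuleStandIn(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.saliency_head = nn.Conv2d(8, 1, kernel_size=1)

    def forward(self, x):
        feats = F.relu(self.backbone(x))
        saliency = torch.sigmoid(self.saliency_head(feats))    # coarse saliency map
        y_global = top_t_percent_pooling(saliency, PERCENT_T)  # image-level prediction
        return y_global, saliency


model = GlobalModuleStandIn()
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

# Dummy batch standing in for your own screening images and labels.
images = torch.randn(2, 1, 256, 256)
labels = torch.randint(0, 2, (2, 1)).float()

optimizer.zero_grad()
y_global, saliency = model(images)
# Classification loss plus the beta-weighted L1 penalty on the saliency map,
# which is the regularization that beta controls.
loss = F.binary_cross_entropy(y_global, labels) + BETA * saliency.abs().mean()
loss.backward()
optimizer.step()
```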

Hope this helps :)