Jiangbo-Shi / ViLa-MIL

ViLa-MIL: Dual-scale Vision-Language Multiple Instance Learning for Whole Slide Image Classification (CVPR 2024)

Configuration to Reproduce the Results #11

Open NguyenAnhTien opened 1 month ago

NguyenAnhTien commented 1 month ago

To whom it may concern,

I tried to reproduce the performance you reported in the paper. I used the hyper-parameters you mentioned in README.md. However, the performance was only around 72% for lung cancer. Would you mind publishing the hyper-parameters you used to achieve the reported performance, such as the optimizer, learning rate, and number of epochs? It would be a great help for me in reproducing the results.

I am looking forward to hearing from you.

Best regards, Tien Nguyen.

Jiangbo-Shi commented 1 month ago

Thanks. In the few-shot setting, the model's performance is easily affected by the limited number of training samples, so you can rerun the experiment with different few-shot training splits. In each scenario, you can also run the baselines for comparison; our method should still outperform them.
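To illustrate the suggestion above, here is a minimal sketch of rerunning a few-shot experiment over several random training splits and reporting the mean and standard deviation of the accuracies. All names here (`sample_few_shot`, `train_and_eval`) are hypothetical placeholders, not ViLa-MIL's actual code; the point is only that few-shot results should be averaged over multiple sampled splits rather than read off a single run.

```python
import random
import statistics

def sample_few_shot(slide_ids, labels, shots, seed):
    """Sample `shots` slides per class, controlled by a random seed."""
    rng = random.Random(seed)
    by_class = {}
    for sid, lab in zip(slide_ids, labels):
        by_class.setdefault(lab, []).append(sid)
    picked = []
    for ids in by_class.values():
        picked.extend(rng.sample(ids, shots))
    return picked

def train_and_eval(train_ids):
    """Placeholder for training the MIL model and returning test accuracy."""
    # In a real run this would train on `train_ids` and evaluate on the
    # held-out test set; here we just return a dummy value.
    return 0.72

# Toy data: 20 slides, two classes (e.g., LUAD vs. LUSC).
slide_ids = [f"slide_{i}" for i in range(20)]
labels = [i % 2 for i in range(20)]

# Repeat the 4-shot experiment over several seeds and aggregate.
accs = []
for seed in range(5):
    train_ids = sample_few_shot(slide_ids, labels, shots=4, seed=seed)
    accs.append(train_and_eval(train_ids))

print(f"mean acc = {statistics.mean(accs):.3f} "
      f"(std = {statistics.pstdev(accs):.3f}) over {len(accs)} splits")
```

Reporting mean ± std over seeds makes it clearer whether a gap over the baseline is real or an artifact of one lucky (or unlucky) few-shot sample.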

NguyenAnhTien commented 1 month ago

Could you provide the hyper-parameters you used? For example, did you use the Adam or SGD optimizer for training, and for how many epochs did you train the model?