HHHedo / IBMIL

CVPR 2023 Highlight

Training Process Unstable #17

Closed bryanwong17 closed 8 months ago

bryanwong17 commented 8 months ago

Hi, I have tried to implement several MIL models, following an approach similar to your implementation. However, I noticed that performance can vary significantly, by up to 10%, just from changing the training seed (e.g., from seed 0 to seed 10). I suspect this is due to the small training dataset size (Camelyon16) and to the final test performance being taken from the last epoch (50). Moreover, training instabilities seem to occur, especially with the AB-MIL and DS-MIL models.
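One way to make such comparisons less seed-dependent is to report the mean and standard deviation over several seeds rather than a single run. A minimal sketch (the accuracy numbers below are made up for illustration, not taken from any IBMIL experiment):

```python
# Hypothetical sketch: quantify seed sensitivity by aggregating
# test accuracies from several independent training runs.
from statistics import mean, stdev

# Illustrative accuracies after training the same MIL model with
# different seeds (made-up values, not real results).
acc_by_seed = {0: 0.86, 1: 0.90, 2: 0.82, 3: 0.88, 10: 0.92}

accs = list(acc_by_seed.values())
spread = max(accs) - min(accs)  # worst-case gap between two seeds

print(f"mean acc: {mean(accs):.3f} +/- {stdev(accs):.3f}")
print(f"spread across seeds: {spread:.2%}")
```

With numbers like these, the spread between the best and worst seed is 10 points, matching the kind of variance described above; reporting the mean with a standard deviation makes that instability visible instead of hiding it behind a single lucky run.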

HHHedo commented 8 months ago

Yes, we do observe a similar phenomenon, and it is discussed in an issue of the dsmil repository. However, this is out of the scope of this repository.

bryanwong17 commented 8 months ago

Thank you for the confirmation.