[Closed] nankepan closed this issue 1 year ago
The performance of different epochs within the same training session varies, so we choose the best epoch for evaluation. https://github.com/scutpaul/DANet/blob/f0bc57d9b2641c4dda9ce70e2c6f240ce2789069/train_DAN.py#L164 By setting `sample_per_class`, you can adjust the number of training iterations per epoch. You can also set different random seeds to obtain different training results.
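For reference, fixing the random seeds is the standard way to make repeated runs comparable in PyTorch. The sketch below is a generic seeding helper, not code from this repo; the function name `set_seed` and the choice to disable cuDNN benchmarking are assumptions for illustration:

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 0) -> None:
    """Seed all common RNG sources so repeated runs start from the same state."""
    random.seed(seed)              # Python's built-in RNG
    np.random.seed(seed)           # NumPy RNG (data augmentation often uses this)
    torch.manual_seed(seed)        # CPU RNG
    torch.cuda.manual_seed_all(seed)  # all GPU RNGs
    # Deterministic cuDNN kernels trade some speed for reproducibility.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


set_seed(42)
```

Note that even with all seeds fixed, some CUDA operations are non-deterministic, so minor run-to-run differences can remain.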
Maybe you misunderstood my question. I mean that I trained 4 times with the same code and tested each of the 4 resulting model_best.pth.tar checkpoints. There is a large performance gap between the 4 checkpoints.
It seems that the robustness of the model is not good: there is a large performance gap between models trained with identical code. Can this problem be fixed by increasing `sample_per_class`?
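A common way to characterize this kind of run-to-run variance (a general practice, not something specific to this repo) is to train with several seeds and report the mean and standard deviation of the test metric. The numbers below are purely illustrative, not actual DANet results:

```python
import statistics

# Hypothetical test scores from four independent runs (illustrative values only).
scores = [48.2, 51.7, 49.5, 52.9]

mean = statistics.mean(scores)
std = statistics.stdev(scores)  # sample standard deviation (n - 1 denominator)
print(f"score: {mean:.2f} +/- {std:.2f}")
```

Reporting mean +/- std over multiple seeds makes it clear whether a gap between two configurations exceeds the training noise.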