Closed hutuo1213 closed 11 months ago
Hi @yaoyaosanqi,
There is some randomness in the optimiser as well.
The model at the start of training always yields the same performance because the randomly initialised parameters are determined by the seed. In the training, however, there are other things that could also introduce randomness. You can refer to this PyTorch page for more details.
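For reference, a minimal sketch of the kind of setup that page recommends for reducing run-to-run variation. The `seed_everything` helper name is my own; the seeds and flags are standard PyTorch/NumPy calls:

```python
import random

import numpy as np
import torch


def seed_everything(seed: int = 42) -> None:
    """Fix the Python, NumPy, and PyTorch RNGs to a single seed."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op on CPU-only machines


seed_everything(42)

# Seeding alone is not enough: non-deterministic CUDA kernels and
# cuDNN autotuning can still change results between runs.
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
# Optionally make PyTorch raise an error when a non-deterministic
# op is used (available since PyTorch 1.8):
torch.use_deterministic_algorithms(True)
```

Note that even with all of this, some ops have no deterministic implementation, so full reproducibility is not always achievable.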
Cheers, Fred.
Has PVIC encountered this issue before? We're trying to understand the source of this randomness. Is it due to PVIC or the modifications we made to the model?
It's always been like this.
As I recall, UPT is completely reproducible, so it's strange that PVIC behaves differently. Thankfully, its performance fluctuates very little. Thank you very much for your guidance.
Hi, we found that the random seed in the PVIC code fixes the initial test result (before training), but subsequent training produces varying results. Here is what happens when the same code is run twice.