Closed ziao-guo closed 2 years ago
We have resolved this issue.
The over-fitting was caused by manual_seed. We fix the test data to prevent evaluation variance by setting an identical seed before evaluation in each epoch, but that seed also influences the training data, so every epoch saw the same training order. We now reset the seed based on the epoch number, and reload the training dataloader after each evaluation, to eliminate this influence.
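The pattern described above can be sketched in plain Python (the seed values, function names, and `random`-based shuffle below are illustrative stand-ins for the actual PyTorch dataloader logic):

```python
import random

EVAL_SEED = 0        # hypothetical fixed seed for deterministic evaluation
BASE_SEED = 1234     # hypothetical base seed for training

def train_one_epoch(epoch, n_samples):
    # Re-seed with a value derived from the epoch number, so the training
    # shuffle differs across epochs even though evaluate() resets the
    # global seed after every epoch. This stands in for reloading the
    # training dataloader with a fresh, epoch-dependent seed.
    random.seed(BASE_SEED + epoch)
    order = list(range(n_samples))
    random.shuffle(order)
    return order

def evaluate(n_samples):
    # Fix the seed so the sampled test data are identical in every epoch,
    # keeping evaluation variance out of the learning curve.
    random.seed(EVAL_SEED)
    return random.sample(range(n_samples), k=4)

orders = []
for epoch in range(3):
    orders.append(train_one_epoch(epoch, 8))
    evaluate(100)
# The buggy version effectively seeded every epoch with the same value,
# so all entries of `orders` would be identical permutations.
```

With a single fixed seed instead of `BASE_SEED + epoch`, every epoch would train on the same sample order, which is the over-fitting behaviour reported in this issue.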
Please refer to my recent commit; the over-fitting issue on the master branch should be fixed now. If you have any further questions, feel free to open new issues or reopen this issue.
This is a reminder for our users: we have noticed that over-fitting sometimes occurs during training since we switched our code to pygmtools.
If you are also facing this issue, please switch from the master branch to v0.1.0. We are working to resolve it and hope there will be a fix very soon.

UPDATE 22/03/25: Please update the code by
git pull origin master
to fix this issue.