Closed shunlinlu closed 1 year ago
Hi, this happened to me as well. I use the checkpoint that has the minimum validation loss.
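The selection strategy above (keep the checkpoint whose validation loss is lowest, even though training loss keeps falling) can be sketched as follows. This is an illustrative snippet, not code from the text-to-motion repo; the names `best_checkpoint` and `val_losses` are hypothetical.

```python
# Hedged sketch: picking the checkpoint with the minimum validation loss.
# All names here are illustrative, not from the text-to-motion codebase.

def best_checkpoint(val_losses):
    """Return (epoch, loss) for the epoch with the lowest validation loss."""
    best_epoch = min(range(len(val_losses)), key=lambda e: val_losses[e])
    return best_epoch, val_losses[best_epoch]

# Example: val loss drops, then rises as the encoders start to overfit.
losses = [1.20, 0.95, 0.80, 0.78, 0.85, 0.97, 1.10]
epoch, loss = best_checkpoint(losses)
print(epoch, loss)  # → 3 0.78
```

In a real training loop you would save a checkpoint each epoch (e.g. with `torch.save`) and load back the one at the epoch this returns.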
Hi, authors,
Thanks for presenting such great work. I followed your instructions to train the feature extractors. The training loss decreases normally, but the validation loss rises after a few iterations; the encoders seem to be overfitting heavily. I used your pretrained VAE checkpoint, and the only change is setting the batch size to 128. Here is my log.
train_text_mot_match_humanml.log https://github.com/EricGuo5513/text-to-motion/files/10840000/train_text_mot_match_humanml.log