Open xinyangATK opened 6 months ago

Hello, thanks for your great work! I am trying to reproduce the experiment on the comm20 dataset with DiGress. I'm curious why the number of epochs is so large (1,000,000) when training on comm20, while training DiGress on the other datasets needs far fewer epochs. Is comm20 a special case?
Thanks a lot if you can give me some advice!

Clément Vignac (cvignac) replied:
Hello, the number of epochs is large, but we don't train until the end. We simply stop the run when the validation metrics have converged.
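The policy described above (set `max_epochs` very high and stop once the validation metric plateaus) is standard early stopping. Below is a minimal, hypothetical sketch of that idea — not DiGress's actual training loop; the function names, `patience`, and `min_delta` values are illustrative assumptions:

```python
# Hypothetical sketch of the policy described in the reply: set max_epochs
# very high (e.g. 1,000,000) and stop once the validation metric has not
# improved for `patience` epochs. This is NOT DiGress's actual code.

def train_with_early_stopping(train_epoch, validate, max_epochs=1_000_000,
                              patience=50, min_delta=1e-4):
    """Run up to `max_epochs`, stopping when `validate()` has not improved
    by at least `min_delta` for `patience` consecutive epochs.
    Returns (epochs_run, best_validation_loss)."""
    best = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_epoch()
        val_loss = validate()
        if val_loss < best - min_delta:
            best = val_loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            return epoch + 1, best  # validation metric converged: stop early
    return max_epochs, best
```

In practice, frameworks such as PyTorch Lightning offer this behavior through an `EarlyStopping` callback, so the large `max_epochs` in the config is effectively an upper bound rather than the actual training length.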
Thanks for your quick reply.