FredySu opened this issue 2 weeks ago (Open)
Hi FredySu,
We usually set the number of epochs to a high value and then just stop training once the loss plateaus, so it never actually reaches 100,000. As far as I remember, training took several days, maybe even a week. That said, 15 min per epoch seems quite long to me. Maybe @guybenyosef can also comment.
Thanks for the remark about the package, I will check it.
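The "stop when the plateau is reached" rule mentioned above can be sketched roughly as follows. This is an illustrative sketch only, not the repository's actual training code; the `patience` and `min_delta` values and the function name are made up for the example:

```python
# Minimal sketch of plateau-based early stopping (illustrative values,
# not taken from the repository's training code).

def train_with_early_stopping(losses, patience=10, min_delta=1e-4):
    """Stop once the loss has not improved by min_delta for `patience` epochs.

    `losses` stands in for per-epoch validation losses; in a real training
    loop each value would come from running validation after the epoch.
    Returns the index of the epoch at which training stops.
    """
    best = float("inf")
    epochs_without_improvement = 0
    for epoch, loss in enumerate(losses):
        if best - loss > min_delta:
            best = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            return epoch  # stopped early, long before the epoch cap
    return len(losses) - 1  # ran through the full budget

# A loss curve that improves for ~20 epochs and then flattens out:
curve = [1.0 / (1 + e) for e in range(20)] + [0.05] * 100
print(train_with_early_stopping(curve, patience=10))  # → 29
```

With this pattern, setting the epoch count in the config to 100,000 is just an upper bound that is never hit in practice.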
Dear authors, thanks for your great work.
I am a beginner with GCNs. I noticed that the config file "Train_multi_frame_SD.yaml" specifies 100,000 epochs. Is this the default config for reproducing the results in your paper? How long did the training take for you? On a single 2080Ti, one epoch takes about 15 min, so 100,000 epochs would take an extremely long time. Is that right, or did I miss something important?
Also, could you add the Albumentations package version to the environment section? Some Albumentations methods/parameters seem to have changed between releases, so a version downgrade may be needed.
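For reproducibility, one option is to pin the package in the environment file. A sketch of what such a pin could look like in a conda `environment.yml` (the version placeholder is hypothetical; the authors would fill in the version they actually used):

```yaml
# environment.yml fragment — the pin below is a placeholder,
# not the version the repository actually used
dependencies:
  - pip:
    - albumentations==<version used by the authors>
```

A pinned version would let users avoid the changed methods/parameters in newer releases.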