yasaminjafarian / HDNet_TikTok

MIT License

Meaning of an Epoch and Joint Training with GT Depth and TikTok #19

Closed terteros closed 3 years ago

terteros commented 3 years ago

Dear Yasamin, in the paper it is mentioned that your model was trained for 380 epochs with batch size 10. How many iterations are there per epoch? Did you train the normal estimator and the depth estimator separately beforehand? Did you use the RenderPeople samples and the TikTok pair samples together, as in the training code you provided for HDNet? I am trying to estimate how much time is needed to train the pretrained model on the TikTok dataset and the Tang dataset.

yasaminjafarian commented 3 years ago

Hi, the number of training samples is reported in the paper, so the number of iterations per epoch will be around dataset_size / 10 (the batch size). Yes, both the normal and depth estimators were pretrained on the RenderPeople data. Yes, RenderPeople and TikTok were then trained together in a semi-supervised fashion. My training of HDNet with the RenderPeople and TikTok data took around 3 days to reach roughly 1,920,000 iterations in total. I am not sure about the Tang data.
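A back-of-envelope sketch of the arithmetic above, using only the numbers mentioned in this thread (380 epochs, batch size 10, ~1,920,000 total iterations); the exact dataset size is reported in the paper, so the figures derived here are only approximations:

```python
# Numbers taken from this thread / the paper.
total_iters = 1_920_000  # approximate total iterations reported above
epochs = 380             # training epochs stated in the paper
batch_size = 10          # batch size stated in the paper

# Iterations per epoch = total iterations / epochs.
iters_per_epoch = total_iters / epochs  # about 5,053

# Since iterations per epoch = dataset_size / batch_size,
# the implied dataset size is roughly:
approx_dataset_size = iters_per_epoch * batch_size  # about 50,500 samples

print(round(iters_per_epoch), round(approx_dataset_size))
```

This only inverts the relationship iterations_per_epoch = dataset_size / batch_size; the actual sample count should be taken from the paper.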