AutoAILab / DynamicDepth

Official implementation for ECCV 2022 paper "Disentangling Object Motion and Occlusion for Unsupervised Multi-frame Monocular Depth"

Training from scratch getting worse results #7

Open JunjieLiuSWU opened 2 years ago

JunjieLiuSWU commented 2 years ago

Hello, thank you for sharing your work. I trained from scratch on your provided Cityscapes data for 20 epochs, but I got worse results than Manydepth. Does DynamicDepth have to be trained from pre-trained models? And how many epochs did you train for?

fengziyue commented 2 years ago

Hi @JunjieLiuSWU :

It does not have to be trained from the Manydepth pre-trained models.

However, training from scratch does need more hyperparameter tuning. Our model can be considered as Manydepth plus several improvements (DOMD, occlusion-aware cost volume/loss). We found that first training Manydepth and then enabling our improvements later makes the training more stable. In fact, Manydepth itself is hard to reproduce from scratch (see here), so we recommend using their pre-trained model.

The configuration in option.py is for training from the Manydepth pre-trained model. If you want to train from scratch, you can first disable our DOMD module, train for 20~40 epochs, then enable it and train for several more epochs.
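The staged recipe above could be sketched as a simple epoch schedule. Note this is an illustrative sketch only: the flag names `use_domd` and `occlusion_aware_cv` are hypothetical and do not correspond to the actual options in option.py.

```python
# Hypothetical two-stage schedule for training DynamicDepth from scratch:
# warm up as plain Manydepth, then switch on the paper's improvements.
# Flag names are illustrative, not the repo's real option names.

def two_stage_schedule(total_epochs, warmup_epochs=30):
    """Return one option dict per epoch: DOMD and the occlusion-aware
    cost volume/loss stay disabled during warmup, then are enabled."""
    schedule = []
    for epoch in range(total_epochs):
        enable_improvements = epoch >= warmup_epochs
        schedule.append({
            "epoch": epoch,
            "use_domd": enable_improvements,            # object-motion disentanglement
            "occlusion_aware_cv": enable_improvements,  # occlusion-aware cost volume/loss
        })
    return schedule

if __name__ == "__main__":
    sched = two_stage_schedule(total_epochs=35, warmup_epochs=30)
    assert not sched[0]["use_domd"]   # Manydepth-style warmup phase
    assert sched[30]["use_domd"]      # improvements switched on afterwards
```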

WangXuCh commented 1 year ago

So, how many epochs did you train on the pre-trained model provided by Manydepth, and did you freeze the teacher network (posenet and mono_depth) during that period?
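For reference, freezing a teacher network in PyTorch is typically done by disabling gradients and switching to eval mode. This is a generic sketch, not the repo's actual code; the small `nn` modules below merely stand in for the real mono depth and pose networks.

```python
import torch.nn as nn

def freeze(module: nn.Module) -> nn.Module:
    """Stop gradient updates and fix BatchNorm/Dropout behavior,
    as one would for frozen teacher networks."""
    for p in module.parameters():
        p.requires_grad = False
    module.eval()
    return module

# Illustrative stand-ins for the teacher networks:
mono_depth = freeze(nn.Conv2d(3, 1, kernel_size=3, padding=1))
posenet = freeze(nn.Linear(12, 6))

assert all(not p.requires_grad for p in mono_depth.parameters())
assert not posenet.training  # eval mode
```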