Closed — liushifu12138 closed this issue 2 years ago
Hi, I think the second is better, but as far as I remember the difference is not too large. Because of the robust learned features, our method can process a smaller depth interval at the final stage than the other methods.
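To make the "smaller depth interval at the final stage" point concrete, here is a small sketch. It assumes the interval searched at each cascade stage is the base depth interval scaled by the corresponding ratio (the base interval of 2.5 mm and the scaling rule are illustrative assumptions, not the repository's exact code):

```python
# Hypothetical illustration of per-stage depth intervals in a cascade
# MVS network, assuming interval_at_stage = base_interval * ratio.

def stage_intervals(base_interval, ratios):
    """Return the depth interval searched at each cascade stage."""
    return [base_interval * r for r in ratios]

# Compare the two candidate settings with an assumed base interval of 2.5 mm.
option_a = stage_intervals(2.5, [4.0, 2.0, 1.0])    # coarse -> fine
option_b = stage_intervals(2.5, [4.0, 1.5, 0.75])

print(option_a)  # [10.0, 5.0, 2.5]
print(option_b)  # [10.0, 3.75, 1.875]
```

Under this assumption, the second setting searches a finer interval at the final stage (0.75x vs 1.0x the base interval), which is why it can give slightly better depth resolution when the learned features are robust enough.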
Thank you for your answer. When I interrupted training and then resumed it, the loss became very large and the learning rate went back to 0.01. How can I fix this? I resumed with: python train.py --resume /mnt/cds/saved/models/CDS-MVSNet/0213_150704/checkpoint-epoch11.pth
After finishing training on DTU, I fine-tuned on the BlendedMVS dataset by running python train.py --resume saved/models/CDS-MVSNet/<date_and_year>/checkpoint-epoch30.pth
as described in README.md. Note that the configuration should then be changed in config.json
to match your BlendedMVS setup. So, in your case, you should set a smaller learning rate in /mnt/cds/saved/models/CDS-MVSNet/0213_150704/config.json,
because my code does not resume the learning rate from the pretrained checkpoint.
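If it helps, editing the learning rate in a saved config can be scripted. This is a sketch only: the nested key path `"optimizer" -> "args" -> "lr"` is an assumption based on a common pytorch-template config layout, so adjust it to match the actual structure of your config.json:

```python
# Sketch: lower the learning rate in a saved config.json before resuming.
# ASSUMPTION: the config nests the rate under optimizer.args.lr
# (typical pytorch-template layout); adapt the keys to your file.
import json

def lower_lr(config_path, new_lr):
    """Rewrite config_path in place with a smaller learning rate."""
    with open(config_path) as f:
        cfg = json.load(f)
    cfg["optimizer"]["args"]["lr"] = new_lr
    with open(config_path, "w") as f:
        json.dump(cfg, f, indent=4)
    return cfg
```

Usage would look like `lower_lr("saved/models/CDS-MVSNet/<date_and_year>/config.json", 1e-4)`, run once before restarting `train.py --resume`.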
Regarding the loss problem, how large is it? Can you show me some screenshots?
I don't have any pictures. The depth_loss at epoch 11 was 0.426; after I interrupted training and resumed, depth_loss became 0.8. If manually modifying the learning rate won't affect the final result, I'll do that; otherwise I would have to start training again.
@liushifu12138, do you mean the depth loss here is on the training set or the validation set?
The validation set.
I think this difference is normal: when resuming training, I only load the network weights, without loading the optimizer state (you can see this in the _resume_checkpoint function in base/base_trainer.py). So the result may change. But you don't need to retrain from scratch; I think the loss will converge again within a few epochs.
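The behavior described above can be sketched in a few lines. Plain dicts stand in for the PyTorch model and optimizer, and the checkpoint key names (`state_dict`, `optimizer`) mirror a typical checkpoint layout; this is an illustrative assumption, not the repository's exact `_resume_checkpoint` code:

```python
# Minimal sketch of why the loss can jump after resuming: only the
# network weights are restored, while the optimizer state (learning
# rate, momentum buffers) starts from its freshly constructed defaults.

def resume_checkpoint(checkpoint, model, optimizer, load_optimizer=False):
    """Restore model weights; optionally restore optimizer state too."""
    model["weights"] = checkpoint["state_dict"]
    if load_optimizer:
        optimizer.update(checkpoint["optimizer"])
    return model, optimizer

checkpoint = {
    "state_dict": {"w": 0.42},                      # trained weights
    "optimizer": {"lr": 1e-4, "momentum_buf": 0.9}, # saved but unused
}
model = {"weights": None}
fresh_optimizer = {"lr": 0.01, "momentum_buf": 0.0}  # rebuilt from config

model, opt = resume_checkpoint(checkpoint, model, fresh_optimizer)
# Weights are back, but lr is the fresh default (0.01), not the saved
# 1e-4 -- hence the temporary loss increase right after resuming.
print(model["weights"], opt["lr"])  # {'w': 0.42} 0.01
```

Passing `load_optimizer=True` would restore the saved learning rate and momentum as well, which is why manually lowering the learning rate in the config achieves much the same effect here.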
Thank you very much for your answer, and thanks for your project!
depth_interals_ratio: [4.0, 2.0, 1.0] or [4.0, 1.5, 0.75], which is better?