Hi,
I want to apply your self-teaching approach to a different dataset and thus have to train it myself. In order to make sure that I get everything right, I am trying to reproduce your results on KITTI first, but so far without success.
First, I generated the teacher ground-truth using a modified test_simple.py, which stores the predicted depth for all 4 scales as .npy files:
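Condensed, the addition looks roughly like this (`save_teacher_depths`, `frame_id`, and `out_dir` are just my own naming; the encoder/decoder loading is unchanged from the original script):

```python
# Sketch of my change to test_simple.py (condensed).
# For each image listed in train_files.txt, the teacher's predicted depth
# at every decoder scale is written out as a .npy file.
import os
import numpy as np
import torch

from layers import disp_to_depth  # monodepth2 helper

def save_teacher_depths(encoder, depth_decoder, input_image, frame_id, out_dir):
    """input_image: preprocessed 1x3xHxW tensor; frame_id/out_dir: my naming."""
    with torch.no_grad():
        outputs = depth_decoder(encoder(input_image))
        for s in range(4):
            disp = outputs[("disp", s)]
            # same disp -> depth conversion the evaluation code uses
            _, depth = disp_to_depth(disp, 0.1, 100.0)
            np.save(os.path.join(out_dir, "{}_scale{}.npy".format(frame_id, s)),
                    depth.squeeze().cpu().numpy())
```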
(Repeated the same with val_files.txt, of course.)
I then trained the student network using the following loss function:
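Condensed to a single scale (the full version averages over all four; variable names are placeholders), it implements the Laplacian negative log-likelihood against the teacher, `|d - d*| / sigma + log(sigma)`, as I understand the log-likelihood formulation from the paper; `use_sigmoid` is my own flag for how the uncertainty head is squashed:

```python
import torch

def self_teaching_loss(pred_depth, uncert_logits, teacher_depth, use_sigmoid):
    """Per-pixel |d - d*| / sigma + log(sigma), averaged over the batch."""
    if use_sigmoid:
        # squash the uncertainty head into (0, 1); the small eps keeps
        # both log(sigma) and the division finite at the lower boundary
        sigma = torch.sigmoid(uncert_logits) + 1e-6
    else:
        # raw head output: nothing prevents sigma <= 0, where log() is undefined
        sigma = uncert_logits
    return (torch.abs(pred_depth - teacher_depth) / sigma
            + torch.log(sigma)).mean()
```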
With `use_sigmoid=False`, I get NaN loss in the last epoch. Evaluating the second-to-last checkpoint (`weights_18`) on `eigen_benchmark` yields:

With `use_sigmoid=True`, training finishes without a problem, and the last checkpoint (`weights_19`) yields:

This is still far from what I get with your pre-trained network. Any idea where the problem might be?

I am using PyTorch 1.10.1 instead of 0.4, since the old version is not compatible with newer GPUs, but otherwise I cannot find any differences from what is described in your paper and published code. Training the original Monodepth2 with the newer PyTorch version also gives results nearly identical to the published ones (e.g. AbsRel 0.090 -> 0.091, RMSE 3.942 -> 3.993), so I doubt the PyTorch version is the problem.
Thanks!