Hi, thanks for your interest in our work.
The disp_to_depth function scales the depth predictions to the range [0.1, 100], and the same range was used during training. The predicted values are not metric depth, but as described in the Monodepth2 paper, a scale factor can be recovered when evaluating on the KITTI dataset; please see the code here. You can try to compute a scale factor from the nuScenes ground truth by median scaling, and then multiply each value of your prediction by it.
This may be very inaccurate, but it's worth a try.
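For concreteness, a minimal sketch of the median-scaling step (the function and the sparse ground-truth handling below are illustrative, not code from this repo; the [0.1, 100] clamp matches the training range mentioned above):

```python
import numpy as np

def median_scale(pred_depth, gt_depth, min_depth=0.1, max_depth=100.0):
    """Rescale a relative depth map with a per-image median ratio.

    pred_depth: predicted depth (H, W), relative scale
    gt_depth:   sparse ground-truth depth (H, W), e.g. projected nuScenes
                LiDAR points; pixels without a measurement are 0
    """
    # Only use pixels that have a valid ground-truth measurement
    mask = (gt_depth > min_depth) & (gt_depth < max_depth)
    # Scale factor: ratio of medians, as in the Monodepth2 evaluation
    ratio = np.median(gt_depth[mask]) / np.median(pred_depth[mask])
    scaled = np.clip(pred_depth * ratio, min_depth, max_depth)
    return scaled, ratio

# scaled_depth, ratio = median_scale(pred_depth, lidar_depth)
```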
I am now closing this thread due to lack of response.
Hi, thank you for your excellent work! I was trying to use your pretrained weights to infer depth on nuScenes camera images, following your test_simple.py script. I got the depth using the disp_to_depth function.
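Roughly like this (a minimal sketch of what I did, following the disp_to_depth convention from Monodepth2; the variable names are just illustrative):

```python
def disp_to_depth(disp, min_depth, max_depth):
    """Convert the depth decoder's sigmoid disparity output to depth
    (Monodepth2 convention: depth limited to [min_depth, max_depth])."""
    min_disp = 1.0 / max_depth
    max_disp = 1.0 / min_depth
    scaled_disp = min_disp + (max_disp - min_disp) * disp
    depth = 1.0 / scaled_disp
    return scaled_disp, depth

# disp = outputs[("disp", 0)]            # sigmoid output, shape (1, 1, H, W)
# scaled_disp, depth = disp_to_depth(disp, 0.1, 100.0)
```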
The disparity image looks fine; however, the depth seems to be at the wrong scale, so the pseudo point cloud I get from the depth covers only a very small real-world range. I would just like to ask: is there a way to correct this using the camera poses of consecutive image frames, without training the network on a new dataset? Many thanks!