-
Hello, I'd like to use the depth map to generate a point cloud. I ran test_simple.py and checked the depth output of the network, and I found that the depth did not seem to be in the right scale, for examp…
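(Not from the original question: a minimal sketch of how a depth map is usually back-projected into a point cloud under a pinhole camera model. The intrinsics `fx, fy, cx, cy` here are toy values, and note that a monocular network's depth is often only correct up to an unknown global scale, which may be what looks "wrong" here.)

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an HxW depth map into an Nx3 point cloud.

    depth: per-pixel depth; monocular predictions may need rescaling
           (e.g. by the ratio of median GT depth to median predicted depth).
    fx, fy, cx, cy: pinhole intrinsics in pixels.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grid
    z = depth
    x = (u - cx) * z / fx  # inverse of the pinhole projection u = fx*x/z + cx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a constant 2 m depth plane with unit focal length.
cloud = depth_to_point_cloud(np.full((4, 4), 2.0), fx=1.0, fy=1.0, cx=2.0, cy=2.0)
print(cloud.shape)  # → (16, 3)
```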
-
Hello,
Why isn't the standard Umeyama algorithm used to align the 5-frame trajectories when computing the ATE-5 metric?
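(For context, not part of the original question: the Umeyama algorithm is the closed-form least-squares similarity transform commonly used to align an estimated trajectory to ground truth before computing ATE. A sketch with numpy:)

```python
import numpy as np

def umeyama_alignment(x, y, with_scale=True):
    """Closed-form similarity transform (Umeyama, 1991) aligning point set
    x (Nx3) onto y (Nx3): returns s, R, t minimising
    sum_i || y_i - (s * R @ x_i + t) ||^2."""
    mu_x, mu_y = x.mean(0), y.mean(0)
    xc, yc = x - mu_x, y - mu_y
    cov = yc.T @ xc / len(x)                 # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                         # fix an improper rotation (reflection)
    R = U @ S @ Vt
    s = (D * np.diag(S)).sum() / xc.var(0).sum() if with_scale else 1.0
    t = mu_y - s * R @ mu_x
    return s, R, t

# Sanity check: recover a known scale, rotation and translation.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90° about z
y = 2.0 * x @ Rz.T + np.array([1., 2., 3.])
s, R, t = umeyama_alignment(x, y)
print(round(s, 3))  # → 2.0
```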
-
Hello,
I added support for training monodepth2 on Cityscapes, and trained it with the default hyperparameters used for monocular training on KITTI. It looks like it is having trouble masking out t…
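(A simplified illustration, not from the original issue: monodepth2's auto-masking keeps a pixel only when warping the source frame explains it better than leaving the source frame untouched, which filters out static scenes and objects moving at the camera's speed. The loss values below are toy numbers.)

```python
import numpy as np

# Toy per-pixel photometric losses (e.g. SSIM + L1) for four pixels.
reprojection_loss = np.array([0.10, 0.30, 0.05, 0.40])  # loss of the warped source
identity_loss     = np.array([0.20, 0.25, 0.50, 0.35])  # loss of the unwarped source

# Auto-mask: supervise only pixels where warping actually helps.
automask = reprojection_loss < identity_loss
print(automask)  # → [ True False  True False]
```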
-
The frame_id of the camera_info message published on `/camera/infra2/camera_info` is camera_infra2_optical_frame. This is not compatible with the `image_geometry::StereoCameraModel` class as [StereoCa…
-
Thank you for your open source work.
Does this code include the mapping module (from image-pose pairs to depth images)?
We are trying to generate dense depth maps online with a monocular …
-
Hi,
The depth predictions here are validated against the sparse KITTI ground-truth depths, but other papers validate against full ground truth (densified by interpolation). Will there be a …
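(Background, not from the original question: with sparse LiDAR ground truth, the usual KITTI protocol computes the error metrics only over pixels that have a valid depth return, rather than interpolating. A sketch of the absolute relative error under that convention, with toy values:)

```python
import numpy as np

def abs_rel_sparse(pred, gt):
    """Absolute relative error computed only where sparse ground truth
    exists (gt > 0); invalid pixels are skipped, not interpolated."""
    mask = gt > 0
    return np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask])

gt   = np.array([0.0, 2.0, 0.0, 4.0])  # zeros mark pixels with no LiDAR return
pred = np.array([9.9, 2.2, 9.9, 3.6])  # large errors on invalid pixels are ignored
print(abs_rel_sparse(pred, gt))  # → 0.1
```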
-
I would like to know whether (and when) you plan to publish the model trained on concatenated stereo image inputs, which you briefly mention in the paper (Section 4.3).
Btw, I really like this un…
-
Hello, Xt-Chen.
Thank you for releasing the SARPN code for monocular depth estimation.
Unfortunately, the pretrained model uploaded to MS OneDrive may be corrupted.
When trying…
-
Hello,
I want to test your method on a real drone, and I'm wondering whether images from a monocular camera can be used with your approach, since semantic segmentation can be computed from RGB images, and…
-
Hello,
I am looking for an implementation of the paper _Digging Into Self-Supervised Monocular Depth Estimation_.
Is your code ready to run, or does it need further development?
Could you provide some do…