-
Hi there,
I have a question regarding MVSFormer++. Is it possible to input a mask along with the images when running the model? Specifically, I would like to use the pre-trained weights as they are…
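Since the pre-trained weights were not trained with a mask channel, one common workaround is to leave the network input unchanged and apply the mask to the predicted depth map after inference. A minimal sketch (the function name and shapes are illustrative, not from the MVSFormer++ codebase):

```python
import numpy as np

def apply_mask_to_depth(depth, mask):
    """Zero out depth predictions outside a binary mask.

    depth: (H, W) predicted depth map
    mask:  (H, W) array, nonzero where pixels are valid
    Returns a copy of `depth` with masked-out pixels set to 0
    (0 conventionally meaning "no depth" downstream, e.g. in fusion).
    """
    return np.where(mask.astype(bool), depth, 0.0)

# toy usage: keep only the diagonal pixels
depth = np.ones((2, 2))
mask = np.array([[1, 0], [0, 1]])
masked = apply_mask_to_depth(depth, mask)
```

This keeps the pre-trained weights untouched; the mask only suppresses predictions in regions you want excluded (e.g. from point-cloud fusion).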
-
Hello, thanks for your awesome work. I have one question.
Currently, I am using your codebase with single-view input, and I'm not doing joint depth training; instead, I'm calculating the depth sep…
-
Hi there,
I'm interested in using your pipeline for camera calibration, undistortion, rectification, and finally depth map estimation. You have a great dataset for camera calibration; I'm wondering w…
-
Useful codes for depth estimation:
- [EndoscopyDepthEstimation-Pytorch](https://github.com/lppllppl920/EndoscopyDepthEstimation-Pytorch)
![Image](https://user-images.githubusercontent.com/71411474/244472903-6ff3a0dd-e5f8-47d5-a…
-
Hello, I am very interested in your article and code, and I have read the code carefully.
But when I tried to use the depth map generated by the 64-line ground truth to correct the depth estimation re…
-
Hello,
Thank you very much for your work; it truly produces spectacular depth images.
I am trying to generate the point cloud and wondering if I have done it correctly. I have obtained the follo…
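Generating a point cloud from a depth map typically means back-projecting each pixel through the pinhole camera model. A minimal sketch, assuming a depth map in metric units and known intrinsics (`fx`, `fy`, `cx`, `cy` are hypothetical values, not taken from the repository):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map into an (N, 3) point cloud.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    Pixels with depth <= 0 are treated as invalid and dropped.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# toy usage: a flat plane 2 m in front of the camera
depth = np.full((4, 4), 2.0)
pts = depth_to_point_cloud(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
```

A common sanity check is exactly the one above: a constant depth map should produce a planar cloud at that Z, so wildly curved output usually points to swapped intrinsics or a unit mismatch (mm vs m).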
-
Hi,
I use LiDAR scans to generate depth images. As the LiDAR is less dense than the image resolution, a large part of each depth image is empty and filled with 0.
In urban_radiance_field_depth_loss…
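With sparse LiDAR supervision, the usual approach is to treat 0 as "no measurement" and compute the loss only over valid pixels. A minimal sketch of such a masked loss (NumPy here for illustration; the actual loss in the codebase may differ):

```python
import numpy as np

def masked_depth_l1(pred, gt):
    """L1 depth loss over valid pixels only.

    Zeros in the sparse LiDAR depth map mean "no measurement",
    not a depth of 0, so they must be excluded from the loss.
    """
    mask = gt > 0
    if not mask.any():
        return 0.0  # no supervision in this image
    return float(np.abs(pred[mask] - gt[mask]).mean())

# toy usage: only the left column has LiDAR returns
pred = np.array([[1.0, 2.0], [3.0, 4.0]])
gt = np.array([[1.5, 0.0], [3.0, 0.0]])
loss = masked_depth_l1(pred, gt)  # averages over the 2 valid pixels
```

Without the mask, the zeros would pull predictions toward 0 everywhere the LiDAR is empty, which is usually the dominant part of the image.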
-
Hello great work @xy-guo and team!
I have stereo images and depth maps from a ZED camera for a custom (realistic) dataset. In place of the KITTI Scene Flow dataset, I thought to initially train the stereo ne…
-
I am trying to train a model for depth estimation through forward rendering, and have run into the issue shown in the image below.
![image](https://user-images.githubusercontent.com/19423039/180646489-…
-
Hello, @jaruanob!
First of all, thank you for sharing your astonishing research.
I've noticed that the virtual dataset you've released currently includes depth information.
I was wondering …