-
Hello,
I want to test your research on a real drone, and I'm wondering whether images from a monocular camera can be used with your approach, since semantic segmentation can be built from RGB images, and…
-
Hello,
I am looking for an implementation of the paper _Digging Into Self-Supervised Monocular Depth Estimation_.
Is your code ready to run, or does it need further development?
Could you provide some do…
-
Hi @fangchangma
Using a good RGB camera and a solid-state LiDAR, could it be used for live applications as well?
If yes, considering a good computer system (say, an i7 with 8 cores and a GeForce GTX 1050 Ti), what ad…
-
I want to use the model to estimate the depth of grayscale images from a stereo grayscale camera.
-
##### System information (version)
- OpenCV python => 3.4.2
- Operating System / Platform => Windows 10 64 Bit
##### Detailed description
I am working on the problem of reconstructing a vehicle tr…
-
Dear author,
Thanks for sharing your code.
I just wonder: can depth prediction using a GAN achieve higher accuracy than a CNN method like 'mrharicot'?
Thanks.
-
Hi!
First of all, thank you for sharing your findings. I ran the single-image test and got output in "npy" format. Can you please provide details about the scale of the output and whether it gives real…
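For context, self-supervised monocular depth predictions are typically only defined up to an unknown scale, and a common convention is to align them to ground truth by median scaling before evaluation. A minimal sketch with hypothetical arrays (the real `.npy` file name, shape, and contents depend on the test script):

```python
import numpy as np

# Hypothetical prediction standing in for the network's .npy output,
# and hypothetical ground-truth depth for the same image.
pred = np.random.rand(192, 640).astype(np.float32) + 0.1
gt = np.random.rand(192, 640).astype(np.float32) + 0.1

# Median scaling: rescale the relative prediction so its median
# matches the ground-truth median, yielding (roughly) metric values.
scale = np.median(gt) / np.median(pred)
pred_metric = pred * scale
```

In practice one would replace the random `pred` with `np.load(...)` on the generated `.npy` file.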
-
I was wondering whether your network output for "self-supervised Stereo Training" is disparity or inverse depth.
"With known focal length and camera baseline the predicted disparity map, i.e. the invers…
-
I was wondering how your network is trained with stereo and monocular data combined.
Is it done by fine-tuning, or by merging the losses?
Thanks
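For what it's worth, one common pattern for combining the two data sources (a sketch of the general idea, not necessarily what this repository does) is to merge the per-source photometric losses into a single training objective rather than fine-tuning on one source after the other:

```python
# Hypothetical per-batch loss values; in practice these would be
# photometric reprojection errors computed from the stereo pair and
# from adjacent monocular frames of the same batch.
stereo_loss = 0.8
mono_loss = 0.5

# "Merging losses": optimize one combined, optionally weighted objective
# so both supervision signals shape the same gradient step.
w_stereo, w_mono = 1.0, 1.0  # illustrative weights
total_loss = w_stereo * stereo_loss + w_mono * mono_loss
```

Fine-tuning, by contrast, would train to convergence on one source and then continue training on the other.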
-
Hi,
I have a question about the process in the pose network. The pose network takes a temporal sequence of images as input, and those images are separately distributed into three encoder networks. So my quest…