-
ghost updated 4 years ago
-
I tried to run 'python bts_test.py arguments_test_nyu.txt', but it returned 'No module named 'bts_nyu_v2_pytorch_att''.
-
When I run this code on CPU and load the best model you provide, the results on my eval samples look very bad. The data I use is nyu_depth_v2_labeled.mat, which I convert to image and depth npy files, and …
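A possible source of bad results in a conversion like this is the axis order: the labeled .mat is a v7.3 (HDF5) file whose arrays are stored transposed relative to the usual image layout. A minimal sketch of the transpose step, assuming the standard dataset key names ('images', 'depths') and shapes — these are assumptions about the common release, not details from this thread:

```python
import numpy as np

def to_image_and_depth(img_raw, depth_raw):
    """Convert one NYU v7.3 .mat sample to the usual layout.
    Assumes images are stored as (3, W, H) uint8 and depths as
    (W, H) float meters, as in the standard labeled release."""
    img = np.transpose(img_raw, (2, 1, 0))    # -> (H, W, 3)
    depth = np.transpose(depth_raw, (1, 0))   # -> (H, W)
    return img, depth.astype(np.float32)

# Hypothetical usage (the labeled .mat opens as an HDF5 file):
# import h5py
# with h5py.File("nyu_depth_v2_labeled.mat", "r") as f:
#     for i in range(f["images"].shape[0]):
#         img, depth = to_image_and_depth(f["images"][i], f["depths"][i])
#         np.save(f"depth_{i:04d}.npy", depth)
```

If the transpose is skipped, images come out rotated and the depth no longer aligns with the RGB, which alone can make evaluation numbers look very bad.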
-
Hello,
This is great work.
I am using your model to estimate depth, and from the depth I am computing a point cloud.
So when I move the camera forward, I expect the point cloud to also come closer t…
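For context, the depth-to-point-cloud step this question refers to is typically a pinhole back-projection. A minimal sketch, where the intrinsics fx, fy, cx, cy are hypothetical placeholders (they must come from the actual camera calibration):

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) to an (N, 3) point cloud
    using the pinhole camera model: X = (u - cx) * Z / fx, etc."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Note that monocular depth predictions are often only correct up to scale, so point clouds from two camera positions need not be metrically consistent with the camera motion.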
-
Hi phoenixnn,
In get_dmap_f.m, it seems that dmap_f is not well defined. What does dmap_f represent?
-
I have been trying to train a new ZoeDepth_N model on the NYUv2 dataset with the more efficient DPT_SwinV2_L_384 MiDaS backbone for real-time performance. However, it is not clear from the current doc…
-
Dear DINOv2 team, thank you for this amazing work! If I am correct, I only found the whole pipeline of linear probing for classification on ImageNet in [https://github.com/facebookresearch/dinov2/blob…
ywyue updated 9 months ago
-
Hi there! Thanks for the great work!
I have some questions about the datasets and training results.
The first concerns the KITTI dataset. I see in the README file that you use the raw portion of the kitt…
-
Dear authors,
I'm trying to reproduce your results on NYU Depth V2 dataset, but I'm facing some problems regarding the evaluation results, both retraining the network from scratch and using your p…
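When evaluation numbers on NYU Depth V2 don't match a paper, one common culprit is the metric implementation itself. A sketch of the standard monocular-depth metrics (abs rel, RMSE, and the delta thresholds); the masking range here is an assumption about a typical NYU setup, and the exact crop/cap used by the authors may differ:

```python
import numpy as np

def depth_metrics(gt, pred, min_depth=1e-3, max_depth=10.0):
    """Standard monocular-depth metrics over valid pixels.
    gt and pred are depth maps in meters; pixels outside
    [min_depth, max_depth] in the ground truth are masked out."""
    mask = (gt > min_depth) & (gt < max_depth)
    gt, pred = gt[mask], pred[mask]
    thresh = np.maximum(gt / pred, pred / gt)
    return {
        "abs_rel": float(np.mean(np.abs(gt - pred) / gt)),
        "rmse": float(np.sqrt(np.mean((gt - pred) ** 2))),
        "d1": float(np.mean(thresh < 1.25)),
        "d2": float(np.mean(thresh < 1.25 ** 2)),
        "d3": float(np.mean(thresh < 1.25 ** 3)),
    }
```

Differences in the valid-depth range, the evaluation crop, and whether predictions are median-scaled to the ground truth can each shift these numbers noticeably.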
-
Thank you for releasing the code of this great work. Can you tell me how you obtained your NYU training data? I don't think it's from the official website, since there are only 1449 labeled RGBD pai…