-
Can you provide the evaluation metric that generates exactly the same results as the paper?
Here is my "bicubic" method and evaluation-metric MATLAB code, but its scores differ slightly from those in the paper:…
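For reference, a minimal sketch in Python of the commonly reported depth-estimation metrics (AbsRel, SqRel, RMSE, log10, and the δ thresholds), assuming `pred` and `gt` are metric depth maps with invalid pixels already masked out; the exact evaluation crop and valid-depth range used in the paper may still differ:

```python
import numpy as np

def compute_depth_metrics(pred, gt):
    """Standard depth metrics over valid (pre-masked) pixels of two metric depth maps."""
    pred = pred.astype(np.float64).ravel()
    gt = gt.astype(np.float64).ravel()

    # Threshold accuracy: fraction of pixels with max(gt/pred, pred/gt) below 1.25^k.
    thresh = np.maximum(gt / pred, pred / gt)
    d1 = (thresh < 1.25).mean()
    d2 = (thresh < 1.25 ** 2).mean()
    d3 = (thresh < 1.25 ** 3).mean()

    abs_rel = np.mean(np.abs(pred - gt) / gt)
    sq_rel = np.mean(((pred - gt) ** 2) / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2))
    log10 = np.mean(np.abs(np.log10(pred) - np.log10(gt)))

    return dict(abs_rel=abs_rel, sq_rel=sq_rel, rmse=rmse,
                rmse_log=rmse_log, log10=log10, d1=d1, d2=d2, d3=d3)
```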
-
Excuse me, I have downloaded your repo, but there is no guide, so I don't know how to use it. Which file should I run first? And where should I put the NYU dataset?
-
Command Window output:
process_raw
590
1
basement_0001a
0
0
Found 0 depth, 0 rgb images, and 0 accel dumps.
filecount:1
filecount to process:1
Er…
-
I was trying to apply this model to my own data and was not getting good results. I ran the NYUv2 dataset through my code, and the results seem to be in line with those reported in the ViT-Lens paper.
…
-
I get a depth map from bts_test.py, and I ultimately want to calculate the real depth from that depth map, given that I know max_depth and min_depth in the real world.
Do you have any related code for that?
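If the map saved by bts_test.py is a normalized image rather than metric depth, the usual fix is a linear rescaling; a minimal sketch, where the function name, the `scale` argument, and the assumption that 0 maps to min_depth and the maximum stored value to max_depth are all mine, not taken from the repo:

```python
import numpy as np

def denormalize_depth(depth_norm, min_depth, max_depth, scale=255.0):
    """Linearly map a normalized depth image back to metric depth.

    depth_norm: array stored in [0, scale] (e.g. an 8-bit PNG -> scale=255.0).
    Assumes 0 corresponds to min_depth and scale corresponds to max_depth.
    """
    d = depth_norm.astype(np.float64) / scale
    return min_depth + d * (max_depth - min_depth)
```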
-
Dear DINOv2 team, thank you for this amazing work! If I understand correctly, I could only find the full pipeline of linear probing for classification on ImageNet in [https://github.com/facebookresearch/dinov2/blob…
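For anyone else looking, a minimal sketch of linear probing on frozen DINOv2 features; the torch.hub entry point is the documented way to load the backbone, but the plain nn.Linear head and the optimizer settings below are assumptions, not the repo's exact evaluation pipeline:

```python
import torch
import torch.nn as nn

# Frozen DINOv2 backbone loaded via torch.hub (ViT-S/14 as an example).
backbone = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

# Linear classifier on top of the pooled [CLS] embedding (384-dim for ViT-S/14).
head = nn.Linear(384, 1000)  # 1000 ImageNet classes
optimizer = torch.optim.SGD(head.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One linear-probe step: frozen features -> linear head -> cross-entropy."""
    with torch.no_grad():
        feats = backbone(images)   # (B, 384) class-token features
    logits = head(feats)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```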
-
Hi there! Thanks for the great work!
I have some questions about the datasets and the training results.
The first is about the KITTI dataset. I see in the README file that you use the raw portion of the kitt…
-
When I test on the SUN RGB-D dataset, the results are different, so I want to ask about the evaluation settings on SUN RGB-D.
Question 1: When testing SUN RGB-D, has the maximum…
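In case it helps to frame the question: indoor evaluations typically mask the ground truth to a valid range and clip predictions to a maximum depth before computing metrics; a minimal sketch, where the 10 m cap and the function name are assumptions, not the repo's actual settings:

```python
import numpy as np

def evaluate_with_cap(pred, gt, min_depth=1e-3, max_depth=10.0):
    """Mask invalid ground truth and clip predictions to the eval range before scoring."""
    valid = (gt > min_depth) & (gt < max_depth)
    pred = np.clip(pred[valid], min_depth, max_depth)
    gt = gt[valid]
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    return abs_rel, rmse
```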
-
Hi @LiheYoung @1ssb, I tried using the depth_to_pointcloud script as-is to estimate depth for some RGB images for which I do have pixel-wise ground truth. As expected, because I used the pre-tra…
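For context, back-projecting a depth map into a point cloud only needs the camera intrinsics; a minimal sketch of the standard pinhole back-projection, where fx, fy, cx, cy must come from your own camera rather than the script's defaults:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a metric depth map (H, W) to an (N, 3) point cloud in camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels without valid depth
```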
-