-
I have been trying to train a new ZoeDepth_N model on the NYUv2 dataset with the more efficient DPT_SwinV2_L_384 MiDaS backbone for real-time performance. However, it is not clear from the current doc…
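From my reading of the ZoeDepth repo, the MiDaS backbone is selected by name in the training config (e.g. `config_zoedepth.json`), so switching to SwinV2 should amount to something like the fragment below. Treat the key names and the square `img_size` as assumptions from my reading of the config files; they may differ between versions:

```json
{
  "model": {
    "name": "ZoeDepth",
    "midas_model_type": "DPT_SwinV2_L_384",
    "img_size": [384, 384]
  }
}
```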
-
Hello, I want to ask for help: has anybody trained PointNet on the NYUv2 dataset?
Can you share some ideas about how you trained it?
Thanks!
-
I'd like to share some code.
```matlab
nyufile = fullfile('dataset/', 'nyu_depth_v2_labeled.mat');
outDir = fullfile('output/', 'sgupta', 'datasets', 'nyud2', 'datacopy');
mkdir(fullfile(outDir,…
```
-
Command window output:
```
process_raw
590
1
basement_0001a
0
0
Found 0 depth, 0 rgb images, and 0 accel dumps.
filecount:1
filecount to process:1
Er…
```
-
Dear DINOv2 team, thank you for this amazing work! If I understand correctly, the only complete linear-probing pipeline I could find (for classification on ImageNet) is in https://github.com/facebookresearch/dinov2/blob…
ywyue updated 7 months ago
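For anyone landing here while the full recipe is missing from the docs: conceptually, the linear probe is just multinomial logistic regression trained on frozen backbone features. A self-contained NumPy sketch with placeholder random features standing in for real DINOv2 embeddings (all dimensions and hyperparameters are illustrative, not the repo's actual setup):

```python
import numpy as np

# Linear-probe sketch: softmax regression on frozen features.
# Random arrays stand in for real DINOv2 embeddings.
rng = np.random.default_rng(0)
feats = rng.standard_normal((512, 384))   # e.g. ViT-S/14 CLS tokens
labels = rng.integers(0, 10, size=512)    # placeholder labels

W = np.zeros((384, 10))
b = np.zeros(10)
onehot = np.eye(10)[labels]
lr = 0.1
losses = []
for _ in range(200):
    logits = feats @ W + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    losses.append(-np.mean(np.log(probs[np.arange(512), labels])))
    grad = (probs - onehot) / 512                 # d(loss)/d(logits)
    W -= lr * feats.T @ grad
    b -= lr * grad.sum(axis=0)
train_acc = np.mean((feats @ W + b).argmax(axis=1) == labels)
```

In practice you would extract the features once with the frozen backbone and then fit the probe exactly like this (or with an off-the-shelf logistic-regression solver).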
-
Hello,
It is great work.
I am using your model to estimate depth, and from the depth I compute a point cloud.
So when I move the camera forward, I expect that the point cloud will also come closer t…
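For context, the usual depth-to-point-cloud step is a pinhole back-projection. A minimal sketch; the intrinsics below are the commonly cited NYUv2 RGB-camera values, so substitute your own camera's calibration:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a metric depth map into camera-frame 3D points
    using the pinhole model: x = (u-cx)*z/fx, y = (v-cy)*z/fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Commonly cited NYUv2 RGB intrinsics (approximate; replace with yours).
pts = depth_to_pointcloud(np.ones((480, 640), np.float32),
                          fx=518.86, fy=519.47, cx=325.58, cy=253.74)
print(pts.shape)  # (307200, 3)
```

Note that a relative (scale- or shift-ambiguous) depth prediction will not move consistently as the camera translates; the point cloud only behaves as you expect if the depth is metric.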
-
When I run this code on CPU and load the best model you provide, my eval samples look very bad. The data I use is nyu_depth_v2_labeled.mat, and I convert the data to image and depth npy files and …
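In case it helps to compare conversion steps: nyu_depth_v2_labeled.mat is a MATLAB v7.3 (HDF5) file, so it can be read with h5py, which sees MATLAB arrays with the axes reversed. A sketch, assuming the `images`/`depths` field names of the labeled release (the rest is illustrative):

```python
import numpy as np
import h5py

def load_nyu_labeled(mat_path):
    """Load RGB frames and depth maps from nyu_depth_v2_labeled.mat.
    h5py exposes the MATLAB arrays transposed, hence the axis swaps:
    images (N, 3, W, H) -> (N, H, W, 3); depths (N, W, H) -> (N, H, W)."""
    with h5py.File(mat_path, "r") as f:
        images = np.transpose(np.asarray(f["images"]), (0, 3, 2, 1))
        depths = np.transpose(np.asarray(f["depths"]), (0, 2, 1))
    return images, depths

# Example: dump each frame to .npy (paths are illustrative).
# images, depths = load_nyu_labeled("nyu_depth_v2_labeled.mat")
# for i, (rgb, d) in enumerate(zip(images, depths)):
#     np.save(f"rgb_{i:04d}.npy", rgb)
#     np.save(f"depth_{i:04d}.npy", d)
```

A common cause of "very bad" eval numbers is skipping the transpose (feeding rotated images) or saving depth in the wrong unit, so it is worth checking both.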
-
Hi there! Thanks for the great work!
I have some questions about the datasets and training results.
The first is about the KITTI dataset. I see in the README file that you use the raw portion of the kitt…
-
Can you provide the evaluation code that reproduces exactly the results reported in the paper?
Here is my "bicubic" method and my evaluation-metric MATLAB code, but its scores differ slightly from the paper's:…
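For comparison, the de-facto standard depth metrics (the Eigen-style protocol) are usually computed as below. Small mismatches with paper numbers often come from the valid-depth mask, the evaluation crop, or the interpolation method rather than from the metric formulas themselves. A NumPy sketch, not the paper's official code:

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular-depth metrics: abs rel, RMSE, and
    delta < 1.25^k accuracies, computed over valid (gt > 0) pixels."""
    mask = gt > 0
    pred, gt = pred[mask], gt[mask]
    thresh = np.maximum(gt / pred, pred / gt)
    d1 = float((thresh < 1.25).mean())
    d2 = float((thresh < 1.25 ** 2).mean())
    d3 = float((thresh < 1.25 ** 3).mean())
    abs_rel = float(np.mean(np.abs(gt - pred) / gt))
    rmse = float(np.sqrt(np.mean((gt - pred) ** 2)))
    return {"abs_rel": abs_rel, "rmse": rmse, "d1": d1, "d2": d2, "d3": d3}
```

When comparing against a paper, also check whether it evaluates at the ground-truth resolution (upsampling the prediction) or downsamples the ground truth; the two choices give slightly different scores.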