-
I loaded the model "ZoeD_M12_N.pt" and ran the following general code, with minor modifications, to save the metric depth map first.
```python
import torch
import os
import datetime
import numpy as np…
```
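Since the snippet above is truncated, here is a minimal, self-contained sketch of the saving step it describes: writing a metric depth map (in meters) to a lossless 16-bit PNG. The scale factor is an assumption (256, the KITTI-style convention; NYU tooling often stores millimeters, i.e. scale 1000), so match it to whatever your evaluation code expects. The dummy array stands in for the model's output.

```python
import numpy as np
from PIL import Image

def save_metric_depth_png(depth_m, path, scale=256.0):
    """Save a metric depth map (meters) as a 16-bit PNG.

    `scale` is an assumed convention (depth_m * 256 stored as uint16);
    adjust it to your dataset's convention before comparing metrics.
    """
    depth_u16 = np.clip(depth_m * scale, 0, 65535).astype(np.uint16)
    Image.fromarray(depth_u16, mode="I;16").save(path)

def load_metric_depth_png(path, scale=256.0):
    """Inverse of the above: read the PNG back into meters."""
    return np.asarray(Image.open(path), dtype=np.float32) / scale

# Dummy 4x4 depth map in meters (stand-in for the model's prediction)
depth = np.linspace(0.5, 10.0, 16, dtype=np.float32).reshape(4, 4)
save_metric_depth_png(depth, "depth.png")
restored = load_metric_depth_png("depth.png")
# Round-trip error is bounded by the uint16 quantization step (1/scale)
print(np.abs(restored - depth).max() < 1.0 / 256)
```

Saving as 8-bit (or as a colorized image) destroys the metric values, which is a frequent cause of evaluation numbers being far off.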
suzdl updated
5 months ago
-
Could you also share a pre-trained model (e.g., one trained on NYUv2)?
Axe-- updated
4 years ago
-
Hi aimerykong:
The errors are as follows:
Error using dagnn.Layer/load (line 200)
No property `bnorm_moment_type_trn` for a layer of type `dagnn.BatchNorm`.
Error in dagnn.DagNN.loadobj (line 27)
block.l…
-
Hi, thank you for providing this exploratory work on diffusion for visual perception tasks.
When I train on the NYUv2 dataset, I find that it converges slowly, and I wonder if there is something wron…
-
-
I was trying to apply this model to my own data and was not getting good results. I ran the NYUv2 dataset through my code, and those results seem to be in line with the numbers reported in the ViT-Lens paper.
…
-
@laughtervv I used the model you uploaded 4 days ago, and the validation
result colors differ from the label; they don't match.
![out_train_15_764_gt](https://user-images.githubusercontent.com/10106222/…
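A color mismatch like the one described above often comes from normalizing the prediction and the ground truth with different ranges before applying a colormap (e.g. per-image min/max). A minimal sketch of colorizing both with a shared, fixed range (the range and colormap name are assumptions; use whatever the repo's visualization code uses):

```python
import numpy as np
import matplotlib

def colorize_depth(depth, vmin, vmax, cmap="magma"):
    """Map a depth array to an RGB uint8 image using a FIXED range.

    Using the same (vmin, vmax) for prediction and ground truth is what
    makes their colors comparable; per-image min/max normalization is a
    common cause of mismatched colors between prediction and label.
    """
    norm = np.clip((depth - vmin) / (vmax - vmin), 0.0, 1.0)
    rgba = matplotlib.colormaps[cmap](norm)   # HxWx4 floats in [0, 1]
    return (rgba[..., :3] * 255).astype(np.uint8)

# Dummy prediction / ground truth pair (stand-ins for real outputs)
pred = np.random.default_rng(0).uniform(0.5, 8.0, (4, 4))
gt = pred + 0.1
shared = dict(vmin=0.5, vmax=10.0)  # assumed NYUv2-like depth range, meters
pred_rgb = colorize_depth(pred, **shared)
gt_rgb = colorize_depth(gt, **shared)
```

If the two images still disagree with a shared range, the remaining suspects are a different colormap or one of the arrays not actually being in meters.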
-
When I run `python train.py --opt options/train/NYUv2_BSR/train_stage3.yml`, I get this error:
raise ValueError("Expected more than 1 value per channel when training, got input size {}".format(size))
ValueError: Expecte…
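This ValueError is what PyTorch's BatchNorm layers raise in training mode when a batch yields only one value per channel, most often because the last DataLoader batch has a single sample. A minimal reproduction and the common workaround, `drop_last=True` (whether that is the right fix for this repo's config is an assumption; reducing to batch size 1 elsewhere in the pipeline can trigger the same error):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

bn = nn.BatchNorm1d(8)
bn.train()  # the error only occurs in training mode

# 9 samples with batch_size=4 leaves a final batch of 1 sample,
# i.e. one value per channel -> BatchNorm cannot compute statistics.
data = TensorDataset(torch.randn(9, 8))
bad = DataLoader(data, batch_size=4)                   # last batch size 1
good = DataLoader(data, batch_size=4, drop_last=True)  # skips it

try:
    for (x,) in bad:
        bn(x)
except ValueError as e:
    print("reproduced:", e)

for (x,) in good:
    bn(x)  # runs without error
print("drop_last avoids the incomplete final batch")
```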
-
I downloaded NYUv2 from your link. However, I found that the filled depth map is identical to the raw depth map. Is there a problem?
I am new to this area; I wonder if people just calculate metric…
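For context on the metric question above: the standard monocular depth metrics compare predicted and ground-truth depth in meters over valid pixels only. A minimal sketch (the valid-depth range of 1e-3 to 10 m is an assumed NYUv2-style convention; match it to the benchmark's own evaluation code before comparing numbers):

```python
import numpy as np

def depth_metrics(pred, gt, min_depth=1e-3, max_depth=10.0):
    """AbsRel, RMSE, and threshold accuracy delta < 1.25 on valid pixels.

    The valid range is an assumption; benchmarks differ (e.g. KITTI
    typically caps at 80 m), so mirror the target eval script exactly.
    """
    mask = (gt > min_depth) & (gt < max_depth)
    p, g = pred[mask], gt[mask]
    thresh = np.maximum(p / g, g / p)
    return {
        "abs_rel": float(np.mean(np.abs(p - g) / g)),
        "rmse": float(np.sqrt(np.mean((p - g) ** 2))),
        "delta1": float(np.mean(thresh < 1.25)),
    }

# Dummy example: a uniform 10% over-estimate of a 2 m scene
gt = np.full((4, 4), 2.0)
pred = gt * 1.1
m = depth_metrics(pred, gt)
print(m)  # abs_rel ~0.1, rmse ~0.2, delta1 = 1.0 since 1.1 < 1.25
```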
-
I am experiencing the same problem when using convertRGBD.m. Can you help with a workaround? Please!