zhyever / Monocular-Depth-Estimation-Toolbox

Monocular Depth Estimation Toolbox based on MMSegmentation.
Apache License 2.0
903 stars 104 forks

Question for video as custom data test input #52

Closed Daseot closed 1 year ago

Daseot commented 2 years ago

Hi. First of all, thanks for your hard work, and I should say up front that I'm a beginner in ML/DL. Here's my problem: I'm trying to feed a video (which has no GT) as custom data into a model trained on NYU, at line 88 of depth/apis/test.py:

result_depth = model(return_loss=False, **data)

I opened the video file with cv2.VideoCapture, converted BGR to RGB, resized, and transformed (ToTensor, Compose). Then I tried to pass the resulting video variable to the model as in the line above. It didn't work, so I debugged the original dictionary variable "data": it contains 2 keys (img_metas, img), and "img" holds a list of 2 tensors, while my custom input (a single video frame) yields a list of only 1 tensor.

I also tried passing just the key and value of "img", leaving out "img_metas", but as you can expect that didn't work either.

Maybe I'm asking a stupid question, but is there any way to run the test with video input that has no GT data? P.S. It worked well when I fed custom images (frames of the video mentioned above, separated and stored individually) to the model as test data, following another closed issue.

zhyever commented 2 years ago

Thanks for your appreciation. I think you are off to a good start, though there are a few small bugs. As you can see here, we need to pass 'img' and 'img_metas' into the model's forward function. Your analysis is correct, but I wonder why 'img' holds a list of 2 tensors; normally, the model takes one image at a time. Maybe you can try:

img = data['img']  # your provided image, stored under 'img'
data['img'] = [img]  # wrap the image in a list
result = model(return_loss=False, **data)

The reason can be found here: there is code that prepares for TTA (test-time augmentation), so you may need to wrap your image in a list before passing it to the model.

Daseot commented 2 years ago

Thank you, zhyever. I'll try it.

zhyever commented 1 year ago

Hope everything goes well! I close this issue for now. Feel free to re-open it.