nianticlabs / monodepth2

[ICCV 2019] Monocular depth estimation from a single image
Other
4.12k stars 952 forks

Training on NYUdataset #88

Closed kou5321 closed 4 years ago

kou5321 commented 4 years ago

Hi! I want to train the model on the NYU dataset. I have written my own split and created a new dataset class subclassing MonoDataset. For convenience, I took just 1000 pictures for training and 700 for validation. I have moved the weights to the right place, and I didn't use the finetuning instructions.
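For anyone attempting the same setup: monodepth2's split files list one sample per line as `<folder> <frame_index> <side>`. A minimal sketch of generating such a split for a hypothetical NYU layout (one folder of numbered frames per scene; NYU has no stereo pair, so a placeholder side `l` is used here, and the exact folder names are assumptions):

```python
def write_split(scenes, out_path, skip_edge=1):
    """Write a monodepth2-style split file.

    scenes:    list of (folder_name, num_frames) tuples (hypothetical layout)
    out_path:  where to write the split file, e.g. splits/nyu_split/train_files.txt
    skip_edge: skip the first/last frames of each scene so that temporal
               neighbours (frame_ids -1/+1) stay inside the sequence
    """
    lines = []
    for folder, num_frames in scenes:
        for i in range(skip_edge, num_frames - skip_edge):
            # NYU is monocular, so the "side" column is a placeholder
            lines.append(f"{folder} {i} l")
    with open(out_path, "w") as f:
        f.write("\n".join(lines))
    return lines
```

This is only a sketch under the assumptions above; the folder/frame naming must match whatever your MonoDataset subclass expects in its `get_color` implementation.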

And when I use test_simple.py to test on a single picture, the output shows characteristics of KITTI. You see, there is a boundary across the picture, which always happens with KITTI. I wonder whether the model uses some specific method, or whether my weights were just not initialized.

The test instruction I use is python test_simple.py --image_path assets/cafe_0001a1rgb.png --model_name mono_640x192 --ext png

Part of the result: cafe_0001a1rgb_disp, caffe_disp, cafe_0001a1depth

kou5321 commented 4 years ago

cafe_0001a1rgb The original picture is here. In the previous comment, the third picture is the real depth.

Many thanks in advance.

mdfirman commented 4 years ago

Hi @kou5321 , thanks for your interest! Can you please clarify: Is the model you used here trained on NYU at all? Or are you just using our KITTI models? If you are finetuning: Are you finetuning using a supervised loss, or self-supervised?

Thanks

kou5321 commented 4 years ago

Thanks for your quick response. Well, I believe I trained entirely on NYU, though the result looks as if it were finetuned on KITTI. The training instruction I used is:

python train.py --dataset nyudataset --data_path /media/kou/kou/monodepth3/nyu --split nyu_split --model_name mono_640x192 --height 640 --width 192 --batch_size 11

And I have added adam.pth, encoder.pth, etc. into mono_640x192, which is in the "models" folder.

mdfirman commented 4 years ago

Ok – so you are doing monocular training on the NYU video sequences?

I would expect that this is a difficult dataset to train on compared to KITTI; the horizontal field of view is smaller, the object types are more varied and the camera motion less predictable. It's also something we have never attempted here, so we can't really provide any direct help. You might want to try using more images for training, and perhaps experiment with the frame offset (i.e. how many steps forward/backward you sample frames from in the sequence).
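On the frame-offset suggestion: monodepth2 controls which temporal neighbours are loaded via the `--frame_ids` flag (default `0 -1 1`). Widening the offsets (e.g. `--frame_ids 0 -2 2`) effectively increases the baseline between frames when camera motion is slow. A tiny sketch of the sampling logic, with `stride` as a hypothetical extra knob rather than an actual monodepth2 flag:

```python
def sampled_indices(center, frame_ids, stride=1):
    """Indices the dataset would load for one training sample: the target
    frame plus temporal neighbours at the given offsets. `stride` widens
    the offsets uniformly (an experiment, not a monodepth2 option)."""
    return [center + f * stride for f in frame_ids]
```

For example, `sampled_indices(10, [0, -1, 1])` loads frames 10, 9, 11, while `stride=3` would load frames 10, 7, 13 for the same offsets, giving a larger inter-frame baseline on slow-moving indoor footage.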

Thank you for trying this out, and good luck with your future experiments.

kou5321 commented 4 years ago

Thanks for your time and detailed explanation!

CaptainEven commented 3 years ago

@kou5321 May I ask how to set self.K and the baseline scaling factor for the NYU dataset?
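Not the original poster, but for reference: monodepth2's dataset classes store a 4x4 intrinsics matrix normalized by the full image resolution (focal lengths and principal point divided by width and height). A sketch using approximate, commonly cited NYU Depth v2 Kinect RGB intrinsics for 640x480 frames (verify these against the official NYU toolbox before relying on them):

```python
import numpy as np

# Approximate NYU Depth v2 Kinect RGB intrinsics (640x480) -- commonly
# cited values; check them against the official NYU toolbox.
FX, FY = 518.8579, 519.4696
CX, CY = 325.5824, 253.7362
W, H = 640, 480

def nyu_normalized_K():
    """Build the 4x4 intrinsics matrix in the resolution-normalized form
    monodepth2's KITTI dataset classes use for self.K."""
    return np.array([[FX / W, 0,      CX / W, 0],
                     [0,      FY / H, CY / H, 0],
                     [0,      0,      1,      0],
                     [0,      0,      0,      1]], dtype=np.float32)
```

In a MonoDataset subclass you would assign this to `self.K` in `__init__` (with `full_res_shape = (640, 480)`). As for the baseline scaling factor: the KITTI stereo baseline (and the 5.4 scaling constant) only applies to stereo training, and purely monocular training on NYU recovers depth up to an unknown scale, so there is no NYU baseline to set.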

LulaSan commented 3 years ago

@kou5321 Hi! Were you able to train on NYUDepth successfully?

kou5321 commented 3 years ago

Yes, I trained it on the NYU dataset, which is a sequence of pictures. I'm not sure whether its specific name is NYUDepth.


LulaSan commented 3 years ago

@kou5321 Did you train to obtain only disparity, or also depth?

vamWu commented 1 year ago

@kou5321 Sorry to bother you. I've also been using monodepth2 to train on the NYU Depth V2 dataset recently, but have run into some problems. May I ask what your final results were using monodepth2 on it? Are the depth estimation results satisfactory?

kou5321 commented 1 year ago

Hi niuyi, I trained this network three years ago and it's hard for me to remember the details, but of course I remember some details of the result. There is an apparent border line in the output. We guess it's because the pretrained model was trained on the KITTI dataset, which often carries the implicit assumption that there is a skyline.
