dontLoveBugs / FCRN_pytorch

Pytorch Implementation of Deeper Depth Prediction with Fully Convolutional Residual Networks

Not getting good results after training on own dataset #5

Open abdur4373 opened 5 years ago

abdur4373 commented 5 years ago

Hello @dontLoveBugs. I have prepared my own dataset of indoor scenes from my environment and want to train the model on it. I am freezing all layers except the up-projection blocks, but the results are not good. I also trained on a dataset as small as 600 images and reached 82% accuracy, yet the results were still not good visually. I don't know the reason; maybe you can suggest something. The full dataset I want to train on is approximately 6k images. The pretrained NYU weights actually perform better. My settings:

```python
batch_size = 32
learning_rate = 1.0e-3
momentum = 0.9
weight_decay = 0.0005
num_epochs = 70
optimizer = torch.optim.SGD(filter(lambda p: p.requires_grad, model.parameters()),
                            lr=learning_rate, momentum=momentum,
                            weight_decay=weight_decay)
```

The learning rate is halved every 10 epochs.
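The setup described above (freezing everything except the up-projection decoder, SGD, and halving the learning rate every 10 epochs) can be sketched as follows. This is a hedged illustration, not the repo's actual training script: the substring `"upproj"` used to identify up-projection parameters is an assumption about how the FCRN implementation names its decoder modules.

```python
import torch

def configure_training(model, learning_rate=1.0e-3, momentum=0.9,
                       weight_decay=0.0005):
    """Freeze the encoder and train only the up-projection blocks.

    Assumes decoder parameter names contain "upproj"; adjust the
    substring to match the actual module names in your model.
    """
    for name, param in model.named_parameters():
        # Only up-projection parameters stay trainable.
        param.requires_grad = "upproj" in name

    optimizer = torch.optim.SGD(
        filter(lambda p: p.requires_grad, model.parameters()),
        lr=learning_rate, momentum=momentum, weight_decay=weight_decay)

    # Halve the learning rate every 10 epochs, as described above.
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                                step_size=10, gamma=0.5)
    return optimizer, scheduler
```

Calling `scheduler.step()` once per epoch applies the halving schedule.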

MS_LAB_269_unfilled

Validation depth image Screenshot from 2019-06-27 21-37-28

rgb image Screenshot from 2019-06-27 21-44-59

dontLoveBugs commented 5 years ago

Maybe your dataset is too small. If you want to train the model on a new indoor scene dataset, I think finetuning the NYU-pretrained model on your indoor scene dataset is a feasible approach. I'm not sure what your "accuracy" means: "rel", "rmse", or "pixel accuracy"? Besides, the depth range of your test image does not seem large, which may make the visualization look flat.
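The two suggestions above can be sketched as follows. This is a hedged sketch, not the repo's actual code: the checkpoint path and the `"model"` key are assumptions about how the pretrained NYU weights are saved, and the finetuning learning rate of 1e-4 is an illustrative choice. The second function addresses the visualization point: when a scene's depth range is narrow, normalizing each prediction to its own min/max before display makes the structure visible.

```python
import torch

def load_nyu_pretrained(model, ckpt_path="pretrained_nyu.pth"):
    """Load NYU-pretrained weights and finetune the whole network.

    The path and checkpoint layout are assumptions; the checkpoint may
    be a bare state_dict or a dict with a "model" entry.
    """
    checkpoint = torch.load(ckpt_path, map_location="cpu")
    state_dict = checkpoint.get("model", checkpoint)
    model.load_state_dict(state_dict)
    # Finetune all layers with a reduced learning rate (illustrative value).
    return torch.optim.SGD(model.parameters(), lr=1.0e-4, momentum=0.9)

def normalize_for_display(depth):
    """Rescale a depth map to 0..255 using its own min/max.

    Without this, a scene with a narrow depth range renders as a nearly
    uniform image even when the prediction is reasonable.
    """
    d_min, d_max = depth.min(), depth.max()
    if (d_max - d_min) < 1e-6:
        # Flat map: avoid division by zero, return an all-black image.
        return torch.zeros_like(depth, dtype=torch.uint8)
    scaled = (depth - d_min) / (d_max - d_min)
    return (scaled * 255).to(torch.uint8)
```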