lorenmt / mtan

The implementation of "End-to-End Multi-Task Learning with Attention" [CVPR 2019].
https://shikun.io/projects/multi-task-attention-network
MIT License

Code modifications to Cityscapes dataset #6

Closed: ghost closed this issue 5 years ago

ghost commented 5 years ago

Hi @lorenmt, thanks for releasing the Cityscapes dataset. The NYUv2 setup is not compatible with the Cityscapes dataset at the moment, since the surface-normal data hasn't been uploaded. For this image-to-image method, do I need to make major changes to the architecture (the CNN) given that surface-normal labels are not available for Cityscapes? Kindly suggest how to go ahead and solve this problem.

lorenmt commented 5 years ago

The official Cityscapes dataset does not include normal labels. However, you can compute them from the derivative of the depth label if you want. If you just need the labels from my provided version, you can simply remove the normal predictions from each model along with the corresponding loss function. You will also need to write your own dataloader for the Cityscapes dataset. I have provided the dataloader for NYUv2, and I think it's quite straightforward to follow its format to write your own Cityscapes dataloader. Hope that helps.
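
A minimal sketch of such a Cityscapes dataloader, assuming the preprocessed data is exported as per-sample .npy files under image/, label/, and depth/ folders; the folder names and array shapes here are assumptions rather than the repository's exact layout, so adjust them to mirror the provided NYUv2 dataloader:

```python
# Minimal Cityscapes Dataset sketch (not the repository's code).
# Assumed layout: root/train/image/{i}.npy, root/train/label/{i}.npy,
# root/train/depth/{i}.npy -- adapt names/shapes to your preprocessing.
import os

import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader


class CityScapes(Dataset):
    def __init__(self, root, train=True):
        self.data_path = os.path.join(root, 'train' if train else 'val')
        self.data_len = len(os.listdir(os.path.join(self.data_path, 'image')))

    def __getitem__(self, index):
        image = np.load(os.path.join(self.data_path, 'image', f'{index}.npy'))
        label = np.load(os.path.join(self.data_path, 'label', f'{index}.npy'))
        depth = np.load(os.path.join(self.data_path, 'depth', f'{index}.npy'))

        image = torch.from_numpy(np.moveaxis(image, -1, 0)).float()   # H x W x 3 -> 3 x H x W
        label = torch.from_numpy(label).long()                        # per-pixel class ids
        depth = torch.from_numpy(depth).float().unsqueeze(0)          # H x W -> 1 x H x W
        return image, label, depth  # no normal map for Cityscapes

    def __len__(self):
        return self.data_len


# usage: loader = DataLoader(CityScapes('cityscapes_root', train=True), batch_size=2, shuffle=True)
```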

ghost commented 5 years ago

I made the modifications for the Cityscapes dataset by creating a dataloader and eliminating the normal computation and its loss. The code runs, but where are the models getting saved, and how do I do inference with the saved models? I'm using model_segnet_mtan.py. Kindly help.

lorenmt commented 5 years ago

Could you elaborate on your problem? What do you mean by inferencing with the saved model? If you want to save a model, please check the PyTorch documentation website.

ghost commented 5 years ago

Thanks for your kind reply. I'm a beginner in deep learning and also with PyTorch. The script "model_segnet_mtan.py", I guess, is the training code for the NYUv2/Cityscapes dataset.

Problem elaboration: I was expecting the training code to dump PyTorch checkpoints, and an inference script that loads a checkpoint along with one input image and outputs one segmentation image plus one depth image.

In your repository the script "model_segnet_mtan.py" doesn't dump any checkpoints. Please correct me if I'm wrong.

lorenmt commented 5 years ago

Sorry, I don’t understand your question. What do you mean by dump checkpoints?

ghost commented 5 years ago

Sorry for my poor English. The script "model_segnet_mtan.py", I guess, is the training code. During training, the model doesn't save any checkpoints. With checkpoints, how do we do inference on each input image? Has an inference script been added to the repository? Kindly suggest.

lorenmt commented 5 years ago

All scripts contain both training and evaluation code. If you want to save or reload the model, you need to check the PyTorch documentation.
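
For reference, the standard PyTorch pattern for dumping a checkpoint after training and reloading it for inference looks roughly like the sketch below. The checkpoint file name, the tiny stand-in module, and the single-image preprocessing are placeholders so the snippet runs on its own; in practice you would apply `torch.save`/`torch.load` to whatever model you instantiate in model_segnet_mtan.py.

```python
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Stand-in for the network built in model_segnet_mtan.py, so this snippet is self-contained.
model = nn.Conv2d(3, 8, kernel_size=3, padding=1).to(device)

# --- after the training loop: dump a checkpoint ---
torch.save(model.state_dict(), 'mtan_cityscapes.pth')

# --- inference: rebuild the same architecture and reload the weights ---
model.load_state_dict(torch.load('mtan_cityscapes.pth', map_location=device))
model.eval()

image = torch.rand(1, 3, 128, 256, device=device)   # one preprocessed input image
with torch.no_grad():
    output = model(image)   # for the real model this would be the segmentation and depth outputs
```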