DrSleep / multi-task-refinenet

Multi-Task (Joint Segmentation / Depth / Surface Normals) Real-Time Light-Weight RefineNet

I want to know what data transformations were used during training #11

Open Dheeru66k opened 1 year ago

Dheeru66k commented 1 year ago

Hi, I wanted to know about the dataloader you used for the NYUD dataset. In my pipeline I am normalising the inputs and converting them to tensors; in my ToTensor() function I read the data and convert it to float. (Screenshot from 2023-02-08 attached in the original issue.)

Is this correct? If possible, could you please share the dataloaders used in this paper?
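
Since the screenshot is not reproduced here, a minimal sketch of the kind of transform described above (normalise the image, then convert everything to float tensors) might look like the following. The mean/std values, the dict keys ("image", "depth", "segm"), and the array layouts are assumptions for illustration, not values taken from the repository:

```python
import numpy as np
import torch

# Assumed ImageNet statistics; the actual values used in the repo may differ.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def to_tensor(sample):
    """Normalise the RGB image and convert image/depth/segm to tensors.

    `sample` is assumed to be a dict with an HxWx3 uint8 'image',
    an HxW float 'depth' map and an HxW integer 'segm' mask.
    """
    image = sample["image"].astype(np.float32) / 255.0
    image = (image - MEAN) / STD
    return {
        "image": torch.from_numpy(image).permute(2, 0, 1).float(),  # CxHxW
        "depth": torch.from_numpy(sample["depth"]).float(),
        "segm": torch.from_numpy(sample["segm"]).long(),
    }
```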

DrSleep commented 1 year ago

See here: https://github.com/DrSleep/multi-task-refinenet#more-to-come

In particular, https://github.com/DrSleep/DenseTorch/blob/master/examples/multitask/train.py#L20-L24
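
For readers without the linked file at hand: dense-prediction training pipelines of this kind typically chain random mirroring, random cropping (with padding for small images), normalisation, and tensor conversion. The sketch below only illustrates that general pattern with plain NumPy; it is not the DenseTorch API, whose class names and defaults in the linked train.py should be treated as authoritative:

```python
import random
import numpy as np

class RandomMirror:
    """Horizontally flip every array in the sample with probability 0.5."""
    def __call__(self, sample):
        if random.random() < 0.5:
            sample = {k: np.ascontiguousarray(v[:, ::-1]) for k, v in sample.items()}
        return sample

class RandomCrop:
    """Crop a square window of side `crop_size` at a random location.

    Assumes every array in the sample is at least crop_size x crop_size;
    a real pipeline would pad smaller images first.
    """
    def __init__(self, crop_size):
        self.crop_size = crop_size

    def __call__(self, sample):
        h, w = sample["image"].shape[:2]
        top = random.randint(0, h - self.crop_size)
        left = random.randint(0, w - self.crop_size)
        return {
            k: v[top:top + self.crop_size, left:left + self.crop_size]
            for k, v in sample.items()
        }

class Compose:
    """Apply a list of transforms in order."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, sample):
        for t in self.transforms:
            sample = t(sample)
        return sample

# Usage sketch, reusing the to_tensor function from the earlier comment:
# train_transform = Compose([RandomMirror(), RandomCrop(400), to_tensor])
```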

Dheeru66k commented 1 year ago

Hi, thanks for the reply. I have referred to the DenseTorch repo and tried running the multi-task model with depth and segmentation heads. It is working: after 600 epochs the loss and MeanIoU became stable with no further improvement, as shown below (using your NYUD dataset). (Training curves attached in the original issue.)

Issue 2: If I want to use the same encoder and decoder as in the multi-task model (MobileNet, Light-Weight RefineNet) for training individual heads, for example depth only, what exactly do I need to change?

(Screenshot attached in the original issue.)

And what parameters should I give for depth if I use MobileNet and Light-Weight RefineNet instead of Xception-65 and DeepLabv3+, as below (showing the multi-task config on the left and the single-task config on the right)? (Screenshot from 2023-02-11 attached in the original issue.)

DrSleep commented 1 year ago

If you want to change the number of tasks, you need to adapt masks_names, criterions, loss_coeffs, num_classes and metrics accordingly. You can compare the configs of the single-task training example and the multi-task training example to confirm this.

For exact hyperparameters, it is better to refer to the original paper(s); IIRC, the DenseTorch default values differ from those used in the paper.
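
As a rough illustration of the config changes mentioned above, here is an assumption-based sketch of how the per-task tuples might shrink from a segmentation + depth setup to a depth-only one. The loss and metric choices, coefficients, and class counts are placeholders for illustration only; the linked DenseTorch single-task and multi-task examples are the authoritative reference:

```python
import torch.nn as nn

# Stand-in criteria: the actual examples use library-specific losses
# (e.g. an inverse-Huber loss for depth), so treat these as placeholders.
segm_criterion = nn.CrossEntropyLoss(ignore_index=255)
depth_criterion = nn.SmoothL1Loss()  # placeholder for the depth loss

# Two-task setup: segmentation + depth. Each per-task field is a tuple
# with one entry per prediction head.
multitask_cfg = dict(
    masks_names=("segm", "depth"),
    criterions=(segm_criterion, depth_criterion),
    loss_coeffs=(0.5, 0.5),       # illustrative weights, not the paper's values
    num_classes=(40, 1),          # e.g. 40 NYUD segmentation classes, 1 depth channel
    metrics=("MeanIoU", "RMSE"),  # metric objects in practice; names shown for illustration
)

# Depth-only setup: every per-task tuple shrinks to a single entry.
depth_only_cfg = dict(
    masks_names=("depth",),
    criterions=(depth_criterion,),
    loss_coeffs=(1.0,),
    num_classes=(1,),
    metrics=("RMSE",),
)
```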