Closed: nbansal90 closed this issue 3 years ago
Yes, that's the right order.
The pretraining of the SDE network is an optional step. By default, our pretrained SDE network is automatically downloaded in the following steps. If you want to use your own pretraining, please make sure to adjust the paths for the following experiments. The relevant file is https://github.com/lhoyer/improving_segmentation_with_selfsupervised_depth/blob/master/models/utils.py#L108.
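To illustrate the kind of adjustment meant here, the pretrained-model lookup could resemble the following sketch. The dictionary name, the Google Drive IDs, and the helper function are hypothetical placeholders for illustration only, not the actual contents of models/utils.py; only the two model names appear in this thread.

```python
# Hypothetical sketch of a pretrained-model registry such as the one
# referenced around models/utils.py#L108. The dict name and the Drive
# IDs below are placeholders, not the repository's real values.
PRETRAINED_SDE_MODELS = {
    # model name -> Google Drive file ID to download it from
    "mono_cityscapes_1024x512_r101dil_aspp_dec5_posepretrain_crop512x512bs4": "DRIVE_ID_DEC5",
    "mono_cityscapes_1024x512_r101dil_aspp_dec6_lr5_fd2_crop512x512bs2": "DRIVE_ID_DEC6",
}

def resolve_pretrained(name: str) -> str:
    """Return the download ID registered for a pretrained SDE model name.

    To use your own pretraining, point the entry for `name` at your own
    uploaded model instead of the default one.
    """
    try:
        return PRETRAINED_SDE_MODELS[name]
    except KeyError:
        raise ValueError(f"Unknown pretrained SDE model: {name}")
```

With this kind of mapping, using your own pretraining amounts to replacing the Drive ID for the corresponding model name.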
Also, the unsupervised data selection for annotation is optional; by default, the labels we selected are used by the other experiments. To use labels selected by your own run of the unsupervised data selection (experiment 211), please copy the content of nlabelsXXX_subset.json from the log directory to loader/preselected_labels.py.
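For orientation, the target in loader/preselected_labels.py presumably has roughly the shape sketched below. The SELECTED_LABELS["cityscapes"]["ds_us"] path is taken from this thread, but the concrete image IDs are invented placeholders, not real selection results.

```python
# Rough sketch of what loader/preselected_labels.py might contain after
# pasting the content of nlabelsXXX_subset.json. The image IDs below are
# invented placeholders; only the SELECTED_LABELS["cityscapes"]["ds_us"]
# structure is taken from the description in this thread.
SELECTED_LABELS = {
    "cityscapes": {
        # "ds_us": labels chosen by the unsupervised data selection
        "ds_us": [
            "aachen/aachen_000042_000019",
            "bochum/bochum_000000_031152",
        ],
    },
}

# Training code would then look up the preselected subset like this:
subset = SELECTED_LABELS["cityscapes"]["ds_us"]
```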
Experiment 210 is only relevant for you if you want to ablate multi-task learning.
I hope that clarifies the process.
Hey @lhoyer! Thank you for your prompt reply. That definitely cleared up a few points of confusion. But just to confirm: is running
python train.py --machine ws --config configs/cityscapes_monodepth_highres_dec5_crop.yml
python train.py --machine ws --config configs/cityscapes_monodepth_highres_dec6_crop.yml
the same as running python run_experiments.py --machine ws --exp 210?
It would be really helpful if you could list the commands I should run (in the correct order) to get the model running and achieve the results of Table 1 in the paper, i.e. an mIoU of 68.01%.
Also, do I need to run both
python train.py --machine ws --config configs/cityscapes_monodepth_highres_dec5_crop.yml
python train.py --machine ws --config configs/cityscapes_monodepth_highres_dec6_crop.yml
or only
python train.py --machine ws --config configs/cityscapes_monodepth_highres_dec6_crop.yml
I actually ran both commands. In my case, I find that there are two models generated at the end of training in the results folder, namely cityscapes-monodepth-101aspp-dec5-crop and cityscapes-monodepth-101aspp-dec6-crop. But I also find that two models were downloaded (most probably from Google Drive, as pointed out by you in https://github.com/lhoyer/improving_segmentation_with_selfsupervised_depth/blob/master/models/utils.py#L108) into the models folder, named mono_cityscapes_1024x512_r101dil_aspp_dec5_posepretrain_crop512x512bs4 and mono_cityscapes_1024x512_r101dil_aspp_dec6_lr5_fd2_crop512x512bs2. So am I supposed to use the models in the results folder or the ones in the models folder?
I think just a step-wise execution order will clarify things for me; going through the paper together with the steps currently mentioned in the README has confused me.
PS: I have updated experiment 211 (label selection) in experiments.py so that it does not run ablations by default. I would recommend pulling these changes to avoid unnecessary trainings.
The easiest way to reproduce the results in Table 1 is running only
python run_experiments.py --machine ws --exp 212
This will download our pretrained self-supervised depth estimation model and the labels selected by our execution of the automatic data selection for annotation.
In case you want to do a full reproduction of all steps of the framework, here are the step-by-step instructions. Please note that you can skip the training steps that you are not interested in. In that case, the subsequent steps will fall back to our previous checkpoints. For example, if you don't want to train your own self-supervised depth estimation model, you can start with step 5.
- If they already exist, delete the downloaded models in the model folder.
- You train self-supervised depth estimation with a frozen encoder initialized from ImageNet
python train.py --machine ws --config configs/cityscapes_monodepth_highres_dec5_crop.yml
- You upload the result folder to Google Drive and adapt https://github.com/lhoyer/improving_segmentation_with_selfsupervised_depth/blob/master/models/utils.py#L108 so that "mono_cityscapes_1024x512_r101dil_aspp_dec5_posepretrain_crop512x512bs4" points to your own model on Google Drive.
- You continue the self-supervised depth estimation training with an unfrozen encoder and ImageNet feature distance
python train.py --machine ws --config configs/cityscapes_monodepth_highres_dec6_crop.yml
- You repeat step 2 with "mono_cityscapes_1024x512_r101dil_aspp_dec6_lr5_fd2_crop512x512bs4".
- You execute the automatic data selection for annotation
python run_experiments.py --machine ws --exp 211
- You copy the content of nlabelsXXX_subset.json from the log directory into SELECTED_LABELS["cityscapes"]["ds_us"] in loader/preselected_labels.py.
- You run the segmentation training
python run_experiments.py --machine ws --exp 212
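Put together, the steps above can be sketched as a single script. This is a dry-run illustration, not part of the repository: RUN is set to echo so nothing is actually executed, the models/ cleanup path is an assumption based on the model names in this thread, and the manual Google Drive upload/path-adaptation steps are left as comments since they cannot be scripted.

```shell
#!/bin/sh
# Dry-run sketch of the full reproduction pipeline described above.
# Set RUN="" instead of RUN="echo" to actually execute the trainings.
RUN="echo"

# Step 1: delete previously downloaded models so your own are used
#         (path is an assumption based on the model names in this thread)
$RUN rm -rf models/mono_cityscapes_*

# Step 2: SDE training with a frozen, ImageNet-initialized encoder
$RUN python train.py --machine ws --config configs/cityscapes_monodepth_highres_dec5_crop.yml

# Step 3: manual - upload the result folder to Google Drive and adapt
#         models/utils.py so the dec5 model name points to your upload.

# Step 4: continue SDE training with an unfrozen encoder
$RUN python train.py --machine ws --config configs/cityscapes_monodepth_highres_dec6_crop.yml

# Step 5: manual - repeat the upload/path adaptation for the dec6 model.

# Step 6: automatic data selection for annotation
$RUN python run_experiments.py --machine ws --exp 211

# Step 7: manual - copy nlabelsXXX_subset.json into loader/preselected_labels.py

# Step 8: segmentation training (reproduces Table 1)
$RUN python run_experiments.py --machine ws --exp 212
```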
I hope that clarifies the confusion.
Hey @lhoyer! Thank you for the exhaustive reply! I think I have got the idea now. Just one last thing: in the steps you have mentioned,
Step 5 : You repeat step 2 with "mono_cityscapes_1024x512_r101dil_aspp_dec6_lr5_fd2_crop512x512bs4".
Did you mean to say step 3 and not step 2?
Hey @lhoyer, thanks for providing the code for this great work!
Lukas, I am looking to replicate the results on my end by simply repeating the sequence in which the experiments should be run. I am getting a bit confused here. As I understand it (please point out if I am wrong), these are the steps to be followed, in chronological order:
First, run the depth pretraining. Here, do I have to run one of the commands or both? I see there is a difference in the total iterations between the two config files (cityscapes_monodepth_highres_dec5_crop.yml and cityscapes_monodepth_highres_dec6_crop.yml), and in one of the configs depth_pretrained is set to None while it is not in the other.
Then, run the unsupervised data selection for annotation configuration. The command is:
python run_experiments.py --machine ws --exp 211
Finally, run the complete multi-task (SDE and semantic segmentation) network. The command is:
python run_experiments.py --machine ws --exp 212
If I am correct on all accounts in the above-mentioned steps, what exactly is the need of EXP ID: 210? I have completed the whole setup for the repository, but I am still confused about the exact order in which the steps have to be followed. Thank you.