DrSleep / DenseTorch

An easy-to-use wrapper for working with dense per-pixel tasks in PyTorch (including multi-task learning)
MIT License

Training with my own dataset #6

Closed (Marwen-Bhj closed this issue 4 years ago)

Marwen-Bhj commented 4 years ago

I have a dataset that contains segmentation masks and depth images, and I would like to train this model on it. As I understand it, my masks and depth images should first be single-channel, and then I need to modify the hyper-parameters in the config.py file according to the paper. My concern is the pre-trained model: in train.py, line 12, the checkpoint is ckpt_postfix = 'mtrflw-nyudv2', which is the multi-task RefineNet trained on the NYUDv2 dataset. Since my dataset consists of outdoor images, I would like to start from a model pre-trained on the KITTI dataset instead. If my understanding of the whole process is right, how can I do that, and what else should I be considering? @DrSleep
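
For context, a quick way to check the single-channel requirement mentioned above is to inspect the array shapes after loading a few samples. This is a minimal sketch using PIL and NumPy; the file paths are placeholders for files in the custom dataset.

```python
import numpy as np
from PIL import Image

# Placeholder paths to one mask and one depth image from the custom dataset.
mask = np.array(Image.open("masks/0001.png"))
depth = np.array(Image.open("depth/0001.png"))

# Single-channel images should load as 2-D arrays, i.e. shape (H, W) with no channel axis.
print(mask.shape, mask.dtype)    # expected: (H, W), integer class indices
print(depth.shape, depth.dtype)  # expected: (H, W), depth values (e.g. uint16)

assert mask.ndim == 2, "segmentation mask has a channel axis -- convert it to single-channel"
assert depth.ndim == 2, "depth map has a channel axis -- convert it to single-channel"
```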

DrSleep commented 4 years ago
1. Segmentation masks and depth images must be read as 2-D arrays -- as per this line in the datareader.
2. ckpt_postfix is simply the postfix in the checkpoint filename; check out this line.
3. pretrained only applies to the encoder; if it is set to True, the encoder weights are initialised from the pre-trained network. You can see the exact datasets used for initialisation here.
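
Building on point 3 and the original question about starting from KITTI weights, below is a minimal sketch of copying an externally pre-trained checkpoint into an encoder. The filename kitti_checkpoint.pth.tar, the 'state_dict' key layout, and the torchvision stand-in encoder are all assumptions for illustration, not DenseTorch's actual API; with DenseTorch you would pass the encoder built in the example train.py instead.

```python
import torch
import torchvision

# Stand-in encoder purely for illustration -- replace with the encoder
# constructed in the example train.py when using DenseTorch.
encoder = torchvision.models.mobilenet_v2()

# Placeholder checkpoint filename and key layout; adjust both to match
# whatever your KITTI-pretrained checkpoint actually contains.
ckpt = torch.load("kitti_checkpoint.pth.tar", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)  # some checkpoints nest the weights under 'state_dict'

# strict=False ignores keys that appear in only one of the two state dicts
# (e.g. a decoder or classifier head); keys present in both must still match in shape.
result = encoder.load_state_dict(state_dict, strict=False)
print("missing keys:", result.missing_keys)
print("unexpected keys:", result.unexpected_keys)
```

Printing the missing and unexpected keys makes it easy to confirm which encoder parameters were actually initialised from the checkpoint before starting training.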