anilsathyan7 / Portrait-Segmentation

Real-time portrait segmentation for mobile devices
MIT License

deconv_fin_munet.tflite #27

Closed san-guy closed 3 years ago

san-guy commented 3 years ago

Which network do I have to train to get deconv_fin_munet.tflite as the final file?

The train.py script (and the subsequent steps mentioned in the README) gives bilinear_fin_munet.tflite as output. Do I need to make any other modifications to get deconv_fin_munet.tflite?

anilsathyan7 commented 3 years ago

In the notebook portrait_segmentation.ipynb, change the convolution block from 'deconv_block_rez' to 'deconv_block' (i.e. transpose convolution) in the decoder part of the model architecture to train the deconv model.
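For reference, the two decoder variants could be sketched roughly as below. The function names follow the notebook, but the bodies here are simplified assumptions (a learned transpose convolution vs. a fixed bilinear resize followed by a regular convolution), not the repo's exact layers:

```python
import tensorflow as tf
from tensorflow.keras import layers

def deconv_block(x, filters):
    # Learned 2x upsampling via transpose convolution
    return layers.Conv2DTranspose(filters, kernel_size=3, strides=2,
                                  padding='same', activation='relu')(x)

def deconv_block_rez(x, filters):
    # Fixed 2x bilinear resize, then a regular convolution
    x = layers.UpSampling2D(size=2, interpolation='bilinear')(x)
    return layers.Conv2D(filters, kernel_size=3, padding='same',
                         activation='relu')(x)
```

Both variants double the spatial resolution, so swapping one for the other in the decoder keeps the rest of the architecture unchanged.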

san-guy commented 3 years ago

Thanks @anilsathyan7 . That was really a quick reply. I will make the changes and run the train again.

One quick clarification: can I use models/transpose_seg/deconv_fin_munet.h5 for pretraining here, with the modifications you just mentioned?

anilsathyan7 commented 3 years ago

This exported model contains additional flatten layers, so you won't be able to use it as a pretrained model directly. Maybe you could load weights from the corresponding layers after defining the model architecture, or try removing the last layer from the model. Usually, we use the '.hdf5' checkpoint for resuming the training process (i.e. as the pretrained model).
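The "load weights from corresponding layers" idea can be sketched as below. The `exported` and `fresh` models here are hypothetical toy stand-ins (the exported one ends in an extra Flatten layer, mimicking the exported .h5); weights are copied only for layers whose names match:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

def backbone():
    # Toy stand-in for the shared part of the architecture
    inp = tf.keras.Input(shape=(8, 8, 3))
    x = layers.Conv2D(4, 3, padding='same', name='enc_conv')(inp)
    return inp, x

# Stand-in for the exported model, with an extra trailing layer
inp, x = backbone()
exported = Model(inp, layers.Flatten(name='extra_flatten')(x))

# Freshly defined training model, without the extra layer
inp2, x2 = backbone()
fresh = Model(inp2, x2)

# Copy weights layer-by-layer where the names match; skip the rest
for layer in fresh.layers:
    try:
        layer.set_weights(exported.get_layer(layer.name).get_weights())
    except ValueError:
        pass  # no layer with that name in the exported model; keep fresh init
```

After the loop, `fresh` carries the exported model's weights for every matching layer and can resume training without the flatten head.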

san-guy commented 3 years ago

I could load the pretrained model from checkpoints/deconv_model-260-0.06.hdf5 and also produce a similar tflite.

Thank you again.
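For completeness, the checkpoint-to-TFLite step could look like this minimal sketch. A toy model stands in for the actual munet architecture, and the file names are illustrative, not the repo's:

```python
import tensorflow as tf

# Toy model standing in for the real architecture loaded from a
# checkpoint such as checkpoints/deconv_model-260-0.06.hdf5
inp = tf.keras.Input(shape=(8, 8, 3))
out = tf.keras.layers.Conv2DTranspose(2, 3, strides=2, padding='same')(inp)
toy = tf.keras.Model(inp, out)
toy.save('toy_ckpt.hdf5')  # HDF5 format, like the repo's checkpoints

# Reload the checkpoint and convert it to a TFLite flatbuffer
model = tf.keras.models.load_model('toy_ckpt.hdf5', compile=False)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()
with open('toy_munet.tflite', 'wb') as f:
    f.write(tflite_bytes)
```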