anilsathyan7 / Portrait-Segmentation

Real-time portrait segmentation for mobile devices
MIT License

Deeplab training & inference normalization #15

Closed emepetres closed 4 years ago

emepetres commented 4 years ago

Hi, first of all great work!

I would like to retrain the `deeplab_nchw.onnx` model with more images to try to improve its accuracy. I understand that `train.py` generates Model Type 1 (bilinear), but how do I train the one based on Deeplab?

Also, at inference time, should the inputs of `deeplab_nchw.onnx` be normalized? If so, should I use the same parameters: `(imgs - np.array([0.50693673, 0.47721124, 0.44640532])) / np.array([0.28926975, 0.27801928, 0.28596011])`?

Thanks

anilsathyan7 commented 4 years ago

Hi, please refer to the official Deeplab repository on GitHub for training. You will also need to make a few changes to train it for two-class segmentation on a custom dataset.

You can just divide the pixel values by 255.0 to normalize them to the range 0...1, for both training and inference. Once it's trained, use the script given in the ipynb notebook to convert it to the ONNX channel-first format.

  1. https://github.com/anilsathyan7/Portrait-Segmentation#deeplab-quantization-aware-training-and-ml-accelerators (second paragraph)
  2. https://github.com/tensorflow/models/tree/master/research/deeplab
  3. https://github.com/anilsathyan7/Portrait-Segmentation/blob/master/deepstream_multiseg/onnx_nchw_conversion.ipynb
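For reference, the preprocessing described above (divide by 255, then feed the channel-first ONNX model) could look roughly like this. This is a minimal sketch, not code from the repo: the `preprocess` function name and the 256x256 input size are assumptions, so adjust the shape to whatever resolution the exported model expects.

```python
import numpy as np

def preprocess(image_uint8):
    """Normalize an HWC uint8 RGB image to 0...1 and convert it to
    NCHW float32, the channel-first layout the ONNX model expects.
    Assumed input size here is 256x256; adjust for your export."""
    img = image_uint8.astype(np.float32) / 255.0  # scale pixels to 0...1
    img = np.transpose(img, (2, 0, 1))            # HWC -> CHW
    return np.expand_dims(img, axis=0)            # add batch dim -> NCHW

# Example with a dummy 256x256 RGB frame
frame = np.zeros((256, 256, 3), dtype=np.uint8)
inp = preprocess(frame)
print(inp.shape)  # (1, 3, 256, 256)
```

The resulting array can be passed directly as the input tensor of an ONNX runtime session; no mean/std normalization is applied, matching the answer above.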
emepetres commented 4 years ago

Ok I understand, thank you very much @anilsathyan7 !