Open seboz123 opened 2 years ago
@seboz123,
Generally, if the model was trained on RGB images, the code has to be modified to support grayscale images.
Grayscale images carry data in only one channel, but object detection needs three channels (the inference code was written for three channels). You can duplicate the single channel into two more channels. For more details please refer here. Thank you.
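The channel duplication can be done before the images are fed to the model; a minimal NumPy sketch (array names and sizes are illustrative, not from the Object Detection API):

```python
import numpy as np

# A grayscale image is a single-channel array of shape (H, W) or (H, W, 1).
gray = np.random.randint(0, 256, size=(224, 224), dtype=np.uint8)

# Repeat the one channel three times along a new last axis so the result
# has shape (H, W, 3); all three channels hold identical pixel values.
rgb = np.repeat(gray[..., np.newaxis], 3, axis=-1)

assert rgb.shape == (224, 224, 3)
assert np.array_equal(rgb[..., 0], gray)
```

Since every channel is identical, a model that expects RGB input will accept the image without any other change to the pipeline.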
Hey @chunduriv ,
thanks for the quick response. I was just wondering how the Object Detection API framework handles those images: I can save them as 1-channel grayscale PNGs, and they are loaded correctly by the API without further modifications.
@seboz123,
> but they are loaded correctly by the API, without further modifications.
Did you evaluate the model?
Could you share the complete code/reference link that you have trained? Thank you.
Hey,
yes, I did evaluate it. Unfortunately there is no easy way to share the code. Essentially I am using https://github.com/tensorflow/models/blob/master/research/object_detection/model_main_tf2.py
to run the training, with the configuration https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/ssd_mobilenet_v2_pets_keras.config.
I can share an image if that helps.
Hi,
I trained a MobileNet model on grayscale images. I was wondering what happens under the hood, since MobileNet only works with 3-channel images. Does the Object Detection API just repeat the 1-channel image three times? Thanks in advance.
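For context, my guess (not verified against the Object Detection API source) is that the expansion happens at decode time: TensorFlow's image decoders such as tf.io.decode_png and tf.io.decode_image take a channels argument, and with channels=3 a single-channel PNG is expanded to RGB by replicating the gray value. A NumPy sketch of that behaviour:

```python
import numpy as np

# What decoding a 1-channel PNG with channels=3 amounts to:
# the single gray value is replicated into all three channels.
gray = np.array([[10, 20], [30, 40]], dtype=np.uint8)            # (H, W)
rgb = np.broadcast_to(gray[..., None], gray.shape + (3,)).copy()  # (H, W, 3)

# All three channels carry the same data, so a 3-channel model accepts it.
assert (rgb[..., 0] == rgb[..., 1]).all()
assert (rgb[..., 1] == rgb[..., 2]).all()
```

If that is what the API does internally, it would explain why 1-channel PNGs train and evaluate without any modification to the pipeline.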