agentmorris / MegaDetector

MegaDetector is an AI model that helps conservation folks spend less time doing boring things with camera trap images.

Prediction results from locally loaded SavedModel significantly differ from TF Serving model #30

Closed agentmorris closed 1 year ago

agentmorris commented 1 year ago

Hi,

I'm working with MegaDetector v3 (2019.05.30) and noticed that the prediction results for a given image differ significantly between the locally loaded SavedModel and the TF Serving model, where:

locally loaded SavedModel = saved_model_normalized_megadetector_v3_tf19.tar.gz
TF Serving model = saved_model_megadetector_v3_tf19.zip

The locally loaded SavedModel performs noticeably worse than the TF Serving model. Is this something you have seen before? Happy to share my code (I'm on TF 2.1).

Thanks & best regards,

Mike


Issue cloned from Microsoft/CameraTraps, original issue posted by MBKraus on May 07, 2020.

agentmorris commented 1 year ago

Hi Mike, I haven't tried using the SavedModel in saved_model_normalized_megadetector_v3_tf19.tar.gz locally before - I believe that's the one that powers our TensorFlow Hub demo. I think the difference between the two is that the first takes the input image in the common signature format, with pixel values normalized to the range [0, 1], while the second takes pixel values in the usual uint8 [0, 255] range. Might this explain the difference you're seeing?
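
For reference, a minimal sketch of loading the normalized SavedModel locally in TF 2.x and preparing the input for each variant; the extraction path and the "serving_default" signature key are assumptions, so inspect model.signatures on your export to confirm the actual input name and dtype:

```python
# Minimal sketch: loading the "normalized" SavedModel locally in TF 2.x.
# The extraction path and the "serving_default" signature key are assumptions;
# check model.signatures on your export to confirm them.
import numpy as np
import tensorflow as tf
from PIL import Image

model = tf.saved_model.load("./saved_model_normalized_megadetector_v3_tf19")  # assumed path
infer = model.signatures["serving_default"]

img = np.array(Image.open("example.jpg").convert("RGB"))  # uint8, H x W x 3

# The "normalized" variant expects float pixels in [0, 1];
# the other SavedModel expects raw uint8 pixels in [0, 255].
batch = np.expand_dims(img.astype(np.float32) / 255.0, axis=0)

# Look up the signature's input name rather than hard-coding it.
input_name = list(infer.structured_input_signature[1].keys())[0]
outputs = infer(**{input_name: tf.constant(batch)})
```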

Let us know how you plan to use the model - do you intend to use TF Serving or re-train with your own data?


(Comment originally posted by yangsiyu007)

agentmorris commented 1 year ago

Thank you! - that was exactly the issue! I completely missed the fact that the filename says 'normalized'! We will be using it to detect animals, crop them, and feed the crops into a classifier that identifies species from camera trap footage (for a non-profit digital wildlife intelligence platform). For now we're going to use TF Serving to do the trick, but we might want to retrain the model at some point in the future (hence the question about the local SavedModel format).
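
A rough sketch of that detect-crop-classify flow, assuming boxes come back in the usual TF Object Detection API format (normalized [ymin, xmin, ymax, xmax]); the confidence threshold and `classify_crop` below are hypothetical placeholders, not part of MegaDetector:

```python
# Rough sketch of the detect -> crop -> classify flow described above.
# Box format ([ymin, xmin, ymax, xmax], normalized to [0, 1]) follows the
# usual TF Object Detection API convention; the threshold and
# classify_crop() are assumptions, not part of MegaDetector.
from PIL import Image

def crop_detections(image_path, boxes, scores, threshold=0.8):
    """Return PIL crops for detections above the confidence threshold."""
    img = Image.open(image_path).convert("RGB")
    w, h = img.size
    crops = []
    for (ymin, xmin, ymax, xmax), score in zip(boxes, scores):
        if score >= threshold:
            crops.append(img.crop((xmin * w, ymin * h, xmax * w, ymax * h)))
    return crops

# for crop in crop_detections("example.jpg", boxes, scores):
#     label = classify_crop(crop)  # hypothetical species classifier
```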


(Comment originally posted by MBKraus)

agentmorris commented 1 year ago

Great, do drop us a line when you're done!

I think both of these formats can be used for TF Serving (they just expect different input formats). If you want to re-train, both the last checkpoint files and the frozen inference graph can be used.
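
In case it helps, a minimal sketch of querying either export through TF Serving's REST API; the model name and port below are assumptions, only the /v1/models/<name>:predict endpoint shape is standard TF Serving:

```python
# Minimal sketch of a TF Serving REST request for either export.
# Model name and port are assumptions; only the /v1/models/<name>:predict
# endpoint shape is standard TF Serving.
import numpy as np
import requests
from PIL import Image

img = np.array(Image.open("example.jpg").convert("RGB"))

# uint8 export (saved_model_megadetector_v3_tf19.zip): raw [0, 255] pixels.
payload = {"instances": np.expand_dims(img, 0).tolist()}

# For the normalized export, send float pixels in [0, 1] instead:
# payload = {"instances": np.expand_dims(img.astype(np.float32) / 255.0, 0).tolist()}

resp = requests.post(
    "http://localhost:8501/v1/models/megadetector:predict",  # assumed name/port
    json=payload,
)
print(resp.json())
```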

Here's a notebook from when I last tried to deploy the model using TF Serving, although it looks like you've got it sorted. Also, we just released MegaDetector v4.1, trained with additional data.


(Comment originally posted by yangsiyu007)