nixingyang / AdaptiveL2Regularization

[ICPR 2020] Adaptive L2 Regularization in Person Re-Identification
https://ieeexplore.ieee.org/document/9412481
MIT License
64 stars 23 forks

Inference model #3

Closed mdne closed 4 years ago

mdne commented 4 years ago

Hi, I trained the model on my own dataset and got a training_model.h5 file. Then I tried to load the model with the Keras load_model function like this: model = tf.keras.models.load_model('training_model.h5', custom_objects={"AdaptiveL1L2": AdaptiveL1L2})

and got an error:

ValueError: Layer #0 (named "resnet" in the current model) was found to correspond to layer resnet in the save file. However the new layer resnet expects 6 weights, but the saved weights have 10 elements.

So I have several questions:

  1. How do I load the model for inference?
  2. How do I run inference on one or more images without the custom data generator?
  3. What is the size of the embeddings?

Thanks

nixingyang commented 4 years ago

@mdne Hi, The native load_model function yields this error because the l2_regularization_factor tensors are not initialized. The issue can be addressed by calling test_on_batch before load_weights, as shown here. Returning to your questions:

  1. Define the model using init_model, then call test_on_batch, followed by load_weights. These steps are the same as in the evaluation procedure (see the sketch below).
  2. A minimal snippet would be inference_model.predict_on_batch(...). A more detailed example can be found in the extract_features function. Please note that training_model differs from inference_model.
  3. The size of the embeddings is 4096. Based on my experience from other projects, you may shrink it to 256 or 128 using PCA at the cost of a reasonable performance degradation.

Happy experimenting :-) All the best. Xingyang
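A minimal sketch of the loading sequence described above. The arguments of init_model, the 384x128x3 input resolution, and the placeholder target are assumptions, not taken from the repository; the real training model may expect different arguments and several target arrays.

```python
import numpy as np

# Assumption: init_model builds both models and returns them as a pair;
# pass the same arguments you used during training.
training_model, inference_model = init_model()

# Run one dummy batch so that the l2_regularization_factor tensors are created
# before the saved weights are restored.
dummy_images = np.zeros((1, 384, 128, 3), dtype=np.float32)  # assumed input shape
dummy_labels = np.zeros((1,), dtype=np.int32)  # placeholder target
training_model.test_on_batch(dummy_images, dummy_labels)

# Now the weights saved during training can be loaded without the shape mismatch.
training_model.load_weights("training_model.h5")

# Embeddings are computed with the inference model, not the training model.
embeddings = inference_model.predict_on_batch(dummy_images)
print(embeddings.shape)  # expected (1, 4096)
```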

mdne commented 4 years ago

Thank you for the explanation. Am I right that your method assumes defining and loading the training model and running inference inside a callback? Is there a way to load the inference model directly from the file (training_model.h5) and use it? Or maybe to save the model after training as an inference model?

nixingyang commented 4 years ago

  1. Yes, that is how my implementation works. After initializing the models (training_model and inference_model) with init_model and calling test_on_batch and load_weights in turn, you may directly compute the embeddings via inference_model.predict_on_batch(...). It is not mandatory to run the evaluation pipeline as a callback.
  2. It is not straightforward to load the inference model directly from the weights file, for the same reason explained in my previous post. You may write a stand-alone script for inference if that is what you need (a rough sketch follows below).
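A rough, illustrative stand-alone inference sketch, not taken from the repository. It assumes inference_model has already been restored as in the earlier snippet, that inputs are 384x128 RGB images, and that resizing is the only preprocessing; the repository's extract_features function shows the actual pipeline used in training.

```python
import glob

import numpy as np
from PIL import Image


def load_images(pattern, target_size=(128, 384)):
    """Read images matching a glob pattern and stack them into one batch.

    target_size is (width, height) as expected by PIL; 128x384 is an assumed
    person re-ID resolution, not necessarily the one used in this repository.
    """
    batch = []
    for path in sorted(glob.glob(pattern)):
        image = Image.open(path).convert("RGB").resize(target_size)
        batch.append(np.asarray(image, dtype=np.float32))
    return np.stack(batch, axis=0)


# Hypothetical directory of query images.
images = load_images("gallery/*.jpg")
embeddings = inference_model.predict_on_batch(images)  # expected shape (N, 4096)
```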

stale[bot] commented 4 years ago

Closing as stale. Please reopen if you'd like to work on this further.