bumble-tech / private-detector

Bumble's Private Detector - a pretrained model for detecting lewd images
https://medium.com/bumble-tech/bumble-inc-open-sources-private-detector-and-makes-another-step-towards-a-safer-internet-for-women-8e6cdb111d81
Apache License 2.0
1.31k stars 97 forks

onnx model, please #8

Closed maophp closed 1 year ago

maophp commented 1 year ago

I am writing to seek assistance with converting the model into the ONNX format. I have encountered some unresolved issues during the conversion process, and I am hoping to receive your guidance in order to successfully convert the model to the ONNX format.

Steeeephen commented 1 year ago

Hey @maophp, thanks for raising the issue.

I can't help solve the errors you're running into without knowing what the errors are. Could you add more information?

maophp commented 1 year ago

my code is:

```python
import tensorflow as tf
import tf2onnx

tf_model = tf.keras.models.load_model('./saved_model')

input_shape = (480, 480, 3)
image = tf.reshape(input_shape, -1)
tf_model.build([input_shape])
onnx_model, _ = tf2onnx.convert.from_keras(tf_model)
onnx_file_path = './model.onnx'
tf2onnx.save_model(onnx_file_path, onnx_model)
```

and got this message: ValueError: Exception encountered when calling layer 'inference_model' (type InferenceModel).

Cannot call custom layer inference_model of type <class 'keras.saving.legacy.saved_model.load.InferenceModel'>, because the call function was not serialized to the SavedModel.Please try one of the following methods to fix this issue:

(1) Implement `get_config` and `from_config` in the layer/model class, and pass the object to the `custom_objects` argument when loading the model. For more details, see: https://www.tensorflow.org/guide/keras/save_and_serialize

(2) Ensure that the subclassed model or layer overwrites `call` and not `__call__`. The input shape and dtype will be automatically recorded when the object is called, and used when saving. To manually specify the input shape/dtype, decorate the call function with `@tf.function(input_signature=...)`.

Call arguments received by layer 'inference_model' (type InferenceModel):
  • unused_args=('tf.Tensor(shape=(32,), dtype=float32)',)
  • unused_kwargs={'training': 'False'}
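The input-signature idea from remedy (2) in the error above can be sketched as follows. This is a minimal, hypothetical stand-in (a plain `tf.Module` with toy weights, not the actual Private Detector network): decorating the call function with `@tf.function(input_signature=...)` records the input shape and dtype, so it is serialized into the SavedModel and a converter can trace it without the Keras call path.

```python
import tempfile

import tensorflow as tf

# Hypothetical stand-in for the real model: a global-average-pool
# followed by a toy linear head (3 channels -> 2 classes).
class WrappedModel(tf.Module):
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.zeros([3, 2]))

    # The explicit input_signature is what gets serialized into the
    # SavedModel, per remedy (2) in the error message.
    @tf.function(input_signature=[tf.TensorSpec([None, 480, 480, 3], tf.float32)])
    def __call__(self, images):
        pooled = tf.reduce_mean(images, axis=[1, 2])  # -> [batch, 3]
        return pooled @ self.w                        # -> [batch, 2]

model = WrappedModel()
export_dir = tempfile.mkdtemp()
tf.saved_model.save(model, export_dir)  # the call is saved with its signature
```

A model re-exported this way carries a concrete serving signature, which conversion tools can consume directly instead of trying to rebuild the unserialized Keras `call`.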
maophp commented 1 year ago

I solved ...

Steeeephen commented 1 year ago

Ahh nice one! Could you add your solution below for anyone else who runs into this issue in the future?

bonlime commented 1 year ago

I was able to convert the model to ONNX with one line of code, without any issues, using this: https://github.com/onnx/tensorflow-onnx
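The one-liner presumably refers to tf2onnx's command-line converter, which reads a SavedModel's serving signature directly. Assuming the `./saved_model` directory from the snippet earlier in the thread, the invocation would look like:

```shell
# Convert the SavedModel via its serving signature, bypassing the
# Keras call() that was never serialized (paths assumed from above).
python -m tf2onnx.convert --saved-model ./saved_model --output model.onnx
```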