Closed. DocDriven closed this issue 4 years ago.
After further testing, I think the tflite model is not working as intended. Although it can be loaded and produces somewhat plausible results, it appears to be completely deterministic. If the sampling were working correctly, repeated runs on the same input would not produce identical outputs.
This leads me to believe that a not-implemented error for the random op goes unnoticed by the converter. Maybe this proves helpful for further investigation.
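One way to check the determinism claim is to invoke the converted interpreter twice on the same input and compare the outputs; with a working sampling layer they should differ. A minimal sketch, assuming a converted model file named vae.tflite (the filename and input are placeholders, not from the original report):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="vae.tflite")  # placeholder path
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Random test input matching the model's declared input shape.
x = np.random.rand(*inp["shape"]).astype(np.float32)

results = []
for _ in range(2):
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    results.append(interpreter.get_tensor(out["index"]).copy())

# If the random sampling op were actually executed, this should print False.
print("identical outputs:", np.allclose(results[0], results[1]))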
Perhaps using SELECT_TF_OPS can help in this case:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
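For context, a minimal sketch of where this setting goes in the conversion flow; the Sequential model here is a stand-in, not the reporter's VAE:

import tensorflow as tf

# Stand-in model; in the issue this would be the Keras VAE.
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # native TFLite ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to selected TF kernels
]
tflite_model = converter.convert()

SELECT_TF_OPS pulls otherwise unsupported ops in through the TensorFlow kernel fallback, at the cost of a larger runtime binary.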
See gist for your reference. Thanks!
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.
Closing as stale. Please reopen if you'd like to work on this further.
System information
Describe the current behavior
I am trying to convert a variational autoencoder into a tflite model. During the process, I stumbled across a weird bug. When explicitly specifying the shape of the sampling layer like this:
tf.random.normal(shape=(10,))
, the model converts without any errors. But when the shape is not hard-coded into the model, e.g. when it is inferred from the input vector(s), like so:
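(The exact snippet is not included above; a plausible reconstruction, following the common Keras VAE sampling pattern, where z_mean is an assumed name for the latent mean tensor:)

batch = tf.shape(z_mean)[0]
dim = tf.shape(z_mean)[1]
epsilon = tf.random.normal(shape=(batch, dim))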
I get an error saying that tf.random.normal is not supported by the TF Lite runtime.
Describe the expected behavior
It doesn't make sense to me that the possibly buggy inference of a shape should influence whether an op is supported or not. Also, I am not sure whether the hard-coded model can be trusted.
Code to reproduce the issue
Just set the EXPLICIT flag to False to get the error.
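The reproduction script itself is not included above; the following is a minimal sketch of what it might look like, assuming a single Keras sampling layer. All names, shapes, and the model structure are illustrative assumptions, not the original code:

import tensorflow as tf

EXPLICIT = True  # set to False to reproduce the conversion error

class Sampling(tf.keras.layers.Layer):
    def call(self, inputs):
        z_mean, z_log_var = inputs
        if EXPLICIT:
            # hard-coded shape: conversion succeeds
            epsilon = tf.random.normal(shape=(10,))
        else:
            # shape inferred from the input: conversion reportedly fails
            epsilon = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon

z_mean = tf.keras.Input(shape=(10,))
z_log_var = tf.keras.Input(shape=(10,))
z = Sampling()([z_mean, z_log_var])
model = tf.keras.Model([z_mean, z_log_var], z)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()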
Can you confirm the strange behavior and whether tf.random.normal is implemented or not?