tensorflow / tensorflow

An Open Source Machine Learning Framework for Everyone

RandomStandardNormal not supported by TensorFlow Lite runtime #28791

Closed: lisaong closed this issue 5 years ago

lisaong commented 5 years ago

System information

Text output from tflite_convert:

--------------------------------------------------------------------------
ConverterError                           Traceback (most recent call last)
<ipython-input-36-c548bab089a8> in <module>
----> 1 tflite_model = converter.convert()

~\AppData\Local\Continuum\anaconda3\envs\diec\lib\site-packages\tensorflow\lite\python\lite.py in convert(self)
    454           input_tensors=self._input_tensors,
    455           output_tensors=self._output_tensors,
--> 456           **converter_kwargs)
    457     else:
    458       result = _toco_convert_graph_def(

~\AppData\Local\Continuum\anaconda3\envs\diec\lib\site-packages\tensorflow\lite\python\convert.py in toco_convert_impl(input_data, input_tensors, output_tensors, *args, **kwargs)
    440   data = toco_convert_protos(model_flags.SerializeToString(),
    441                              toco_flags.SerializeToString(),
--> 442                              input_data.SerializeToString())
    443   return data
    444 

~\AppData\Local\Continuum\anaconda3\envs\diec\lib\site-packages\tensorflow\lite\python\convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str)
    203       stderr = _try_convert_to_unicode(stderr)
    204       raise ConverterError(
--> 205           "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
    206   finally:
    207     # Must manually cleanup files.

ConverterError: TOCO failed. See console for info.
2019-05-17 14:44:52.027648: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: RandomStandardNormal
2019-05-17 14:44:52.029852: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "CPU"') for unknown op: WrapDatasetVariant
2019-05-17 14:44:52.030246: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: WrapDatasetVariant
2019-05-17 14:44:52.030690: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "CPU"') for unknown op: UnwrapDatasetVariant
2019-05-17 14:44:52.031017: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: UnwrapDatasetVariant
2019-05-17 14:44:52.031571: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: RandomStandardNormal
2019-05-17 14:44:52.034015: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 43 operators, 65 arrays (0 quantized)
2019-05-17 14:44:52.035043: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 43 operators, 65 arrays (0 quantized)
2019-05-17 14:44:52.040937: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 14 operators, 32 arrays (0 quantized)
2019-05-17 14:44:52.041523: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 14 operators, 32 arrays (0 quantized)
2019-05-17 14:44:52.042039: I tensorflow/lite/toco/allocate_transient_arrays.cc:345] Total transient array allocated size: 256 bytes, theoretical optimal value: 256 bytes.
2019-05-17 14:44:52.044822: E tensorflow/lite/toco/toco_tooling.cc:421] We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
 and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, EXP, FULLY_CONNECTED, LOGISTIC, MUL. Here is a list of operators for which you will need custom implementations: RandomStandardNormal.
Traceback (most recent call last):
  File "C:\Users\issohl\AppData\Local\Continuum\anaconda3\envs\diec\Scripts\toco_from_protos-script.py", line 10, in <module>
    sys.exit(main())
  File "C:\Users\issohl\AppData\Local\Continuum\anaconda3\envs\diec\lib\site-packages\tensorflow\lite\toco\python\toco_from_protos.py", line 59, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "C:\Users\issohl\AppData\Local\Continuum\anaconda3\envs\diec\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
    _sys.exit(main(argv))
  File "C:\Users\issohl\AppData\Local\Continuum\anaconda3\envs\diec\lib\site-packages\tensorflow\lite\toco\python\toco_from_protos.py", line 33, in execute
    output_str = tensorflow_wrap_toco.TocoConvert(model_str, toco_str, input_str)
Exception: We are continually in the process of adding support to TensorFlow Lite for more ops. It would be helpful if you could inform us of how this conversion went by opening a github issue at https://github.com/tensorflow/tensorflow/issues/new?template=40-tflite-op-request.md
 and pasting the following:

Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If those are native TensorFlow operators, you might be able to use the extended runtime by passing --enable_select_tf_ops, or by setting target_ops=TFLITE_BUILTINS,SELECT_TF_OPS when calling tf.lite.TFLiteConverter(). Otherwise, if you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ADD, EXP, FULLY_CONNECTED, LOGISTIC, MUL. Here is a list of operators for which you will need custom implementations: RandomStandardNormal.

Also, please include a link to a GraphDef or the model if possible.

Basically, this boils down to the model's use of the following call:

epsilon = K.random_normal(shape=(batch, dim))

# This model is based heavily on VAE example from Keras
# https://github.com/keras-team/keras/blob/master/examples/variational_autoencoder.py

from keras.layers import Lambda, Input, Dense
from keras.models import Model
from keras.utils import plot_model
from keras import backend as K
from keras.losses import mse

class VAE:
    def __init__(self, original_dim, intermediate_dim, latent_dim):
        """Creates a variational autoencoder for continuous values
        """
        self.original_dim = original_dim
        input_shape = (original_dim,)

        # VAE model = encoder + decoder
        # build encoder model
        inputs = Input(shape=input_shape, name='encoder_input')
        x = Dense(intermediate_dim, activation='relu')(inputs)
        x = Dense(intermediate_dim, activation='relu')(x)
        self.z_mean = Dense(latent_dim, name='z_mean')(x)
        self.z_log_var = Dense(latent_dim, name='z_log_var')(x)

        # use reparameterization trick to push the sampling out as input
        # note that "output_shape" isn't necessary with the TensorFlow backend
        z = Lambda(VAE.sampling, output_shape=(latent_dim,), name='z')([self.z_mean, self.z_log_var])

        # instantiate encoder model
        self.encoder = Model(inputs, [self.z_mean, self.z_log_var, z], name='encoder')

        # build decoder model
        latent_inputs = Input(shape=(latent_dim,), name='z_sampling')
        x = Dense(intermediate_dim, activation='relu')(latent_inputs)
        x = Dense(intermediate_dim, activation='relu')(x)
        outputs = Dense(original_dim, activation='sigmoid')(x)

        # instantiate decoder model
        self.decoder = Model(latent_inputs, outputs, name='decoder')

        # instantiate VAE model
        outputs = self.decoder(self.encoder(inputs)[2])
        self.vae = Model(inputs, outputs, name='vae_mlp')

    def describe(self):
        """Display model summaries and saves the architectures to PNG"""
        self.encoder.summary()
        plot_model(self.encoder, to_file='vae_mlp_encoder.png', show_shapes=True)
        self.decoder.summary()
        plot_model(self.decoder, to_file='vae_mlp_decoder.png', show_shapes=True)
        self.vae.summary()
        plot_model(self.vae, to_file='vae_mlp.png', show_shapes=True)

    def fit(self, X, optimizer='adam', **kwargs):
        """Fits the model"""

        def vae_loss_func(x_true, x_pred):
            # Keras passes (y_true, y_pred) to the loss function, in that order
            reconstruction_loss = mse(x_true, x_pred)
            reconstruction_loss *= self.original_dim
            kl_loss = 1 + self.z_log_var - K.square(self.z_mean) - K.exp(self.z_log_var)
            kl_loss = K.sum(kl_loss, axis=-1)
            kl_loss *= -0.5
            return K.mean(reconstruction_loss + kl_loss)

        self.vae.compile(optimizer=optimizer, loss=vae_loss_func)
        return self.vae.fit(X, X, **kwargs)

    def evaluate(self, X, **kwargs):
        """Evaluate the model"""
        return self.vae.evaluate(x=X, y=X, **kwargs)

    # reparameterization trick
    # instead of sampling from Q(z|X), sample epsilon = N(0,I)
    # z = z_mean + sqrt(var) * epsilon
    @staticmethod
    def sampling(args):
        """Reparameterization trick by sampling from an isotropic unit Gaussian.
        # Arguments
            args (tensor): mean and log of variance of Q(z|X)
        # Returns
            z (tensor): sampled latent vector
        """
        z_mean, z_log_var = args
        batch = K.shape(z_mean)[0]
        dim = K.int_shape(z_mean)[1]

        # by default, random_normal has mean = 0 and std = 1.0
        epsilon = K.random_normal(shape=(batch, dim))
        return z_mean + K.exp(0.5 * z_log_var) * epsilon

Command used to reproduce:

See Jupyter Notebook: https://github.com/lisaong/diec/blob/tflite-mcu/day3/Anomaly_detection_VAE.ipynb

Any other info / logs

Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

muddham commented 5 years ago

@lisaong Please provide details about the platform you are using (operating system, architecture) and include your TensorFlow version. Also, did you compile from source or install a binary?

If possible, also include the exact command used to produce the output in your test case. If you are unclear about what to include, see the template displayed when opening a new GitHub issue.

We ask for this in the issue submission template because it is really difficult to help without that information. Thanks!

lisaong commented 5 years ago

Hi @muddham, provided info as requested. Thank you.

muddham commented 5 years ago

@lisaong Please refer to link1 and link2. Please try those suggestions and let us know how it progresses.

lisaong commented 5 years ago

@muddham, thanks for the pointers. After reading those links, one part of https://www.tensorflow.org/lite/guide/ops_custom is still unclear to me.

In particular, how do I do this part?

"When initializing the OpResolver, add the custom op into the resolver; this will register the operator with TensorFlow Lite so that TensorFlow Lite can use the new implementation."

Where does this code run?

tflite::ops::builtin::BuiltinOpResolver builtins;
builtins.AddCustom("Sin", Register_SIN());

Is there an end-to-end example of registering a custom TFLite op?

hamlatzis commented 5 years ago

As @lisaong asked in https://github.com/tensorflow/tensorflow/issues/28791#issuecomment-493622520, is there a full sample for creating even the simplest custom operation in TensorFlow Lite?

To learn TensorFlow and TensorFlow Lite, I created a model in TensorFlow with a simple custom operation (I implemented my own addition, even though it exists in both libraries). After training the model, I converted it to .tflite.

But I am now unable to port my operation's implementation to the Lite runtime so that I can use it.

lisaong commented 5 years ago

Found the solution, at least for my issue: https://www.tensorflow.org/lite/guide/ops_select

RandomStandardNormal is part of the whitelist: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/toco/tflite/whitelisted_flex_ops.cc

# RandomStandardNormal is not a TensorFlow Lite builtin, so we need to use
# SELECT_TF_OPS to include it (converter is the TFLiteConverter from earlier)

import tensorflow as tf

converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                        tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()

This doesn't resolve the issue reported by @hamlatzis, though.

jvishnuvardhan commented 5 years ago

I am closing this issue as it was resolved. If the problem persists, or for any new issues, please open a new ticket so that it is easier for the community to follow. Thanks!

lisaong commented 5 years ago

Hi @hamlatzis,

It turns out the SELECT_TF_OPS option is very unwieldy for target systems that are not iOS or Android. In my case I'm trying to compile the model for a Raspberry Pi. The problem with SELECT_TF_OPS is that you have to build the (big) full TensorFlow library, and your mileage may vary on other platforms.

Here's an example of how the custom op can be registered. I adapted minimal.cc to register the op. Hopefully this will help.

Step 1: Implement your custom operator.


// Prepare runs when tensors are (re)allocated: validate inputs and size the
// output tensor here.
TfLiteStatus RandomStandardNormal_Prepare(TfLiteContext* context, TfLiteNode* node) {
  ...
  return kTfLiteOk;
}

// Eval runs on every inference: this is where the sampling happens.
TfLiteStatus RandomStandardNormal_Eval(TfLiteContext* context, TfLiteNode* node) {
  ...
  return kTfLiteOk;
}

// A TfLiteRegistration bundles {init, free, prepare, invoke}; init and free
// are not needed for this op, so they are left as nullptr.
TfLiteRegistration* Register_RandomStandardNormal() {
  static TfLiteRegistration r = {nullptr, nullptr,
      RandomStandardNormal_Prepare,
      RandomStandardNormal_Eval};
  return &r;
}
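
For anyone needing a concrete starting point, here is a minimal sketch of what those elided bodies could look like. It assumes the op's shape input is a constant int32 tensor, and it uses std::normal_distribution, which will not reproduce TensorFlow's RNG stream exactly:

#include <random>

TfLiteStatus RandomStandardNormal_Prepare(TfLiteContext* context, TfLiteNode* node) {
  // Input 0 is a 1-D int32 tensor holding the requested output shape
  // (assumed constant here; a shape computed at runtime needs dynamic tensors).
  const TfLiteTensor* shape = &context->tensors[node->inputs->data[0]];
  TfLiteTensor* output = &context->tensors[node->outputs->data[0]];
  TfLiteIntArray* output_dims = TfLiteIntArrayCreate(shape->dims->data[0]);
  for (int i = 0; i < shape->dims->data[0]; ++i) {
    output_dims->data[i] = shape->data.i32[i];
  }
  // ResizeTensor takes ownership of output_dims.
  return context->ResizeTensor(context, output, output_dims);
}

TfLiteStatus RandomStandardNormal_Eval(TfLiteContext* context, TfLiteNode* node) {
  TfLiteTensor* output = &context->tensors[node->outputs->data[0]];
  int count = 1;
  for (int i = 0; i < output->dims->size; ++i) count *= output->dims->data[i];
  // Mean 0, stddev 1, matching K.random_normal's defaults.
  static std::default_random_engine engine;
  std::normal_distribution<float> dist(0.0f, 1.0f);
  float* out = output->data.f;
  for (int i = 0; i < count; ++i) out[i] = dist(engine);
  return kTfLiteOk;
}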

Step 2: Register it with the resolver.


  // Build the interpreter
  tflite::ops::builtin::BuiltinOpResolver resolver;

  // Register custom operators
  resolver.AddCustom("RandomStandardNormal", Register_RandomStandardNormal());

  InterpreterBuilder builder(*model, resolver);
  std::unique_ptr<Interpreter> interpreter;
  builder(&interpreter);
  TFLITE_MINIMAL_CHECK(interpreter != nullptr);

Source: https://github.com/lisaong/diec/tree/master/day3/inference

Regards, lisa