tensorflow / tensorflow

An Open Source Machine Learning Framework for Everyone
https://tensorflow.org
Apache License 2.0

Provided LSTM example doesn't work. #35168

Closed mpariente closed 3 years ago

mpariente commented 4 years ago

@tensorflow/micro

System information

Describe the problem
The example provided here doesn't work.

Please provide the exact sequence of commands/steps when you ran into the problem
If I put the example together, it gives this:

# Note: this needs to happen before importing tensorflow.
import os
os.environ['TF_ENABLE_CONTROL_FLOW_V2'] = '1'
import sys
from absl import app
import argparse
import tensorflow as tf

class MnistLstmModel(object):
  """Build a simple LSTM based MNIST model.

  Attributes:
    time_steps: The maximum length of the time_steps, but since we're just using
      the 'width' dimension as time_steps, it's actually a fixed number.
    input_size: The LSTM layer input size.
    num_lstm_layer: Number of LSTM layers for the stacked LSTM cell case.
    num_lstm_units: Number of units in the LSTM cell.
    units: The units for the last layer.
    num_class: Number of classes to predict.
  """

  def __init__(self, time_steps, input_size, num_lstm_layer, num_lstm_units,
               units, num_class):
    self.time_steps = time_steps
    self.input_size = input_size
    self.num_lstm_layer = num_lstm_layer
    self.num_lstm_units = num_lstm_units
    self.units = units
    self.num_class = num_class

  def build_model(self):
    """Build the model using the given configs.

    Returns:
      x: The input placeholder tensor.
      logits: The logits of the output.
      output_class: The prediction.
    """
    x = tf.placeholder(
        'float32', [None, self.time_steps, self.input_size], name='INPUT')
    lstm_layers = []
    for _ in range(self.num_lstm_layer):
      lstm_layers.append(
          # Important:
          #
          # Note here, we use `tf.lite.experimental.nn.TFLiteLSTMCell`
          # (OpHinted LSTMCell).
          tf.lite.experimental.nn.TFLiteLSTMCell(
              self.num_lstm_units, forget_bias=0))
    # Weights and biases for output softmax layer.
    out_weights = tf.Variable(tf.random.normal([self.units, self.num_class]))
    out_bias = tf.Variable(tf.zeros([self.num_class]))

    # Transpose input x to make it time major.
    lstm_inputs = tf.transpose(x, perm=[1, 0, 2])
    lstm_cells = tf.keras.layers.StackedRNNCells(lstm_layers)
    # Important:
    #
    # Note here, we use `tf.lite.experimental.nn.dynamic_rnn` and `time_major`
    # is set to True.
    outputs, _ = tf.lite.experimental.nn.dynamic_rnn(
        lstm_cells, lstm_inputs, dtype='float32', time_major=True)

    # Transpose the outputs back to [batch, time, output]
    outputs = tf.transpose(outputs, perm=[1, 0, 2])
    outputs = tf.unstack(outputs, axis=1)
    logits = tf.matmul(outputs[-1], out_weights) + out_bias
    output_class = tf.nn.softmax(logits, name='OUTPUT_CLASS')

    return x, logits, output_class

def train(model,
          model_dir,
          batch_size=20,
          learning_rate=0.001,
          train_steps=200,
          eval_steps=50,
          save_every_n_steps=100):
  """Train & save the MNIST recognition model."""
  # Train & test dataset.
  (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
  train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
  train_iterator = train_dataset.shuffle(
      buffer_size=1000).batch(batch_size).repeat().make_one_shot_iterator()
  x, logits, output_class = model.build_model()
  test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
  test_iterator = test_dataset.batch(
      batch_size).repeat().make_one_shot_iterator()
  # input label placeholder
  y = tf.placeholder(tf.int32, [None])
  one_hot_labels = tf.one_hot(y, depth=model.num_class)
  # Loss function
  loss = tf.reduce_mean(
      tf.nn.softmax_cross_entropy_with_logits(
          logits=logits, labels=one_hot_labels))
  correct = tf.nn.in_top_k(output_class, y, 1)
  accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
  # Optimization
  opt = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)

  # Initialize variables
  init = tf.global_variables_initializer()
  saver = tf.train.Saver()
  batch_x, batch_y = train_iterator.get_next()
  batch_test_x, batch_test_y = test_iterator.get_next()
  with tf.Session() as sess:
    sess.run([init])
    for i in range(train_steps):
      batch_x_value, batch_y_value = sess.run([batch_x, batch_y])
      _, loss_value = sess.run([opt, loss],
                               feed_dict={
                                   x: batch_x_value,
                                   y: batch_y_value
                               })
      if i % 100 == 0:
        tf.logging.info('Training step %d, loss is %f' % (i, loss_value))
      if i > 0 and i % save_every_n_steps == 0:
        accuracy_sum = 0.0
        for _ in range(eval_steps):
          test_x_value, test_y_value = sess.run([batch_test_x, batch_test_y])
          accuracy_value = sess.run(
              accuracy, feed_dict={
                  x: test_x_value,
                  y: test_y_value
              })
          accuracy_sum += accuracy_value
        tf.logging.info('Training step %d, accuracy is %f' %
                        (i, accuracy_sum / (eval_steps * 1.0)))
        saver.save(sess, model_dir)

def export(model, model_dir, tflite_model_file,
           use_post_training_quantize=True):
  """Export trained model to tflite model."""
  tf.reset_default_graph()
  x, _, output_class = model.build_model()
  saver = tf.train.Saver()
  sess = tf.Session()
  saver.restore(sess, model_dir)
  # Convert to Tflite model.
  converter = tf.lite.TFLiteConverter.from_session(sess, [x], [output_class])
  converter.post_training_quantize = use_post_training_quantize
  tflite = converter.convert()
  with open(tflite_model_file, 'wb') as f:  # The flatbuffer is bytes; write in binary mode.
    f.write(tflite)

def train_and_export(parsed_flags):
  """Train the MNIST LSTM model and export to TfLite."""
  model = MnistLstmModel(
      time_steps=28,
      input_size=28,
      num_lstm_layer=2,
      num_lstm_units=64,
      units=64,
      num_class=10)
  tf.logging.info('Starts training...')
  train(model, parsed_flags.model_dir)
  tf.logging.info('Finished training, starts exporting to tflite to %s ...' %
                  parsed_flags.tflite_model_file)
  export(model, parsed_flags.model_dir, parsed_flags.tflite_model_file,
         parsed_flags.use_post_training_quantize)
  tf.logging.info(
      'Finished exporting, model is %s' % parsed_flags.tflite_model_file)

def run_main(_):
  """Main in the TfLite LSTM tutorial."""
  parser = argparse.ArgumentParser(
      description=('Train a MNIST recognition model then export to TfLite.'))
  parser.add_argument(
      '--model_dir',
      type=str,
      help='Directory where the model will be stored.',
      required=True)
  parser.add_argument(
      '--tflite_model_file',
      type=str,
      help='Full filepath to the exported tflite model file.',
      required=True)
  parser.add_argument(
      '--use_post_training_quantize',
      action='store_true',
      default=True,  # Note: with action='store_true' this flag is effectively always True.
      help='Whether or not to use post_training_quantize.')
  parsed_flags, _ = parser.parse_known_args()
  train_and_export(parsed_flags)

def main():
  app.run(main=run_main, argv=sys.argv[:1])

if __name__ == '__main__':
  main()

When run simply as python example.py --model_dir lstms/ --tflite_model_file lstms/model.tflite, I get the following error message:

INFO:tensorflow:Starts training...
I1216 23:55:41.555022 140529868711744 doc_example.py:159] Starts training...
INFO:tensorflow:Training step 0, loss is 2.657418
I1216 23:55:45.383375 140529868711744 doc_example.py:120] Training step 0, loss is 2.657418
INFO:tensorflow:Training step 100, loss is 0.867711
I1216 23:55:47.319205 140529868711744 doc_example.py:120] Training step 100, loss is 0.867711
INFO:tensorflow:Training step 100, accuracy is 0.540000
I1216 23:55:47.966933 140529868711744 doc_example.py:132] Training step 100, accuracy is 0.540000
INFO:tensorflow:Finished training, starts exporting to tflite to lstm_doc/model.tflite ...
I1216 23:55:50.603394 140529868711744 doc_example.py:162] Finished training, starts exporting to tflite to lstm_doc/model.tflite ...
INFO:tensorflow:Restoring parameters from lstm_doc/
I1216 23:55:50.832539 140529868711744 saver.py:1284] Restoring parameters from lstm_doc/
2019-12-16 23:55:50.880481: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2019-12-16 23:55:50.880555: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2019-12-16 23:55:50.900838: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:786] Optimization results for grappler item: graph_to_optimize
2019-12-16 23:55:50.900867: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:788]   function_optimizer: Graph size after: 412 nodes (0), 507 edges (0), time = 3.325ms.
2019-12-16 23:55:50.900872: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:788]   function_optimizer: Graph size after: 412 nodes (0), 507 edges (0), time = 5.794ms.
2019-12-16 23:55:50.900875: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:786] Optimization results for grappler item: hey_rnn_while_body_8743
2019-12-16 23:55:50.900878: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:788]   function_optimizer: function_optimizer did nothing. time = 0.002ms.
2019-12-16 23:55:50.900882: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:788]   function_optimizer: function_optimizer did nothing. time = 0ms.
2019-12-16 23:55:50.900884: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:786] Optimization results for grappler item: hey_rnn_while_cond_8742
2019-12-16 23:55:50.900888: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:788]   function_optimizer: function_optimizer did nothing. time = 0ms.
2019-12-16 23:55:50.900891: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:788]   function_optimizer: function_optimizer did nothing. time = 0ms.
INFO:tensorflow:Froze 26 variables.
I1216 23:55:50.942811 140529868711744 graph_util_impl.py:334] Froze 26 variables.
INFO:tensorflow:Converted 26 variables to const ops.
I1216 23:55:50.946878 140529868711744 graph_util_impl.py:394] Converted 26 variables to const ops.
/home/mparient/.virtualenvs/decibel/lib/python3.6/site-packages/tensorflow_core/lite/python/lite.py:846: UserWarning: Property post_training_quantize is deprecated, please use optimizations=[Optimize.DEFAULT] instead.
  " instead." % name)
---------------------------------------------------------------------------
ConverterError                            Traceback (most recent call last)
~/code_perso/decibel-light/prod/doc_example.py in <module>
    195 
    196 if __name__ == '__main__':
--> 197   main()

~/code_perso/decibel-light/prod/doc_example.py in main()
    191 
    192 def main():
--> 193   app.run(main=run_main, argv=sys.argv[:1])
    194 
    195 

~/.virtualenvs/decibel/lib/python3.6/site-packages/absl/app.py in run(main, argv, flags_parser)
    297       callback()
    298     try:
--> 299       _run_main(main, args)
    300     except UsageError as error:
    301       usage(shorthelp=True, detailed_error=error, exitcode=error.exitcode)

~/.virtualenvs/decibel/lib/python3.6/site-packages/absl/app.py in _run_main(main, argv)
    248     sys.exit(retval)
    249   else:
--> 250     sys.exit(main(argv))
    251 
    252 

~/code_perso/decibel-light/prod/doc_example.py in run_main(_)
    187       help='Whether or not to use post_training_quantize.')
    188   parsed_flags, _ = parser.parse_known_args()
--> 189   train_and_export(parsed_flags)
    190 
    191 

~/code_perso/decibel-light/prod/doc_example.py in train_and_export(parsed_flags)
    162                   parsed_flags.tflite_model_file)
    163   export(model, parsed_flags.model_dir, parsed_flags.tflite_model_file,
--> 164          parsed_flags.use_post_training_quantize)
    165   tf.logging.info(
    166       'Finished exporting, model is %s' % parsed_flags.tflite_model_file)

~/code_perso/decibel-light/prod/doc_example.py in export(model, model_dir, tflite_model_file, use_post_training_quantize)
    144   converter = tf.lite.TFLiteConverter.from_session(sess, [x], [output_class])
    145   converter.post_training_quantize = use_post_training_quantize
--> 146   tflite = converter.convert()
    147   with open(tflite_model_file, 'w') as f:
    148     f.write(tflite)

~/.virtualenvs/decibel/lib/python3.6/site-packages/tensorflow_core/lite/python/lite.py in convert(self)
    981           input_tensors=self._input_tensors,
    982           output_tensors=self._output_tensors,
--> 983           **converter_kwargs)
    984     else:
    985       result = _toco_convert_graph_def(

~/.virtualenvs/decibel/lib/python3.6/site-packages/tensorflow_core/lite/python/convert.py in toco_convert_impl(input_data, input_tensors, output_tensors, enable_mlir_converter, *args, **kwargs)
    447       input_data.SerializeToString(),
    448       debug_info_str=debug_info_str,
--> 449       enable_mlir_converter=enable_mlir_converter)
    450   return data
    451 

~/.virtualenvs/decibel/lib/python3.6/site-packages/tensorflow_core/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
    198       stdout = _try_convert_to_unicode(stdout)
    199       stderr = _try_convert_to_unicode(stderr)
--> 200       raise ConverterError("See console for info.\n%s\n%s\n" % (stdout, stderr))
    201   finally:
    202     # Must manually cleanup files.

ConverterError: See console for info.
2019-12-16 23:55:52.418001: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418032: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418038: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418043: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418047: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418051: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418056: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418061: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418066: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418071: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418076: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418081: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418146: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418153: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418158: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418162: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418166: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418171: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418175: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418179: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418184: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418188: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418193: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418199: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: ReadVariableOp
2019-12-16 23:55:52.418227: I tensorflow/lite/toco/import_tensorflow.cc:659] Converting unsupported operation: TensorListReserve
2019-12-16 23:55:52.418236: I tensorflow/lite/toco/import_tensorflow.cc:193] Unsupported data type in placeholder op: 21
2019-12-16 23:55:52.418479: F tensorflow/lite/toco/tooling_util.cc:1074] Check failed: name.substr(colon_pos + 1).find_first_not_of("0123456789") == string::npos (0 vs. 18446744073709551615)Array 'stacked_rnn_cells/InputHint-UnidirectionalSequenceLstm-34aa74ee205711ea831bad6e4148a879-12-None-input_bias/ReadVariableOp:value' has non-digit characters after colon.
Fatal Python error: Aborted

Current thread 0x00007f7ffc1c8740 (most recent call first):
  File "/home/mparient/.virtualenvs/decibel/lib/python3.6/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 52 in execute
  File "/home/mparient/.virtualenvs/decibel/lib/python3.6/site-packages/absl/app.py", line 250 in _run_main
  File "/home/mparient/.virtualenvs/decibel/lib/python3.6/site-packages/absl/app.py", line 299 in run
  File "/home/mparient/.virtualenvs/decibel/lib/python3.6/site-packages/tensorflow_core/python/platform/app.py", line 40 in run
  File "/home/mparient/.virtualenvs/decibel/lib/python3.6/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 89 in main
  File "/home/mparient/.virtualenvs/decibel/bin/toco_from_protos", line 8 in <module>
Aborted
renjie-liu commented 4 years ago

The old OpHint-based method does not work well with resource variables (but it should work fine with 1.15; I have just tried it in Colab).
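
For reference, one workaround sketch for the resource-variable problem under TF 1.x is to force legacy reference variables before the graph is built. This is hedged: it assumes tf.compat.v1.disable_resource_variables is available in your build (it is in recent 1.15/2.x releases), and it is not a confirmed fix from this thread.

import tensorflow.compat.v1 as tf

# Assumption: forcing legacy (reference) variables keeps ReadVariableOp nodes
# out of the graph, which the OpHint-based converter cannot import.
tf.disable_resource_variables()

# ... then build the OpHinted LSTM model and convert as in the script above ...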

For the new Keras LSTM/RNN, please refer here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/examples/experimental_new_converter/keras_lstm.ipynb

Thanks.

mpariente commented 4 years ago

Thanks a lot for the example, I'm going to try that out.

the old ophint-based method does not work well with resource variables (but it should work fine with 1.15, I have just tried in colab)

Hmm, I did run this script with TF 1.15. Did you try the snippet I provided? Are you able to reproduce the error? And could I ask what you tried in Colab?

for new keras lstm/rnn, please refer here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/examples/experimental_new_converter/keras_lstm.ipynb

Thanks for that. Where can I find some more info on the experimental_new_converter, please? I have been struggling with post-training quantization of pretrained LSTMs for a while now.
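
For reference, the experimental new converter is the MLIR-based converter, enabled through an attribute on the converter object. A minimal sketch, assuming TF 2.x and a placeholder Keras model named model:

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Opt in to the MLIR-based converter; in later TF 2.x releases it is the default.
converter.experimental_new_converter = True
tflite_model = converter.convert()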

renjie-liu commented 4 years ago

I used Colab to import the Python notebook and ran it directly.

Can you try the new approach and see if the post-training quantization works for you?

Thanks a lot!


mpariente commented 4 years ago

I used Colab to import the Python notebook and ran it directly.

OK, that worked for me as well. I'll try to see why it failed with the script I made from the notebook.

Can you try the new approach and see if the post-training quantization works for you?

It does not, sadly. If I add:

converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

it works, but the whole model is still in float32 (I saved the model and checked it with Netron).

If I add:

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

it does not work. Here is the traceback:

2019-12-19 10:06:42.831816: E tensorflow/lite/tools/optimize/quantize_weights.cc:351] Quantize weights tool only supports tflite models with one subgraph.
Traceback (most recent call last):
  File "/home/mparient/.virtualenvs/tf_dev/bin/toco_from_protos", line 8, in <module>
    sys.exit(main())
  File "/home/mparient/.virtualenvs/tf_dev/lib/python3.6/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 93, in main
    app.run(main=execute, argv=[sys.argv[0]] + unparsed)
  File "/home/mparient/.virtualenvs/tf_dev/lib/python3.6/site-packages/tensorflow_core/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/mparient/.virtualenvs/tf_dev/lib/python3.6/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/mparient/.virtualenvs/tf_dev/lib/python3.6/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "/home/mparient/.virtualenvs/tf_dev/lib/python3.6/site-packages/tensorflow_core/lite/toco/python/toco_from_protos.py", line 56, in execute
    enable_mlir_converter)
Exception: Quantize weights transformation failed.

Actually, sorry, I was not clear: I have been able to convert LSTM models to tflite in 1.15 using post-training quantization, but some quantization nodes are left in the graph and the edgetpu_compiler does not seem to like them.
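
For reference, full-integer post-training quantization normally also needs a representative dataset for calibration; setting only the inference input/output types does not by itself quantize the weights. A minimal sketch of the usual recipe, assuming a converter built as above and the 28x28 input shape; whether this succeeds on LSTM graphs of that era is exactly what this thread is about.

import numpy as np
import tensorflow as tf

def representative_dataset():
  # Hypothetical calibration data shaped like the model input.
  for _ in range(100):
    yield [np.random.rand(1, 28, 28).astype(np.float32)]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()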

renjie-liu commented 4 years ago

Hi Jian, can you help take a look at the LSTM quantization? Thanks.

PCerles commented 4 years ago

Any update on this issue?

ashwinmurthy commented 4 years ago

@mpariente we announced support for converting Keras TF 2.0 LSTMs to fused TFLite LSTMs. Please see below: https://groups.google.com/a/tensorflow.org/g/tflite/c/Ub4apUvblN8

Do you want to try this? It is an end-to-end solution that works with post-training quantization for LSTM. We would love to see if this works for you.
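
For reference, the announced path converts a plain tf.keras LSTM layer into TFLite's fused UnidirectionalSequenceLSTM op. A minimal sketch of that flow, assuming TF 2.3+ where the fused conversion is on by default; the layer sizes mirror the MNIST model above but are otherwise illustrative.

import tensorflow as tf

# A standard Keras LSTM model; the converter fuses the LSTM layer into
# TFLite's UnidirectionalSequenceLSTM op.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(10, activation='softmax'),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open('keras_lstm.tflite', 'wb') as f:
  f.write(tflite_model)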

PCerles commented 4 years ago

Going to throw in a plug here: I successfully applied dynamic quantization to a Torch LSTM model and deployed it on a mobile device. I was also able to track internal state and do flow-controlled updates to hidden states with a stateful Torch JIT'd object (impossible in TFLite). The full architecture was a 9-layer LSTM (quantized to 60 MB) running at 2x real time on a mobile CPU streaming audio data for speech recognition.
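
For comparison, the PyTorch flow described above maps to torch.quantization.quantize_dynamic plus TorchScript; a rough sketch with illustrative sizes, not the commenter's actual model:

import torch

# Illustrative 9-layer LSTM; the sizes are made up.
model = torch.nn.LSTM(input_size=80, hidden_size=512, num_layers=9)

# Dynamic quantization: weights are stored as int8, activations are
# quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.LSTM}, dtype=torch.qint8)

# TorchScript the quantized module for mobile deployment.
scripted = torch.jit.script(quantized)
scripted.save('lstm_dynamic_quant.pt')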

tensorflowbutler commented 3 years ago

Hi There,

We are checking to see if you still need help on this, as you are using an older version of TensorFlow which is officially considered end of life. We recommend that you upgrade to the latest 2.x version and let us know if the issue still persists in newer versions. Please open a new issue for any help you need against 2.x, and we will get you the right help.

This issue will be closed automatically 7 days from now. If you still need help with this issue, please provide us with more information.

google-ml-butler[bot] commented 3 years ago

Are you satisfied with the resolution of your issue?