onnx / onnx-tensorflow

Tensorflow Backend for ONNX

LSTM support? #56

Closed alexanderkoller closed 6 years ago

alexanderkoller commented 6 years ago

Hi,

I'm trying to convert a very simple LSTM from Pytorch to Tensorflow via ONNX, but I'm getting an error in the onnx-tensorflow prepare function.

Are LSTMs supported by onnx-tensorflow? If not, why not, and how would I go about adding support for them?

Best, Alexander.

Error message:

Traceback (most recent call last):
  File "convert_onnx_tf.py", line 19, in <module>
    tf_rep = prepare(model)
  File "/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/onnx_tf/backend.py", line 385, in prepare
    super(TensorflowBackend, cls).prepare(model, device, **kwargs)
  File "/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/onnx/backend/base.py", line 53, in prepare
    onnx.checker.check_model(model)
  File "/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/onnx/checker.py", line 32, in checker
    proto.SerializeToString(), ir_version)
onnx.onnx_cpp2py_export.checker.ValidationError: Output size 3 not in range [min=1, max=2].

==> Context: Bad node spec: input: "0" input: "11" input: "15" input: "24" input: "" input: "1" input: "2" output: "25" output: "26" output: "27" op_type: "LSTM" attribute { name: "hidden_size" i: 3 type: INT } doc_string: "/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/torch/onnx/__init__.py(408): wrapper\n/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/_functions/rnn.py(315): forward\n/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/modules/rnn.py(181): forward\n/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/modules/module.py(345): _slow_forward\n/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/modules/module.py(355): __call__\n/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/torch/jit/__init__.py(284): forward\n/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/torch/nn/modules/module.py(357): __call__\n/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/torch/jit/__init__.py(251): trace\n/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/torch/onnx/__init__.py(132): _export\n/Users/koller/anaconda/envs/py35/lib/python3.5/site-packages/torch/onnx/__init__.py(83): export\n/Users/koller/Documents/workspace/onnx_to_tensorflow/lstm.py(36): <module>\n"

Output produced by torch.onnx.export(lstm, (inputs,hidden), "lstm.onnx", verbose=True):

graph(%0 : Float(5, 1, 3)
      %1 : Float(1, 1, 3)
      %2 : Float(1, 1, 3)
      %3 : Float(12, 3)
      %4 : Float(12, 3)
      %5 : Float(12)
      %6 : Float(12)) {
  %7 : UNKNOWN_TYPE = Undefined(), scope: LSTM
  %8 : UNKNOWN_TYPE = Slice[axes=[0], ends=[3], starts=[0]](%3), scope: LSTM
  %9 : UNKNOWN_TYPE = Slice[axes=[0], ends=[12], starts=[9]](%3), scope: LSTM
  %10 : UNKNOWN_TYPE = Slice[axes=[0], ends=[9], starts=[3]](%3), scope: LSTM
  %11 : UNKNOWN_TYPE = Concat[axis=0](%8, %9, %10), scope: LSTM
  %12 : UNKNOWN_TYPE = Slice[axes=[0], ends=[3], starts=[0]](%4), scope: LSTM
  %13 : UNKNOWN_TYPE = Slice[axes=[0], ends=[12], starts=[9]](%4), scope: LSTM
  %14 : UNKNOWN_TYPE = Slice[axes=[0], ends=[9], starts=[3]](%4), scope: LSTM
  %15 : UNKNOWN_TYPE = Concat[axis=0](%12, %13, %14), scope: LSTM
  %16 : UNKNOWN_TYPE = Slice[axes=[0], ends=[3], starts=[0]](%5), scope: LSTM
  %17 : UNKNOWN_TYPE = Slice[axes=[0], ends=[12], starts=[9]](%5), scope: LSTM
  %18 : UNKNOWN_TYPE = Slice[axes=[0], ends=[9], starts=[3]](%5), scope: LSTM
  %19 : UNKNOWN_TYPE = Concat[axis=0](%16, %17, %18), scope: LSTM
  %20 : UNKNOWN_TYPE = Slice[axes=[0], ends=[3], starts=[0]](%6), scope: LSTM
  %21 : UNKNOWN_TYPE = Slice[axes=[0], ends=[12], starts=[9]](%6), scope: LSTM
  %22 : UNKNOWN_TYPE = Slice[axes=[0], ends=[9], starts=[3]](%6), scope: LSTM
  %23 : UNKNOWN_TYPE = Concat[axis=0](%20, %21, %22), scope: LSTM
  %24 : UNKNOWN_TYPE = Concat[axis=0](%19, %23), scope: LSTM
  %25 : Float(5, 1, 3), %26 : Float(1, 1, 3), %27 : Float(1, 1, 3) = LSTM[hidden_size=3](%0, %11, %15, %24, %7, %1, %2), scope: LSTM
  return (%25, %26, %27);
}

The LSTM itself is the first example from http://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html
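
For reference, a minimal sketch of the export script (reconstructed from the tutorial and the graph dump above, with seq_len=5, batch=1 and input/hidden size 3; the actual lstm.py may differ in details):

import torch
import torch.nn as nn
from torch.autograd import Variable

# One-layer unidirectional LSTM, as in the tutorial.
lstm = nn.LSTM(3, 3)  # input_dim=3, hidden_dim=3

inputs = Variable(torch.randn(5, 1, 3))    # (seq_len, batch, input_dim)
hidden = (Variable(torch.randn(1, 1, 3)),  # h_0
          Variable(torch.randn(1, 1, 3)))  # c_0

torch.onnx.export(lstm, (inputs, hidden), "lstm.onnx", verbose=True)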

tjingrant commented 6 years ago

We used to have LSTM support, but the ONNX spec for LSTM kept changing and our implementation eventually went out of date, so I deleted it. I hope you understand that it's not a minor effort to keep up with the spec. Hopefully the spec is more stable now.

You can take a look at an older version of backend.py here, which has support for the older spec, and improve from there.

Please let me know if you have any further questions; any contribution is very welcome.

alexanderkoller commented 6 years ago

Thank you for your answer! Yes, I do appreciate that it is tricky to keep up with a changing spec. Let me see what I can do with the current version.

alexanderkoller commented 6 years ago

Turns out that backend.py doesn't even get to _onnx_node_to_tensorflow_op, which as far as I can tell is where handle_optimized_r_n_n would be called.

Am I interpreting the exception correctly that the ValidationError is thrown in onnx.checker.check_model, and that what we're seeing here is actually a mismatch between the ONNX that Pytorch generates (current source version from Github) and what ONNX (1.0.1) expects - and not actually a problem with onnx-tensorflow yet?
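
A quick way to test that hypothesis, independently of onnx-tensorflow, is to run the checker directly on the exported file; per the traceback above, this is the same call that prepare makes internally:

import onnx

model = onnx.load("lstm.onnx")
# If this raises the same ValidationError, the mismatch is between the
# Pytorch exporter and the installed ONNX checker, before onnx-tensorflow
# is involved at all.
onnx.checker.check_model(model)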

fumihwh commented 6 years ago

@alexanderkoller I can't reproduce your error locally, but I built both onnx and onnx-tensorflow from source. So try using onnx built from source first.

alexanderkoller commented 6 years ago

@fumihwh I built Pytorch and onnx-tensorflow from source, but used ONNX 1.0.1, which came from conda install -c conda-forge onnx. I will try to build ONNX from source (had some trouble with that last night) and report back.

In the meantime, here is the ONNX file that Pytorch exported: lstm.onnx, in case that helps. The error message is generated when running the following onnx-tensorflow program:

import onnx
from onnx_tf.backend import prepare

model = onnx.load("lstm.onnx")
tf_rep = prepare(model)

fumihwh commented 6 years ago

@alexanderkoller I used your lstm.onnx and got NotImplementedError: LSTM op is not implemented. So it means I have already passed the onnx.checker.check_model step.

tjingrant commented 6 years ago

This error makes more sense.

fumihwh commented 6 years ago

@tjingrant Yes. LSTM is one of the ONNX ops that is currently not implemented here.

alexanderkoller commented 6 years ago

@fumihwh But you get the "LSTM op is not implemented" error with the development version of ONNX, right? I'm still working on building it; now that I've fixed the protobuf version mismatch, I'm stuck on a missing pybind11.h. I will keep trying and report back.

tjingrant commented 6 years ago

@alexanderkoller you may be missing the --recursive flag when cloning.

alexanderkoller commented 6 years ago

Okay, I have managed to compile ONNX from source and am now getting the same error as you guys.

I have also implemented a handle_l_s_t_m method in TensorflowBackend, which creates the LSTM in Tensorflow (so at least it can be run in a TF session without crashing), and figured out how to access both the weights in the ONNX LSTM operator and the variables inside the Tensorflow LSTMCell.

Now I'm facing the challenge of injecting the weights from ONNX into the weight variables of the Tensorflow LSTMCell. Do you have any advice on how to do this? The old code that @tjingrant linked to doesn't seem to do it. To the extent that I understand Tensorflow, the clean way would be to initialize the variables with tf.assign. Do you have a mechanism for doing this that I can use?

fumihwh commented 6 years ago

I created a PR (#59) to add an LSTM handler. It works, but some features are still missing. Any advice is welcome.

@alexanderkoller You can try the PR above.

alexanderkoller commented 6 years ago

@fumihwh Great, thanks! This is roughly as far as I got too.

How does your code set the weight tensors of the LSTM to the values from the ONNX model? I can't see where it does that, but maybe I'm missing something?

fumihwh commented 6 years ago

@alexanderkoller See onnx_graph_to_tensorflow_net in backend.py: https://github.com/onnx/onnx-tensorflow/blob/master/onnx_tf/backend.py#L343-L353. The tensors are initialized there.

alexanderkoller commented 6 years ago

@fumihwh Thanks!

So if I understand your code correctly, the input_dict_items contain a mapping from placeholders or variables to tensors. When run is called on the TensorflowRep, these will be fed into the computation graph through the feed_dict.

What I don't understand yet is what your preferred strategy is for adding entries to input_dict_items. If I simply try to update the input_dict in handle_l_s_t_m, that value gets lost, because onnx_graph_to_tensorflow_net returns the original_input_dict from before the handle* methods had a chance to add their own entries. On the other hand, I can't initialize the LSTM variables in lines 343-353 (to which you pointed me), because the Tensorflow LSTM node is only generated in handle_l_s_t_m, so its variables don't exist yet at that point in the program.

Could you clarify?

fumihwh commented 6 years ago

@alexanderkoller First, the original_input_dict returned by onnx_graph_to_tensorflow_net contains the initialized inputs, so the LSTM variables are initialized and passed to handle_l_s_t_m. Please check the logic again.

The real problem is how to pass the initialized parameters (W, R, B) to tf.contrib.rnn.LSTMCell, because tf exposes no attributes for them. Maybe you can try assigning them after making the cell: https://github.com/tensorflow/tensorflow/issues/3115 https://stackoverflow.com/questions/40318812/tensorflow-rnn-weight-matrices-initialization
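
A rough sketch of that suggestion (not code from this repo; W, R and B stand in for the ONNX LSTM parameters after dropping the num_directions axis, and the gate reordering between the two formats is deliberately glossed over):

import numpy as np
import tensorflow as tf

hidden_size, input_size = 3, 3  # matches the toy model above

# Stand-ins for the initialized ONNX parameters.
W = np.random.randn(4 * hidden_size, input_size).astype(np.float32)   # input weights
R = np.random.randn(4 * hidden_size, hidden_size).astype(np.float32)  # recurrent weights
B = np.random.randn(4 * hidden_size).astype(np.float32)               # combined bias

x = tf.placeholder(tf.float32, [None, 5, input_size])
cell = tf.contrib.rnn.LSTMCell(hidden_size)
outputs, state = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)

# dynamic_rnn has built the cell, so its kernel/bias variables now exist.
# TF stores a single kernel of shape [input_size + hidden_size, 4 * hidden_size],
# hence the stacking and transpose; note that ONNX orders the gates i, o, f, c
# while TF uses i, c, f, o, so a real converter must also permute the rows.
kernel_var, bias_var = cell.variables
assign_ops = [tf.assign(kernel_var, np.concatenate([W, R], axis=1).T),
              tf.assign(bias_var, B)]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(assign_ops)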

stgrue commented 6 years ago

Hi,

I've picked up where @alexanderkoller left off, and I have found a way to transfer the weights from the ONNX model into TensorFlow: I read the weights using a dummy tf.Session, and then create a custom LSTM cell using constant initializers.

I have submitted a Pull Request for your consideration. Right now, it can only handle vanilla unidirectional LSTMs, but I think it should be very easy to extend this method to bidirectional and peephole LSTMs. Please let me know what you think!
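
The core of the idea is roughly the following sketch (not the exact PR code; kernel is assumed to already hold the ONNX W and R tensors, transposed and permuted from ONNX gate order i, o, f, c to TF order i, c, f, o):

import numpy as np
import tensorflow as tf

hidden_size, input_size = 3, 3

# Placeholder for the converted [input_size + hidden_size, 4 * hidden_size]
# kernel built from the ONNX weights read out via a dummy session.
kernel = np.zeros((input_size + hidden_size, 4 * hidden_size), np.float32)

# LSTMCell's initializer argument covers the kernel; the bias still needs
# separate handling, which is presumably why the PR builds a custom cell.
cell = tf.contrib.rnn.LSTMCell(hidden_size,
                               initializer=tf.constant_initializer(kernel))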

gauravbansal98 commented 6 years ago

@stgrue Can you please explain it a bit more, or share your code if you can? I am currently trying to do this:

f = [np.random.normal(size=[15, 40]), np.random.normal(size=[40,])]
init = tf.constant_initializer(f, verify_shape=True, dtype=tf.float32)

cell = tf.contrib.rnn.LSTMCell(lstm_units, initializer=init)
unused_encoder_outputs, encoder_state = tf.nn.dynamic_rnn(
    cell, source_seq_embedded, sequence_length=source_seq_len, dtype=tf.float32)

but it gives the following error:


TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
     32 #initial_state = np.random.normal(size = [2, 10])
     33 unused_encoder_outputs, encoder_state = tf.nn.dynamic_rnn(cell1, source_seq_embedded, sequence_length=source_seq_len,
---> 34     dtype = tf.float32)

(call chain, per-frame source context elided; all paths under
/usr/local/lib/python3.6/dist-packages/tensorflow/python/)

ops/rnn.py in dynamic_rnn
ops/rnn.py in _dynamic_rnn_loop
ops/control_flow_ops.py in while_loop / BuildLoop / _BuildLoop
ops/rnn.py in _time_step
ops/rnn.py in _rnn_step
ops/rnn_cell_impl.py in __call__
layers/base.py in __call__
ops/rnn_cell_impl.py in build
layers/base.py in add_variable
training/checkpointable.py in _add_variable_with_custom_getter
ops/variable_scope.py in get_variable / _true_getter / _get_single_variable
ops/variables.py in __init__ / _init_from_args
ops/init_ops.py in __call__
framework/constant_op.py in constant
framework/tensor_util.py in make_tensor_proto

framework/tensor_util.py in _AssertCompatible(values, dtype)
    342     raise TypeError("Expected %s, got %s of type '%s' instead." %
--> 343                     (dtype.name, repr(mismatch), type(mismatch).__name__))

TypeError: Expected float32, got array([[-1.02797659e+00, -9.42098328e-01, ...]]) of type 'ndarray' instead.

Please help.

stgrue commented 6 years ago

Since the maintainers of this repository have already solved the LSTM conversion problem, I have not continued working on this issue, so I cannot help you with your problem. Sorry.