mil-tokyo / webdnn

The Fastest DNN Running Framework on Web Browser
https://mil-tokyo.github.io/webdnn

Tensorflow inception V3 too many values to unpack (expected 2) #863

Open DavidGOrtega opened 6 years ago

DavidGOrtega commented 6 years ago

Hi,

I'm trying to convert a TensorFlow .pb file for testing purposes. The model is inception_v3_2016_08_28_frozen.pb, which can be found here:

https://www.tensorflow.org/tutorials/image_recognition

import tensorflow as tf
from webdnn.frontend.tensorflow import TensorFlowConverter
from webdnn.backend import generate_descriptor

model_path  = 'inception_v3_2016_08_28_frozen.pb'
out_path    = './../models/inception_v3_2016_08_28_frozen'

with tf.gfile.GFile(model_path, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def,
      input_map=None,
      return_elements=None,
      name="")

tfsession   = tf.Session(graph=graph)

in_1        = tfsession.graph.get_tensor_by_name("input:0")
out_1       = tfsession.graph.get_tensor_by_name("InceptionV3/Predictions/Softmax:0")

print( [in_1, out_1])

graph       = TensorFlowConverter(tfsession).convert([in_1], [out_1])

exec_info   = generate_descriptor("webgpu", graph)  # also "webassembly", "webgl", "fallback" are available.
exec_info.save(out_path)
/usr/local/lib/python3.6/site-packages/webdnn-1.2.5-py3.6.egg/webdnn/util/console.py:30: Warning: [KerasConverter] keras.layers.AveragePooling computes average by dividing number of valid elements in window (without padding element), but WebDNN divides it by the number of elements including padding element, so different result will be generated on the edge.
  warnings.warn(message, category)
Traceback (most recent call last):
  File "tftest.py", line 51, in <module>
    convert_graph()
  File "tftest.py", line 41, in convert_graph
    graph       = TensorFlowConverter(tfsession).convert([in_1], [out_1])
  File "/usr/local/lib/python3.6/site-packages/webdnn-1.2.5-py3.6.egg/webdnn/frontend/tensorflow/converter.py", line 96, in convert
    self._convert_operator(op)
  File "/usr/local/lib/python3.6/site-packages/webdnn-1.2.5-py3.6.egg/webdnn/frontend/converter.py", line 117, in _convert_operator
    self._handler_map[self.__class__.__name__][operator_key](self, operator)
  File "/usr/local/lib/python3.6/site-packages/webdnn-1.2.5-py3.6.egg/webdnn/frontend/tensorflow/ops/gen_array_ops.py", line 83, in concat_v2_handler
    for x0, x1 in itertools.permutations(xs):
ValueError: too many values to unpack (expected 2)
milhidaka commented 6 years ago

Thanks for reporting. I implemented a patch, and your script now runs without error: https://github.com/mil-tokyo/webdnn/tree/issue863 Please try it (I cannot confirm that the converted model works well).
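
(One way to try that branch, assuming a pip-based setup; the exact install method may differ in your environment:)

pip install --upgrade git+https://github.com/mil-tokyo/webdnn.git@issue863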

DavidGOrtega commented 6 years ago

Thanks! I'll let you know.

DavidGOrtega commented 6 years ago

It works! However, the returned type seems to be Int32Array while I was expecting Float32Array. Is that right? Is there any option to enforce it?

Also, some backends cannot be generated, such as webgpu and webassembly, but I think you already know that.

milhidaka commented 6 years ago

In my environment, graph descriptors for all backends can be generated. For webassembly, Emscripten has to be set up: https://mil-tokyo.github.io/webdnn/docs/tutorial/setup.html#installing-emscripten-and-eigen
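
For reference, a minimal sketch (assuming graph and out_path from the script above, where graph is the WebDNN IR returned by TensorFlowConverter(...).convert(...)) that tries each backend in turn and reports any that fail to generate:

from webdnn.backend import generate_descriptor

for backend in ["webgpu", "webassembly", "webgl", "fallback"]:
    try:
        # Generate and save the graph descriptor for this backend;
        # "webassembly" will fail here if Emscripten is not set up.
        generate_descriptor(backend, graph).save(out_path)
        print("generated:", backend)
    except Exception as e:
        print("failed:", backend, "-", e)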

Where did you get Int32Array? Arrays are Float32 by default, unless explicitly specified: https://github.com/mil-tokyo/webdnn/blob/91369e4f0e1b81e86c13ad1ebbd499af65014059/src/descriptor_runner/image/image_array.ts#L76