lutzroeder / netron

Visualizer for neural network, deep learning and machine learning models
https://netron.app
MIT License

Tensor shape inference #71

Open lutzroeder opened 6 years ago

Flamefire commented 6 years ago

This does not even require much work for ONNX: You can run the shape_inference method on the ONNX model which populates model.graph.value_info which can be read afterwards.

If you don't want to do that in Netron, let the user do that first and have it stored in the model:

import onnx
from onnx import shape_inference
path = "..."
onnx.save(onnx.shape_inference.infer_shapes(onnx.load(path)), path)
ysh329 commented 5 years ago

Hope shape inference gets supported. 😆

lovettchris commented 5 years ago

This would be very useful...

suntao2012 commented 5 years ago

> This does not even require much work for ONNX: You can run the shape_inference method on the ONNX model which populates model.graph.value_info which can be read afterwards.
>
> If you don't want to do that in Netron, let the user do that first and have it stored in the model:
>
> import onnx
> from onnx import shape_inference
> path = "..."
> onnx.save(onnx.shape_inference.infer_shapes(onnx.load(path)), path)

It does work for ONNX, but is there any way to add shape inference for MXNet json/model files?

lutzroeder commented 5 years ago

@suntao2012 Netron runs in the browser without any Python dependencies.

lgeiger commented 4 years ago

It would be excellent if Netron were able to show the shape of the activations between two layers, similar to the way it shows the shape of the input layer:

[screenshot: Netron showing the shape of the input layer]

This would be incredibly useful when developing models and seems to be pretty viable to implement for the Keras backend since all the Tensor shape information should be available.

ghost commented 4 years ago

> This does not even require much work for ONNX: You can run the shape_inference method on the ONNX model which populates model.graph.value_info which can be read afterwards.
>
> If you don't want to do that in Netron, let the user do that first and have it stored in the model:
>
> import onnx
> from onnx import shape_inference
> path = "..."
> onnx.save(onnx.shape_inference.infer_shapes(onnx.load(path)), path)

Very helpful, thank you @Flamefire.

Left is without shape inference. Right is with shape inference.

[screenshot: side-by-side comparison of the graph without and with shape inference]

One important thing to note: currently, for the above to work, you must use an opset version < 9. The above was generated with opset version 8; I checked opset 7 also, and both worked fine.

At present, for opset >= 9, shapes will not be included and will not show, as pointed out here:

> Ah, you mean for opset 9 or later. That change basically removed constants from the inputs, since inputs are not constants. In older ONNX versions constants had to be part of the inputs; in opset 9 that changed. Possibly an ONNX issue.

https://github.com/onnx/tensorflow-onnx/issues/674#issuecomment-523965804

lookup1980 commented 4 years ago

@dsplabs thank you for the comment; it's very helpful!

My understanding is that we can't set the opset version when exporting PyTorch to ONNX, right?

ghost commented 4 years ago

@lookup1980 yes we can, by setting the opset_version argument, e.g.:

torch.onnx.export(model, model_inputs, onnx_file, opset_version=8)

Works for me in PyTorch version 1.4.

If you need to convert an existing ONNX file, you can do so using onnx.version_converter.convert_version(...). I usually also throw in onnx.utils.polish_model(...), which (among other things) does shape inference using onnx.shape_inference.infer_shapes(...), e.g.:

import onnx
import onnx.utils
import onnx.version_converter

model_file = 'model.onnx'
onnx_model = onnx.load(model_file)
# Downgrade to opset 8 so the inferred shapes show up (see above).
onnx_model = onnx.version_converter.convert_version(onnx_model, target_version=8)
# polish_model checks the model and, among other things, runs shape inference.
onnx_model = onnx.utils.polish_model(onnx_model)
onnx.save(onnx_model, model_file)
fzyzcjy commented 4 years ago

Does not work for me...

model:

    model = keras.applications.mobilenet.MobileNet(
        # input_shape=(32, 32, 1),
        input_shape=(224, 224, 3),
        weights=None,
        include_top=False,
    )

    model.compile(optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])

Code:

keras_model = model
onnx_model = onnxmltools.convert_keras(keras_model)
onnx_model = onnx.shape_inference.infer_shapes(onnx_model)
p = dtemp / 'temp.onnx'
# onnxmltools.utils.save_model(onnx_model, str(p))
onnx.save(onnx_model, str(p))

Result:

[screenshot: resulting graph rendered without shape annotations]

Thanks for any suggestions!

sizhky commented 4 years ago

> This does not even require much work for ONNX: You can run the shape_inference method on the ONNX model which populates model.graph.value_info which can be read afterwards.
>
> If you don't want to do that in Netron, let the user do that first and have it stored in the model:
>
> import onnx
> from onnx import shape_inference
> path = "..."
> onnx.save(onnx.shape_inference.infer_shapes(onnx.load(path)), path)

@Flamefire Running this kills the Jupyter kernel. I was trying to load and save an Inception model in this case.

soyebn commented 4 years ago

Nice feature addition. Some segmentation models need opset 11. Is there a way we can get this working for opset 11?

kobygold commented 1 year ago

Is there any estimate of when this feature could be added? I see it was requested a long time ago, and it would be extremely useful! There is a workaround above for ONNX, creating a new version of the model with layer sizes. Is there a similar workaround for Keras .h5 files?

lutzroeder commented 1 year ago

@kobygold Keras files do not store inferred shapes. If you want to work on implementing this for Keras, acuity.js and darknet.js already have some support for reference.