My finding: MNIST exported from PyTorch fails due to missing Flatten and LogSoftmax support.
I hit the same assertion as @doru1004 for bertsquad-8.onnx and bertsquad-10.onnx, both available here, and also for gpt2-10.onnx, available here.
$ ./bin/onnx-mlir --EmitLib ./bertsquad-8.onnx
./bin/onnx-mlir: /home/xxxxx/miniconda3/lib/libtinfo.so.6: no version information available (required by ./bin/onnx-mlir)
onnx-mlir: /home/xxxxx/llvm-project/mlir/lib/IR/Value.cpp:22: mlir::Value::Value(mlir::Operation*, unsigned int): Assertion `op->getNumResults() > resultNo&& "invalid result number"' failed.
[1] 848382 abort (core dumped) ./bin/onnx-mlir --EmitLib ./bertsquad-8.onnx
$ ./bin/onnx-mlir --EmitLib ./bertsquad-10.onnx
./bin/onnx-mlir: /home/xxxxx/miniconda3/lib/libtinfo.so.6: no version information available (required by ./bin/onnx-mlir)
onnx-mlir: /home/xxxxx/llvm-project/mlir/lib/IR/Value.cpp:22: mlir::Value::Value(mlir::Operation*, unsigned int): Assertion `op->getNumResults() > resultNo&& "invalid result number"' failed.
[1] 856164 abort (core dumped) ./bin/onnx-mlir --EmitLib ./bertsquad-10.onnx
$ ./bin/onnx-mlir --EmitLib ./gpt2-10.onnx
./bin/onnx-mlir: /home/xxxxx/miniconda3/lib/libtinfo.so.6: no version information available (required by ./bin/onnx-mlir)
onnx-mlir: /home/xxxxx/llvm-project/mlir/lib/IR/Value.cpp:22: mlir::Value::Value(mlir::Operation*, unsigned int): Assertion `op->getNumResults() > resultNo&& "invalid result number"' failed.
[1] 856207 abort (core dumped) ./bin/onnx-mlir --EmitLib ./gpt2-10.onnx
ResNet breaks in the shape inference pass. I notice that the output in basic MLIR is
func @main_graph(%arg0: tensor<1x3x224x224xf32>) -> tensor<*xf32> {
but the output type should be -> tensor<1x1000xf32>. There is a node in the graph called resnetv24_dense0_fwd that is the output.
ResNet50-v1 is also not working:
% ./onnx-mlir --EmitMLIR resnet50-v1-7.onnx
not a ShapedType or not ranked
UNREACHABLE executed at /Users/xatter/code/compiler/llvm-project/mlir/lib/IR/StandardTypes.cpp:253!
zsh: abort ./onnx-mlir --EmitMLIR resnet50-v1-7.onnx
@tjingrant I think we are running a version of ResNet as part of the test suite, is that different from the one above?
@Xatter and @doru1004, it appears that the ResNet version included in the tests is different from the ones in the ONNX model zoo and the onnx repo.
The version included in the tests is downloaded from here: wget https://s3.amazonaws.com/download.onnx/models/opset_9/resnet50.tar.gz
The download location is defined in this file:
onnx-mlir/third_party/onnx/onnx/backend/test/data/real/test_resnet50/data.json
This downloaded model works ok.
But when I try to EmitONNXIR for the ResNet v1 and v2 implementations available in the onnx repo, I get the following (different) errors:
wget https://github.com/onnx/models/blob/master/vision/classification/resnet/model/resnet50-v1-7.onnx?raw=true -O resnet50-v1.onnx
./onnx-mlir --EmitONNXIR resnet50-v1.onnx
onnx-mlir: /working_dir/llvm-project/mlir/include/mlir/IR/Types.h:308: U mlir::Type::cast() const [U = mlir::MemRefType]: Assertion `isa<U>()' failed.
Aborted (core dumped)
wget https://github.com/onnx/models/blob/master/vision/classification/resnet/model/resnet50-v2-7.onnx?raw=true -O resnet50-v2.onnx
./onnx-mlir --EmitONNXIR resnet50-v2.onnx
error: unable to infer shape of operation without shape inference interface
error: Input data tensor not ranked
error: shape inference failed
error: Input tensor(s) not ranked
error: shape inference failed
error: Shape inference failed, 3 operations couldn't be inferred
Using the following versions of onnx-mlir, llvm-project, and protobuf:
git clone https://github.com/llvm/llvm-project.git
cd llvm-project && git checkout 91671e13efbc5dbd17b832d7973401350d0a6ee6 && cd ..
git clone --recursive https://github.com/onnx/onnx-mlir.git
cd onnx-mlir && git checkout --recurse-submodules 75930ffbcf14cfbaccd8417c47c3598f56342926 && cd ..
git clone https://github.com/protocolbuffers/protobuf.git
cd protobuf && git checkout --recurse-submodules d16bf914bc5ba569d2b70376051d15f68ce4322d && cd ..
I wrote a script to get ONNX model zoo coverage status, and I ran onnx-mlir twice with different versions of the Docker image onnxmlirczar/onnx-mlir-build:x86.
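A minimal sketch of what such a coverage script could look like, assuming a local clone of onnx/models under ./models and an onnx-mlir binary on PATH (this is not the exact script used here):

```python
#!/usr/bin/env python3
"""Walk a clone of the ONNX model zoo and record which models onnx-mlir can compile."""
import pathlib
import subprocess

ZOO = pathlib.Path("./models")  # local clone of git@github.com:onnx/models.git

results = {}
for model in sorted(ZOO.rglob("*.onnx")):
    # Try to compile the model to a shared library, as in the commands shown above.
    proc = subprocess.run(["onnx-mlir", "--EmitLib", str(model)],
                          capture_output=True, text=True)
    results[str(model)] = proc.returncode == 0

succeeded = [m for m, ok in results.items() if ok]
print(f"{len(succeeded)} of {len(results)} onnx models can be compiled by onnx-mlir successfully.")
for m in succeeded:
    print(m)
```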
ONNX MLIR Compiling Target: ONNX Model Zoo
17 of 118 onnx models can be compiled by onnx-mlir successfully.
Metric | 0 | 1 |
---|---|---|
zoo_git_url | git@github.com:onnx/models.git | git@github.com:onnx/models.git |
total_count | 118 | 118 |
success_count | 17 | 17 |
failed_count | 101 | 101 |
onnx_mlir_image_creation | 2020-10-12T21:37:40.936418611Z | 2020-09-23T19:47:15.588547807Z |
successed_onnx | [./models/vision/classification/mnist/model/mn... | [./models/vision/classification/mnist/model/mn... |
Image built on 2020-10-12T21:37:40.936418611Z:
./models/vision/classification/mnist/model/mnist-7.onnx
./models/vision/classification/mnist/model/mnist-8.onnx
./models/vision/classification/resnet/model/resnet50-caffe2-v1-6.onnx
./models/vision/classification/resnet/model/resnet50-caffe2-v1-7.onnx
./models/vision/classification/resnet/model/resnet50-caffe2-v1-8.onnx
./models/vision/classification/resnet/model/resnet50-caffe2-v1-9.onnx
./models/vision/classification/shufflenet/model/shufflenet-6.onnx
./models/vision/classification/shufflenet/model/shufflenet-7.onnx
./models/vision/classification/shufflenet/model/shufflenet-8.onnx
./models/vision/classification/shufflenet/model/shufflenet-9.onnx
./models/vision/classification/shufflenet/model/shufflenet-v2-10.onnx
./models/vision/classification/vgg/model/vgg19-caffe2-6.onnx
./models/vision/classification/vgg/model/vgg19-caffe2-7.onnx
./models/vision/classification/vgg/model/vgg19-caffe2-8.onnx
./models/vision/classification/vgg/model/vgg19-caffe2-9.onnx
./models/vision/object_detection_segmentation/duc/model/ResNet101-DUC-7.onnx
./models/vision/object_detection_segmentation/yolov2-coco/model/yolov2-coco-9.onnx
Models that succeeded in the new version but failed in the old version:
Models that succeeded in the old version but failed in the new version:
I treated errors with the "error:" prefix as expected errors, and those with the "onnx-mlir:" prefix as MLIR assertion failures.
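A minimal sketch of that classification, assuming the compiler output for each model has been captured as text (the function name is hypothetical):

```python
def classify_failure(log_text: str) -> str:
    """Bucket one onnx-mlir compilation log using the prefix convention above."""
    for line in log_text.splitlines():
        if line.startswith("error:"):
            return "Expected Error"  # diagnostic emitted by onnx-mlir passes
        if line.startswith("onnx-mlir:"):
            return "mlir Failure"    # assertion failure from MLIR/LLVM internals
    return "Others"
```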
Category | 0 | 1 |
---|---|---|
Expected Error | 64 | 62 |
Others | 3 | 3 |
mlir Failure | 34 | 36 |
I also categorized the errors, in a very rough way, by source.
Source | 0 | 1 |
---|---|---|
AffineOps.cpp | 1 | 1 |
Attributes.cpp | 4 | 4 |
CHECK failed | 1 | 1 |
Casting.h | 2 | 2 |
ConstProp.cpp | 1 | 1 |
FrontendDialectHelper.cpp | 1 | 1 |
FrontendDialectTransformer.cpp | 3 | 3 |
Shape inference failed | 54 | 52 |
Types.h | 24 | 26 |
op operand must be tensor | 10 | 10 |
For more details, see the attached PDF report.
With the latest build, it seems 40 models are now supported.
Successfully compiled models:
./models/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-7.onnx
./models/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-8.onnx
./models/vision/classification/mnist/model/mnist-7.onnx
./models/vision/classification/mnist/model/mnist-8.onnx
./models/vision/classification/mobilenet/model/mobilenetv2-7.onnx
./models/vision/classification/resnet/model/resnet101-v1-7.onnx
./models/vision/classification/resnet/model/resnet101-v2-7.onnx
./models/vision/classification/resnet/model/resnet152-v1-7.onnx
./models/vision/classification/resnet/model/resnet152-v2-7.onnx
./models/vision/classification/resnet/model/resnet18-v1-7.onnx
./models/vision/classification/resnet/model/resnet18-v2-7.onnx
./models/vision/classification/resnet/model/resnet34-v1-7.onnx
./models/vision/classification/resnet/model/resnet34-v2-7.onnx
./models/vision/classification/resnet/model/resnet50-caffe2-v1-6.onnx
./models/vision/classification/resnet/model/resnet50-caffe2-v1-7.onnx
./models/vision/classification/resnet/model/resnet50-caffe2-v1-8.onnx
./models/vision/classification/resnet/model/resnet50-caffe2-v1-9.onnx
./models/vision/classification/resnet/model/resnet50-v1-7.onnx
./models/vision/classification/resnet/model/resnet50-v2-7.onnx
./models/vision/classification/shufflenet/model/shufflenet-6.onnx
./models/vision/classification/shufflenet/model/shufflenet-7.onnx
./models/vision/classification/shufflenet/model/shufflenet-8.onnx
./models/vision/classification/shufflenet/model/shufflenet-9.onnx
./models/vision/classification/shufflenet/model/shufflenet-v2-10.onnx
./models/vision/classification/squeezenet/model/squeezenet1.0-3.onnx
./models/vision/classification/squeezenet/model/squeezenet1.0-6.onnx
./models/vision/classification/squeezenet/model/squeezenet1.0-7.onnx
./models/vision/classification/squeezenet/model/squeezenet1.0-8.onnx
./models/vision/classification/squeezenet/model/squeezenet1.0-9.onnx
./models/vision/classification/squeezenet/model/squeezenet1.1-7.onnx
./models/vision/classification/vgg/model/vgg16-7.onnx
./models/vision/classification/vgg/model/vgg16-bn-7.onnx
./models/vision/classification/vgg/model/vgg19-7.onnx
./models/vision/classification/vgg/model/vgg19-bn-7.onnx
./models/vision/classification/vgg/model/vgg19-caffe2-6.onnx
./models/vision/classification/vgg/model/vgg19-caffe2-7.onnx
./models/vision/classification/vgg/model/vgg19-caffe2-8.onnx
./models/vision/classification/vgg/model/vgg19-caffe2-9.onnx
./models/vision/object_detection_segmentation/duc/model/ResNet101-DUC-7.onnx
./models/vision/object_detection_segmentation/yolov2-coco/model/yolov2-coco-9.onnx
Update:
MLIR now has
Update:
As of Feb 20th, 77 models can be compiled.
Models | Compilation Success |
---|---|
./models/text/machine_comprehension/bert-squad/model/bertsquad-10.onnx | FALSE |
./models/text/machine_comprehension/bert-squad/model/bertsquad-8.onnx | FALSE |
./models/text/machine_comprehension/bidirectional_attention_flow/model/bidaf-9.onnx | FALSE |
./models/text/machine_comprehension/gpt-2/model/gpt2-10.onnx | FALSE |
./models/text/machine_comprehension/gpt-2/model/gpt2-lm-head-10.onnx | FALSE |
./models/text/machine_comprehension/roberta/model/roberta-base-11.onnx | FALSE |
./models/text/machine_comprehension/roberta/model/roberta-sequence-classification-9.onnx | FALSE |
./models/text/machine_comprehension/t5/model/t5-decoder-with-lm-head-12.onnx | FALSE |
./models/text/machine_comprehension/t5/model/t5-encoder-12.onnx | FALSE |
./models/vision/body_analysis/arcface/model/arcfaceresnet100-8.onnx | TRUE |
./models/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-2.onnx | FALSE |
./models/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-7.onnx | TRUE |
./models/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-8.onnx | TRUE |
./models/vision/classification/alexnet/model/bvlcalexnet-3.onnx | FALSE |
./models/vision/classification/alexnet/model/bvlcalexnet-6.onnx | TRUE |
./models/vision/classification/alexnet/model/bvlcalexnet-7.onnx | TRUE |
./models/vision/classification/alexnet/model/bvlcalexnet-8.onnx | TRUE |
./models/vision/classification/alexnet/model/bvlcalexnet-9.onnx | TRUE |
./models/vision/classification/caffenet/model/caffenet-3.onnx | TRUE |
./models/vision/classification/caffenet/model/caffenet-6.onnx | FALSE |
./models/vision/classification/caffenet/model/caffenet-7.onnx | TRUE |
./models/vision/classification/caffenet/model/caffenet-8.onnx | TRUE |
./models/vision/classification/caffenet/model/caffenet-9.onnx | TRUE |
./models/vision/classification/densenet-121/model/densenet-3.onnx | FALSE |
./models/vision/classification/densenet-121/model/densenet-6.onnx | TRUE |
./models/vision/classification/densenet-121/model/densenet-7.onnx | TRUE |
./models/vision/classification/densenet-121/model/densenet-8.onnx | TRUE |
./models/vision/classification/densenet-121/model/densenet-9.onnx | TRUE |
./models/vision/classification/efficientnet-lite4/model/efficientnet-lite4-11.onnx | TRUE |
./models/vision/classification/inception_and_googlenet/googlenet/model/googlenet-3.onnx | TRUE |
./models/vision/classification/inception_and_googlenet/googlenet/model/googlenet-6.onnx | TRUE |
./models/vision/classification/inception_and_googlenet/googlenet/model/googlenet-7.onnx | TRUE |
./models/vision/classification/inception_and_googlenet/googlenet/model/googlenet-8.onnx | TRUE |
./models/vision/classification/inception_and_googlenet/googlenet/model/googlenet-9.onnx | TRUE |
./models/vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-3.onnx | FALSE |
./models/vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-6.onnx | TRUE |
./models/vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-7.onnx | TRUE |
./models/vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-8.onnx | TRUE |
./models/vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-9.onnx | TRUE |
./models/vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-3.onnx | FALSE |
./models/vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-6.onnx | FALSE |
./models/vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-7.onnx | TRUE |
./models/vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-8.onnx | TRUE |
./models/vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-9.onnx | TRUE |
./models/vision/classification/mnist/model/mnist-1.onnx | FALSE |
./models/vision/classification/mnist/model/mnist-7.onnx | TRUE |
./models/vision/classification/mnist/model/mnist-8.onnx | TRUE |
./models/vision/classification/mobilenet/model/mobilenetv2-7.onnx | TRUE |
./models/vision/classification/rcnn_ilsvrc13/model/rcnn-ilsvrc13-3.onnx | FALSE |
./models/vision/classification/rcnn_ilsvrc13/model/rcnn-ilsvrc13-6.onnx | TRUE |
./models/vision/classification/rcnn_ilsvrc13/model/rcnn-ilsvrc13-7.onnx | TRUE |
./models/vision/classification/rcnn_ilsvrc13/model/rcnn-ilsvrc13-8.onnx | TRUE |
./models/vision/classification/rcnn_ilsvrc13/model/rcnn-ilsvrc13-9.onnx | TRUE |
./models/vision/classification/resnet/model/resnet101-v1-7.onnx | TRUE |
./models/vision/classification/resnet/model/resnet101-v2-7.onnx | TRUE |
./models/vision/classification/resnet/model/resnet152-v1-7.onnx | TRUE |
./models/vision/classification/resnet/model/resnet152-v2-7.onnx | TRUE |
./models/vision/classification/resnet/model/resnet18-v1-7.onnx | TRUE |
./models/vision/classification/resnet/model/resnet18-v2-7.onnx | TRUE |
./models/vision/classification/resnet/model/resnet34-v1-7.onnx | TRUE |
./models/vision/classification/resnet/model/resnet34-v2-7.onnx | TRUE |
./models/vision/classification/resnet/model/resnet50-caffe2-v1-3.onnx | FALSE |
./models/vision/classification/resnet/model/resnet50-caffe2-v1-6.onnx | TRUE |
./models/vision/classification/resnet/model/resnet50-caffe2-v1-7.onnx | TRUE |
./models/vision/classification/resnet/model/resnet50-caffe2-v1-8.onnx | TRUE |
./models/vision/classification/resnet/model/resnet50-caffe2-v1-9.onnx | TRUE |
./models/vision/classification/resnet/model/resnet50-v1-7.onnx | TRUE |
./models/vision/classification/resnet/model/resnet50-v2-7.onnx | TRUE |
./models/vision/classification/shufflenet/model/shufflenet-3.onnx | FALSE |
./models/vision/classification/shufflenet/model/shufflenet-6.onnx | TRUE |
./models/vision/classification/shufflenet/model/shufflenet-7.onnx | TRUE |
./models/vision/classification/shufflenet/model/shufflenet-8.onnx | TRUE |
./models/vision/classification/shufflenet/model/shufflenet-9.onnx | TRUE |
./models/vision/classification/shufflenet/model/shufflenet-v2-10.onnx | TRUE |
./models/vision/classification/squeezenet/model/squeezenet1.0-3.onnx | TRUE |
./models/vision/classification/squeezenet/model/squeezenet1.0-6.onnx | TRUE |
./models/vision/classification/squeezenet/model/squeezenet1.0-7.onnx | TRUE |
./models/vision/classification/squeezenet/model/squeezenet1.0-8.onnx | TRUE |
./models/vision/classification/squeezenet/model/squeezenet1.0-9.onnx | TRUE |
./models/vision/classification/squeezenet/model/squeezenet1.1-7.onnx | TRUE |
./models/vision/classification/vgg/model/vgg16-7.onnx | TRUE |
./models/vision/classification/vgg/model/vgg16-bn-7.onnx | TRUE |
./models/vision/classification/vgg/model/vgg19-7.onnx | TRUE |
./models/vision/classification/vgg/model/vgg19-bn-7.onnx | TRUE |
./models/vision/classification/vgg/model/vgg19-caffe2-3.onnx | FALSE |
./models/vision/classification/vgg/model/vgg19-caffe2-6.onnx | TRUE |
./models/vision/classification/vgg/model/vgg19-caffe2-7.onnx | TRUE |
./models/vision/classification/vgg/model/vgg19-caffe2-8.onnx | TRUE |
./models/vision/classification/vgg/model/vgg19-caffe2-9.onnx | TRUE |
./models/vision/classification/zfnet-512/model/zfnet512-3.onnx | FALSE |
./models/vision/classification/zfnet-512/model/zfnet512-6.onnx | TRUE |
./models/vision/classification/zfnet-512/model/zfnet512-7.onnx | TRUE |
./models/vision/classification/zfnet-512/model/zfnet512-8.onnx | TRUE |
./models/vision/classification/zfnet-512/model/zfnet512-9.onnx | TRUE |
./models/vision/object_detection_segmentation/duc/model/ResNet101-DUC-7.onnx | TRUE |
./models/vision/object_detection_segmentation/faster-rcnn/model/FasterRCNN-10.onnx | FALSE |
./models/vision/object_detection_segmentation/mask-rcnn/model/MaskRCNN-10.onnx | FALSE |
./models/vision/object_detection_segmentation/retinanet/model/retinanet-9.onnx | FALSE |
./models/vision/object_detection_segmentation/ssd-mobilenetv1/model/ssd_mobilenet_v1_10.onnx | FALSE |
./models/vision/object_detection_segmentation/ssd/model/ssd-10.onnx | FALSE |
./models/vision/object_detection_segmentation/tiny-yolov2/model/tinyyolov2-7.onnx | TRUE |
./models/vision/object_detection_segmentation/tiny-yolov2/model/tinyyolov2-8.onnx | TRUE |
./models/vision/object_detection_segmentation/tiny-yolov3/model/tiny-yolov3-11.onnx | FALSE |
./models/vision/object_detection_segmentation/yolov2-coco/model/yolov2-coco-9.onnx | TRUE |
./models/vision/object_detection_segmentation/yolov3/model/yolov3-10.onnx | FALSE |
./models/vision/object_detection_segmentation/yolov4/model/yolov4.onnx | FALSE |
./models/vision/style_transfer/fast_neural_style/model/candy-8.onnx | FALSE |
./models/vision/style_transfer/fast_neural_style/model/candy-9.onnx | FALSE |
./models/vision/style_transfer/fast_neural_style/model/mosaic-8.onnx | FALSE |
./models/vision/style_transfer/fast_neural_style/model/mosaic-9.onnx | FALSE |
./models/vision/style_transfer/fast_neural_style/model/pointilism-8.onnx | FALSE |
./models/vision/style_transfer/fast_neural_style/model/pointilism-9.onnx | FALSE |
./models/vision/style_transfer/fast_neural_style/model/rain-princess-8.onnx | FALSE |
./models/vision/style_transfer/fast_neural_style/model/rain-princess-9.onnx | FALSE |
./models/vision/style_transfer/fast_neural_style/model/udnie-8.onnx | FALSE |
./models/vision/style_transfer/fast_neural_style/model/udnie-9.onnx | FALSE |
./models/vision/super_resolution/sub_pixel_cnn_2016/model/super-resolution-10.onnx | TRUE |
any update?
FYI, I wrote a Python script to examine the current status; the results are below. I will report the status monthly.
(@AlexandreEichenberger @chentong319 I added error messages when compilation failed)
['abs', 'acos', 'acosh', 'add', 'and', 'argmax', 'asin', 'asinh', 'atan', 'atanh', 'averagepool', 'batchnormalization', 'cast', 'ceil', 'clip', 'concat', 'constant', 'constantofshape', 'conv', 'cos', 'div', 'dropout', 'elu', 'erf', 'exp', 'flatten', 'floor', 'gather', 'gemm', 'globalaveragepool', 'globalmaxpool', 'gru', 'hardsigmoid', 'identity', 'leakyrelu', 'less', 'log', 'logsoftmax', 'loop', 'lrn', 'lstm', 'matmul', 'max', 'maxpool', 'min', 'mul', 'neg', 'or', 'pad', 'pow', 'prelu', 'range', 'reciprocal', 'reducel1', 'reducel2', 'reducelogsum', 'reducelogsumexp', 'reducemax', 'reducemean', 'reducemin', 'reduceprod', 'reducesum', 'reducesumsquare', 'relu', 'reshape', 'resize', 'rnn', 'scan', 'selu', 'shape', 'sigmoid', 'sign', 'sin', 'sinh', 'size', 'slice', 'softmax', 'softplus', 'softsign', 'split', 'sqrt', 'squeeze', 'sub', 'sum', 'tan', 'tanh', 'tile', 'transpose', 'unsqueeze', 'xor']
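A minimal sketch of how the per-model op inventory and the "Ops not supported" column below could be computed, assuming the onnx Python package and taking SUPPORTED_OPS to be the lower-cased list printed above (helper names are hypothetical):

```python
import onnx

def ops_in_model(path: str) -> set:
    """Lower-cased op types used in the model's top-level graph (subgraph bodies not walked)."""
    model = onnx.load(path)
    return {node.op_type.lower() for node in model.graph.node}

def unsupported_ops(path: str, supported_ops: set) -> set:
    """Ops the model needs that are missing from the supported-op list."""
    return ops_in_model(path) - supported_ops
```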
[1] processing vision/style_transfer/fast_neural_style/model/candy-8.onnx [2] processing vision/style_transfer/fast_neural_style/model/udnie-9.onnx [3] processing vision/style_transfer/fast_neural_style/model/mosaic-8.onnx [4] processing vision/style_transfer/fast_neural_style/model/mosaic-9.onnx [5] processing vision/style_transfer/fast_neural_style/model/rain-princess-8.onnx [6] processing vision/style_transfer/fast_neural_style/model/pointilism-9.onnx [7] processing vision/style_transfer/fast_neural_style/model/pointilism-8.onnx [8] processing vision/style_transfer/fast_neural_style/model/candy-9.onnx [9] processing vision/style_transfer/fast_neural_style/model/udnie-8.onnx [10] processing vision/style_transfer/fast_neural_style/model/rain-princess-9.onnx [11] processing vision/object_detection_segmentation/fcn/model/fcn-resnet101-11.onnx [12] processing vision/object_detection_segmentation/fcn/model/fcn-resnet50-11.onnx [13] processing vision/object_detection_segmentation/yolov4/model/yolov4.onnx [14] processing vision/object_detection_segmentation/yolov3/model/yolov3-10.onnx [15] processing vision/object_detection_segmentation/mask-rcnn/model/MaskRCNN-10.onnx [16] processing vision/object_detection_segmentation/retinanet/model/retinanet-9.onnx [17] processing vision/object_detection_segmentation/faster-rcnn/model/FasterRCNN-10.onnx [18] processing vision/object_detection_segmentation/ssd/model/ssd-10.onnx [19] processing vision/object_detection_segmentation/tiny-yolov2/model/tinyyolov2-7.onnx [20] processing vision/object_detection_segmentation/tiny-yolov2/model/tinyyolov2-8.onnx [21] processing vision/object_detection_segmentation/tiny-yolov3/model/tiny-yolov3-11.onnx [22] processing vision/object_detection_segmentation/duc/model/ResNet101-DUC-7.onnx [23] processing vision/object_detection_segmentation/ssd-mobilenetv1/model/ssd_mobilenet_v1_10.onnx [24] processing vision/object_detection_segmentation/yolov2-coco/model/yolov2-coco-9.onnx [25] processing vision/body_analysis/age_gender/models/vgg_ilsvrc_16_age_imdb_wiki.onnx [26] processing vision/body_analysis/age_gender/models/age_googlenet.onnx [27] processing vision/body_analysis/age_gender/models/gender_googlenet.onnx [28] processing vision/body_analysis/age_gender/models/vgg_ilsvrc_16_gender_imdb_wiki.onnx [29] processing vision/body_analysis/age_gender/models/vgg_ilsvrc_16_age_chalearn_iccv2015.onnx [30] processing vision/body_analysis/arcface/model/arcfaceresnet100-8.onnx [31] processing vision/body_analysis/emotion_ferplus/model/emotion-ferplus-2.onnx [32] processing vision/body_analysis/emotion_ferplus/model/emotion-ferplus-7.onnx [33] processing vision/body_analysis/emotion_ferplus/model/emotion-ferplus-8.onnx [34] processing vision/body_analysis/ultraface/models/version-RFB-640.onnx [35] processing vision/body_analysis/ultraface/models/version-RFB-320.onnx [36] processing vision/classification/vgg/model/vgg19-7.onnx [37] processing vision/classification/vgg/model/vgg19-caffe2-6.onnx [38] processing vision/classification/vgg/model/vgg19-bn-7.onnx [39] processing vision/classification/vgg/model/vgg19-caffe2-7.onnx [40] processing vision/classification/vgg/model/vgg19-caffe2-3.onnx [41] processing vision/classification/vgg/model/vgg19-caffe2-8.onnx [42] processing vision/classification/vgg/model/vgg19-caffe2-9.onnx [43] processing vision/classification/vgg/model/vgg16-7.onnx [44] processing vision/classification/vgg/model/vgg16-bn-7.onnx [45] processing vision/classification/mobilenet/model/mobilenetv2-7.onnx [46] processing 
vision/classification/squeezenet/model/squeezenet1.0-3.onnx [47] processing vision/classification/squeezenet/model/squeezenet1.0-9.onnx [48] processing vision/classification/squeezenet/model/squeezenet1.0-7.onnx [49] processing vision/classification/squeezenet/model/squeezenet1.0-6.onnx [50] processing vision/classification/squeezenet/model/squeezenet1.1-7.onnx [51] processing vision/classification/squeezenet/model/squeezenet1.0-8.onnx [52] processing vision/classification/rcnn_ilsvrc13/model/rcnn-ilsvrc13-8.onnx [53] processing vision/classification/rcnn_ilsvrc13/model/rcnn-ilsvrc13-3.onnx [54] processing vision/classification/rcnn_ilsvrc13/model/rcnn-ilsvrc13-7.onnx [55] processing vision/classification/rcnn_ilsvrc13/model/rcnn-ilsvrc13-9.onnx [56] processing vision/classification/rcnn_ilsvrc13/model/rcnn-ilsvrc13-6.onnx [57] processing vision/classification/caffenet/model/caffenet-7.onnx [58] processing vision/classification/caffenet/model/caffenet-3.onnx [59] processing vision/classification/caffenet/model/caffenet-6.onnx [60] processing vision/classification/caffenet/model/caffenet-9.onnx [61] processing vision/classification/caffenet/model/caffenet-8.onnx [62] processing vision/classification/densenet-121/model/densenet-9.onnx [63] processing vision/classification/densenet-121/model/densenet-7.onnx [64] processing vision/classification/densenet-121/model/densenet-8.onnx [65] processing vision/classification/densenet-121/model/densenet-3.onnx [66] processing vision/classification/densenet-121/model/densenet-6.onnx [67] processing vision/classification/mnist/model/mnist-7.onnx [68] processing vision/classification/mnist/model/mnist-1.onnx [69] processing vision/classification/mnist/model/mnist-8.onnx [70] processing vision/classification/efficientnet-lite4/model/efficientnet-lite4-11.onnx [71] processing vision/classification/alexnet/model/bvlcalexnet-3.onnx [72] processing vision/classification/alexnet/model/bvlcalexnet-9.onnx [73] processing vision/classification/alexnet/model/bvlcalexnet-8.onnx [74] processing vision/classification/alexnet/model/bvlcalexnet-6.onnx [75] processing vision/classification/alexnet/model/bvlcalexnet-7.onnx [76] processing vision/classification/resnet/model/resnet34-v2-7.onnx [77] processing vision/classification/resnet/model/resnet18-v2-7.onnx [78] processing vision/classification/resnet/model/resnet50-caffe2-v1-8.onnx [79] processing vision/classification/resnet/model/resnet50-v2-7.onnx [80] processing vision/classification/resnet/model/resnet34-v1-7.onnx [81] processing vision/classification/resnet/model/resnet101-v1-7.onnx [82] processing vision/classification/resnet/model/resnet101-v2-7.onnx [83] processing vision/classification/resnet/model/resnet50-v1-12-int8.onnx [84] processing vision/classification/resnet/model/resnet50-caffe2-v1-7.onnx [85] processing vision/classification/resnet/model/resnet50-v1-7.onnx [86] processing vision/classification/resnet/model/resnet152-v1-7.onnx [87] processing vision/classification/resnet/model/resnet18-v1-7.onnx [88] processing vision/classification/resnet/model/resnet50-caffe2-v1-9.onnx [89] processing vision/classification/resnet/model/resnet50-v1-12.onnx [90] processing vision/classification/resnet/model/resnet50-caffe2-v1-3.onnx [91] processing vision/classification/resnet/model/resnet152-v2-7.onnx [92] processing vision/classification/resnet/model/resnet50-caffe2-v1-6.onnx [93] processing vision/classification/zfnet-512/model/zfnet512-6.onnx [94] processing vision/classification/zfnet-512/model/zfnet512-7.onnx 
[95] processing vision/classification/zfnet-512/model/zfnet512-8.onnx [96] processing vision/classification/zfnet-512/model/zfnet512-3.onnx [97] processing vision/classification/zfnet-512/model/zfnet512-9.onnx [98] processing vision/classification/shufflenet/model/shufflenet-6.onnx [99] processing vision/classification/shufflenet/model/shufflenet-7.onnx [100] processing vision/classification/shufflenet/model/shufflenet-3.onnx [101] processing vision/classification/shufflenet/model/shufflenet-8.onnx [102] processing vision/classification/shufflenet/model/shufflenet-v2-10.onnx [103] processing vision/classification/shufflenet/model/shufflenet-9.onnx [104] processing vision/classification/inception_and_googlenet/googlenet/model/googlenet-9.onnx [105] processing vision/classification/inception_and_googlenet/googlenet/model/googlenet-6.onnx [106] processing vision/classification/inception_and_googlenet/googlenet/model/googlenet-7.onnx [107] processing vision/classification/inception_and_googlenet/googlenet/model/googlenet-8.onnx [108] processing vision/classification/inception_and_googlenet/googlenet/model/googlenet-3.onnx [109] processing vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-7.onnx [110] processing vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-9.onnx [111] processing vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-6.onnx [112] processing vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-8.onnx [113] processing vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-3.onnx [114] processing vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-8.onnx [115] processing vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-9.onnx [116] processing vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-6.onnx [117] processing vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-7.onnx [118] processing vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-3.onnx [119] processing vision/super_resolution/sub_pixel_cnn_2016/model/super-resolution-10.onnx [120] processing text/machine_comprehension/t5/model/t5-decoder-with-lm-head-12.onnx [121] processing text/machine_comprehension/t5/model/t5-encoder-12.onnx [122] processing text/machine_comprehension/roberta/model/roberta-base-11.onnx [123] processing text/machine_comprehension/roberta/model/roberta-sequence-classification-9.onnx [124] processing text/machine_comprehension/bidirectional_attention_flow/model/bidaf-9.onnx [125] processing text/machine_comprehension/gpt-2/model/gpt2-lm-head-10.onnx [126] processing text/machine_comprehension/gpt-2/model/gpt2-10.onnx [127] processing text/machine_comprehension/bert-squad/model/bertsquad-10.onnx [128] processing text/machine_comprehension/bert-squad/model/bertsquad-8.onnx
ONNX model | Ops in the model | Ops not supported in onnx-mlir | Compilable with onnx-mlir |
---|---|---|---|
age_googlenet.onnx | {'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'} | {} | succeeded |
arcfaceresnet100-8.onnx | {'flatten', 'add', 'identity', 'mul', 'sub', 'reshape', 'dropout', 'batchnormalization', 'prelu', 'gemm', 'conv'} | {} | succeeded |
bertsquad-10.onnx | {'sqrt', 'sub', 'squeeze', 'gather', 'shape', 'transpose', 'concat', 'cast', 'add', 'matmul', 'tanh', 'reciprocal', 'onehot', 'unsqueeze', 'softmax', 'constantofshape', 'pow', 'identity', 'split', 'reducemean', 'mul', 'reshape', 'slice'} | {'onehot'} | error: onnx.OneHot: inferShapes() not implemented error: shape inference failed |
bertsquad-8.onnx | {'sqrt', 'sub', 'squeeze', 'gather', 'shape', 'transpose', 'concat', 'cast', 'add', 'matmul', 'tanh', 'softmax', 'reciprocal', 'unsqueeze', 'pow', 'tile', 'identity', 'split', 'reducemean', 'mul', 'reshape', 'slice'} | {} | onnx-mlir: /home/tungld/dl/llvm-project/mlir/lib/IR/AttributeDetail.h:115: static mlir::detail::DenseIntOrFPElementsAttrStorage::KeyTy mlir::detail::DenseIntOrFPElementsAttrStorage::getKey(mlir::ShapedType, llvm::ArrayRef |
bidaf-9.onnx | {'sub', 'squeeze', 'log', 'gather', 'shape', 'transpose', 'concat', 'clip', 'cast', 'add', 'compress', 'categorymapper', 'relu', 'dropout', 'matmul', 'hardmax', 'softmax', 'unsqueeze', 'argmax', 'constantofshape', 'sum', 'scan', 'abs', 'conv', 'mul', 'reshape', 'lstm', 'sigmoid', 'ceil', 'slice', 'reducemax', 'reducesum'} | {'compress', 'hardmax', 'categorymapper'} | onnx-mlir: /home/tungld/dl/onnx-mlir/src/Builder/SymbolTable.hpp:126: void onnx_mlir::SymbolMapping |
bvlcalexnet-3.onnx | {'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | error: Gemm with A should be a 2D tensor error: Failed to scan onnx.Gemm parameters successfully error: shape inference failed |
bvlcalexnet-6.onnx | {'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | succeeded |
bvlcalexnet-7.onnx | {'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | succeeded |
bvlcalexnet-8.onnx | {'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | succeeded |
bvlcalexnet-9.onnx | {'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | succeeded |
caffenet-3.onnx | {'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | error: Gemm with A should be a 2D tensor error: Failed to scan onnx.Gemm parameters successfully error: shape inference failed |
caffenet-6.onnx | {'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | succeeded |
caffenet-7.onnx | {'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | succeeded |
caffenet-8.onnx | {'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | succeeded |
caffenet-9.onnx | {'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | succeeded |
candy-8.onnx | {'add', 'relu', 'instancenormalization', 'upsample', 'pad', 'conv'} | {'instancenormalization', 'upsample'} | error: 'onnx.Pad' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none' |
candy-9.onnx | {'cast', 'add', 'mul', 'relu', 'floor', 'instancenormalization', 'shape', 'gather', 'upsample', 'slice', 'unsqueeze', 'div', 'pad', 'constant', 'conv', 'concat'} | {'instancenormalization', 'upsample'} | error: 'onnx.Pad' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none' |
densenet-3.onnx | {'add', 'mul', 'relu', 'batchnormalization', 'maxpool', 'averagepool', 'conv', 'globalaveragepool', 'concat'} | {} | onnx-mlir: /home/tungld/dl/llvm-project/mlir/include/mlir/IR/Types.h:229: bool mlir::Type::isa() const [with U = mlir::RankedTensorType]: Assertion `impl && "isa<> used on a null type."' failed. |
densenet-6.onnx | {'add', 'mul', 'relu', 'batchnormalization', 'maxpool', 'averagepool', 'unsqueeze', 'conv', 'globalaveragepool', 'concat'} | {} | succeeded |
densenet-7.onnx | {'add', 'mul', 'relu', 'batchnormalization', 'maxpool', 'averagepool', 'unsqueeze', 'conv', 'globalaveragepool', 'concat'} | {} | succeeded |
densenet-8.onnx | {'add', 'mul', 'relu', 'batchnormalization', 'maxpool', 'averagepool', 'unsqueeze', 'conv', 'globalaveragepool', 'concat'} | {} | succeeded |
densenet-9.onnx | {'add', 'mul', 'relu', 'batchnormalization', 'maxpool', 'averagepool', 'unsqueeze', 'conv', 'globalaveragepool', 'concat'} | {} | succeeded |
efficientnet-lite4-11.onnx | {'clip', 'add', 'squeeze', 'matmul', 'batchnormalization', 'softmax', 'averagepool', 'conv', 'transpose'} | {} | succeeded |
emotion-ferplus-2.onnx | {'add', 'sub', 'reshape', 'relu', 'dropout', 'matmul', 'maxpool', 'div', 'conv', 'constant'} | {} | error: 'onnx.Reshape' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none' |
emotion-ferplus-7.onnx | {'add', 'sub', 'reshape', 'relu', 'dropout', 'matmul', 'maxpool', 'div', 'conv'} | {} | succeeded |
emotion-ferplus-8.onnx | {'add', 'sub', 'reshape', 'relu', 'dropout', 'matmul', 'maxpool', 'div', 'conv'} | {} | succeeded |
fasterrcnn-10.onnx | {'topk', 'sqrt', 'sub', 'squeeze', 'log', 'roialign', 'gather', 'resize', 'shape', 'scatter', 'transpose', 'concat', 'cast', 'clip', 'add', 'greater', 'relu', 'softmax', 'unsqueeze', 'gemm', 'constant', 'exp', 'reducemin', 'constantofshape', 'nonzero', 'equal', 'conv', 'flatten', 'expand', 'mul', 'reshape', 'floor', 'maxpool', 'sigmoid', 'slice', 'div', 'nonmaxsuppression'} | {'topk', 'greater', 'expand', 'roialign', 'nonzero', 'equal', 'scatter', 'nonmaxsuppression'} | error: scales() and sizes() can not both None/not None error: shape inference failed |
fcn-resnet101-11.onnx | {'cast', 'add', 'relu', 'maxpool', 'shape', 'gather', 'slice', 'unsqueeze', 'resize', 'conv', 'constant', 'concat'} | {} | error: these modes() or coordinate_transformation_mode() not implemented yet error: shape inference failed |
fcn-resnet50-11.onnx | {'cast', 'add', 'relu', 'maxpool', 'shape', 'gather', 'slice', 'unsqueeze', 'resize', 'conv', 'constant', 'concat'} | {} | error: these modes() or coordinate_transformation_mode() not implemented yet error: shape inference failed |
gender_googlenet.onnx | {'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'} | {} | succeeded |
googlenet-3.onnx | {'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'} | {} | succeeded |
googlenet-6.onnx | {'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'} | {} | succeeded |
googlenet-7.onnx | {'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'} | {} | succeeded |
googlenet-8.onnx | {'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'} | {} | succeeded |
googlenet-9.onnx | {'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'} | {} | succeeded |
gpt2-10.onnx | {'sqrt', 'sub', 'squeeze', 'gather', 'shape', 'transpose', 'concat', 'cast', 'add', 'matmul', 'tanh', 'softmax', 'unsqueeze', 'gemm', 'constant', 'constantofshape', 'pow', 'split', 'nonzero', 'reducemean', 'mul', 'reshape', 'slice', 'div'} | {'nonzero'} | error: onnx.NonZero: inferShapes() not implemented error: shape inference failed |
gpt2-lm-head-10.onnx | {'sqrt', 'sub', 'squeeze', 'gather', 'shape', 'transpose', 'concat', 'cast', 'add', 'matmul', 'tanh', 'softmax', 'unsqueeze', 'gemm', 'constant', 'constantofshape', 'pow', 'where', 'split', 'nonzero', 'reducemean', 'mul', 'reshape', 'slice', 'div'} | {'where', 'nonzero'} | error: onnx.NonZero: inferShapes() not implemented error: shape inference failed |
inception-v1-3.onnx | {'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'} | {} | error: 'onnx.Reshape' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none' |
inception-v1-6.onnx | {'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'} | {} | succeeded |
inception-v1-7.onnx | {'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'} | {} | succeeded |
inception-v1-8.onnx | {'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'} | {} | succeeded |
inception-v1-9.onnx | {'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'} | {} | succeeded |
inception-v2-3.onnx | {'add', 'mul', 'relu', 'batchnormalization', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'} | {} | onnx-mlir: /home/tungld/dl/llvm-project/mlir/include/mlir/IR/Types.h:229: bool mlir::Type::isa() const [with U = mlir::RankedTensorType]: Assertion `impl && "isa<> used on a null type."' failed. |
inception-v2-6.onnx | {'add', 'mul', 'relu', 'reshape', 'batchnormalization', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'} | {} | onnx-mlir: /home/tungld/dl/llvm-project/mlir/include/mlir/IR/Types.h:229: bool mlir::Type::isa() const [with U = mlir::RankedTensorType]: Assertion `impl && "isa<> used on a null type."' failed. |
inception-v2-7.onnx | {'add', 'mul', 'relu', 'reshape', 'batchnormalization', 'maxpool', 'softmax', 'gemm', 'averagepool', 'unsqueeze', 'conv', 'concat'} | {} | succeeded |
inception-v2-8.onnx | {'add', 'mul', 'relu', 'reshape', 'batchnormalization', 'maxpool', 'softmax', 'gemm', 'averagepool', 'unsqueeze', 'conv', 'concat'} | {} | succeeded |
inception-v2-9.onnx | {'add', 'mul', 'relu', 'reshape', 'batchnormalization', 'maxpool', 'softmax', 'gemm', 'averagepool', 'unsqueeze', 'conv', 'concat'} | {} | succeeded |
maskrcnn-10.onnx | {'topk', 'sqrt', 'sub', 'squeeze', 'log', 'not', 'less', 'gather', 'roialign', 'resize', 'shape', 'scatter', 'transpose', 'concat', 'cast', 'clip', 'add', 'greater', 'relu', 'softmax', 'unsqueeze', 'gemm', 'constant', 'and', 'exp', 'reducemin', 'convtranspose', 'constantofshape', 'split', 'nonzero', 'equal', 'conv', 'flatten', 'expand', 'mul', 'reshape', 'floor', 'maxpool', 'sigmoid', 'slice', 'div', 'nonmaxsuppression'} | {'topk', 'greater', 'expand', 'not', 'roialign', 'nonzero', 'equal', 'convtranspose', 'scatter', 'nonmaxsuppression'} | error: scales() and sizes() can not both None/not None error: shape inference failed |
mnist-1.onnx | {'add', 'reshape', 'relu', 'matmul', 'maxpool', 'div', 'conv', 'constant'} | {} | error: 'onnx.Reshape' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none' |
mnist-7.onnx | {'add', 'relu', 'reshape', 'matmul', 'maxpool', 'conv'} | {} | succeeded |
mnist-8.onnx | {'add', 'relu', 'reshape', 'matmul', 'maxpool', 'conv'} | {} | succeeded |
mobilenetv2-7.onnx | {'clip', 'add', 'reshape', 'constant', 'gather', 'gemm', 'unsqueeze', 'conv', 'shape', 'globalaveragepool', 'concat'} | {} | error: Expected positive number of original loops. |
mosaic-8.onnx | {'add', 'relu', 'instancenormalization', 'upsample', 'pad', 'conv'} | {'instancenormalization', 'upsample'} | error: 'onnx.Pad' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none' |
mosaic-9.onnx | {'cast', 'add', 'mul', 'relu', 'floor', 'instancenormalization', 'shape', 'gather', 'upsample', 'slice', 'unsqueeze', 'div', 'pad', 'constant', 'conv', 'concat'} | {'instancenormalization', 'upsample'} | error: 'onnx.Pad' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none' |
pointilism-8.onnx | {'add', 'relu', 'instancenormalization', 'upsample', 'pad', 'conv'} | {'instancenormalization', 'upsample'} | error: 'onnx.Pad' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none' |
pointilism-9.onnx | {'cast', 'add', 'mul', 'relu', 'floor', 'instancenormalization', 'shape', 'gather', 'upsample', 'slice', 'unsqueeze', 'div', 'pad', 'constant', 'conv', 'concat'} | {'instancenormalization', 'upsample'} | error: 'onnx.Pad' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none' |
rain-princess-8.onnx | {'add', 'relu', 'instancenormalization', 'upsample', 'pad', 'conv'} | {'instancenormalization', 'upsample'} | error: 'onnx.Pad' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none' |
rain-princess-9.onnx | {'cast', 'add', 'mul', 'relu', 'floor', 'instancenormalization', 'shape', 'gather', 'upsample', 'slice', 'unsqueeze', 'div', 'pad', 'constant', 'conv', 'concat'} | {'instancenormalization', 'upsample'} | error: 'onnx.Pad' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none' |
rcnn-ilsvrc13-3.onnx | {'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'conv'} | {} | error: Gemm with A should be a 2D tensor error: Failed to scan onnx.Gemm parameters successfully error: shape inference failed |
rcnn-ilsvrc13-6.onnx | {'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'conv'} | {} | succeeded |
rcnn-ilsvrc13-7.onnx | {'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'conv'} | {} | succeeded |
rcnn-ilsvrc13-8.onnx | {'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'conv'} | {} | succeeded |
rcnn-ilsvrc13-9.onnx | {'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'conv'} | {} | succeeded |
resnet101-duc-7.onnx | {'relu', 'reshape', 'maxpool', 'sum', 'batchnormalization', 'softmax', 'conv'} | {} | succeeded |
resnet101-v1-7.onnx | {'flatten', 'add', 'relu', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'} | {} | succeeded |
resnet101-v2-7.onnx | {'add', 'relu', 'reshape', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'} | {} | succeeded |
resnet152-v1-7.onnx | {'flatten', 'add', 'relu', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'} | {} | succeeded |
resnet152-v2-7.onnx | {'add', 'relu', 'reshape', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'} | {} | succeeded |
resnet18-v1-7.onnx | {'flatten', 'add', 'relu', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'} | {} | succeeded |
resnet18-v2-7.onnx | {'add', 'relu', 'reshape', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'} | {} | succeeded |
resnet34-v1-7.onnx | {'flatten', 'add', 'relu', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'} | {} | succeeded |
resnet34-v2-7.onnx | {'add', 'relu', 'reshape', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'} | {} | succeeded |
resnet50-caffe2-v1-3.onnx | {'relu', 'maxpool', 'sum', 'batchnormalization', 'softmax', 'gemm', 'averagepool', 'conv'} | {} | error: Gemm with A should be a 2D tensor error: Failed to scan onnx.Gemm parameters successfully error: shape inference failed |
resnet50-caffe2-v1-6.onnx | {'relu', 'reshape', 'maxpool', 'sum', 'batchnormalization', 'softmax', 'gemm', 'averagepool', 'conv'} | {} | succeeded |
resnet50-caffe2-v1-7.onnx | {'relu', 'reshape', 'maxpool', 'sum', 'batchnormalization', 'softmax', 'gemm', 'averagepool', 'conv'} | {} | succeeded |
resnet50-caffe2-v1-8.onnx | {'relu', 'reshape', 'maxpool', 'sum', 'batchnormalization', 'softmax', 'gemm', 'averagepool', 'conv'} | {} | succeeded |
resnet50-caffe2-v1-9.onnx | {'relu', 'reshape', 'maxpool', 'sum', 'batchnormalization', 'softmax', 'gemm', 'averagepool', 'conv'} | {} | succeeded |
resnet50-v1-12-int8.onnx | {'flatten', 'qlinearglobalaveragepool', 'maxpool', 'dequantizelinear', 'quantizelinear', 'qlinearconv', 'qlinearadd', 'qlinearmatmul'} | {'qlinearglobalaveragepool', 'dequantizelinear', 'quantizelinear', 'qlinearconv', 'qlinearadd', 'qlinearmatmul'} | error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked error: not ranked |
resnet50-v1-12.onnx | {'flatten', 'add', 'relu', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'} | {} | succeeded |
resnet50-v1-7.onnx | {'flatten', 'add', 'relu', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'} | {} | succeeded |
resnet50-v2-7.onnx | {'add', 'relu', 'reshape', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'} | {} | succeeded |
retinanet-9.onnx | {'add', 'relu', 'maxpool', 'batchnormalization', 'sigmoid', 'upsample', 'conv'} | {'upsample'} | error: onnx.Upsample: inferShapes() not implemented error: shape inference failed |
roberta-base-11.onnx | {'sqrt', 'sub', 'cumsum', 'not', 'gather', 'shape', 'transpose', 'concat', 'cast', 'add', 'matmul', 'tanh', 'softmax', 'unsqueeze', 'gemm', 'constant', 'constantofshape', 'pow', 'erf', 'equal', 'reducemean', 'mul', 'reshape', 'div'} | {'cumsum', 'equal', 'not'} | error: onnx.Equal: inferShapes() not implemented error: shape inference failed |
roberta-sequence-classification-9.onnx | {'sqrt', 'sub', 'squeeze', 'gather', 'shape', 'transpose', 'concat', 'cast', 'add', 'matmul', 'tanh', 'softmax', 'unsqueeze', 'gemm', 'constant', 'constantofshape', 'pow', 'erf', 'nonzero', 'reducemean', 'expand', 'mul', 'reshape', 'div'} | {'nonzero', 'expand'} | error: onnx.NonZero: inferShapes() not implemented error: shape inference failed |
shufflenet-3.onnx | {'reshape', 'relu', 'maxpool', 'batchnormalization', 'sum', 'softmax', 'gemm', 'averagepool', 'conv', 'transpose', 'concat'} | {} | error: 'onnx.Reshape' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none' |
shufflenet-6.onnx | {'reshape', 'relu', 'maxpool', 'batchnormalization', 'sum', 'softmax', 'gemm', 'averagepool', 'conv', 'transpose', 'concat'} | {} | succeeded |
shufflenet-7.onnx | {'reshape', 'relu', 'maxpool', 'batchnormalization', 'sum', 'softmax', 'gemm', 'averagepool', 'conv', 'transpose', 'concat'} | {} | succeeded |
shufflenet-8.onnx | {'reshape', 'relu', 'maxpool', 'batchnormalization', 'sum', 'softmax', 'gemm', 'averagepool', 'conv', 'transpose', 'concat'} | {} | succeeded |
shufflenet-9.onnx | {'reshape', 'relu', 'maxpool', 'batchnormalization', 'sum', 'softmax', 'gemm', 'averagepool', 'conv', 'transpose', 'concat'} | {} | succeeded |
shufflenet-v2-10.onnx | {'relu', 'reshape', 'reducemean', 'maxpool', 'batchnormalization', 'split', 'gemm', 'conv', 'constant', 'transpose', 'concat'} | {} | succeeded |
squeezenet1.0-3.onnx | {'relu', 'dropout', 'maxpool', 'softmax', 'conv', 'globalaveragepool', 'concat'} | {} | succeeded |
squeezenet1.0-6.onnx | {'relu', 'dropout', 'maxpool', 'softmax', 'conv', 'globalaveragepool', 'concat'} | {} | succeeded |
squeezenet1.0-7.onnx | {'relu', 'dropout', 'maxpool', 'softmax', 'conv', 'globalaveragepool', 'concat'} | {} | succeeded |
squeezenet1.0-8.onnx | {'relu', 'dropout', 'maxpool', 'softmax', 'conv', 'globalaveragepool', 'concat'} | {} | succeeded |
squeezenet1.0-9.onnx | {'relu', 'dropout', 'maxpool', 'softmax', 'conv', 'globalaveragepool', 'concat'} | {} | succeeded |
squeezenet1.1-7.onnx | {'relu', 'dropout', 'reshape', 'maxpool', 'averagepool', 'conv', 'concat'} | {} | succeeded |
ssd-10.onnx | {'topk', 'sub', 'squeeze', 'gather', 'shape', 'transpose', 'concat', 'cast', 'add', 'relu', 'batchnormalization', 'softmax', 'unsqueeze', 'constant', 'exp', 'reducemin', 'constantofshape', 'conv', 'mul', 'reshape', 'maxpool', 'slice', 'nonmaxsuppression'} | {'topk', 'nonmaxsuppression'} | error: onnx.NonMaxSuppression: inferShapes() not implemented error: shape inference failed |
ssd_mobilenet_v1_10.onnx | {'sub', 'squeeze', 'less', 'gather', 'shape', 'loop', 'concat', 'transpose', 'cast', 'clip', 'add', 'unsqueeze', 'exp', 'constantofshape', 'tile', 'split', 'conv', 'mul', 'reshape', 'sigmoid', 'slice', 'div', 'min'} | {} | error: scales() and sizes() can not both None/not None error: shape inference failed error: onnx.Equal: inferShapes() not implemented error: shape inference failed onnx-mlir: /home/tungld/dl/llvm-project/mlir/include/mlir/IR/Types.h:245: U mlir::Type::cast() const [with U = mlir::MemRefType]: Assertion `isa()' failed. |
super-resolution-10.onnx | {'reshape', 'relu', 'conv', 'constant', 'transpose'} | {} | succeeded |
t5-decoder-with-lm-head-12.onnx | {'range', 'sqrt', 'sub', 'log', 'less', 'gather', 'shape', 'transpose', 'concat', 'cast', 'add', 'max', 'relu', 'matmul', 'softmax', 'unsqueeze', 'constant', 'constantofshape', 'pow', 'lessorequal', 'tile', 'neg', 'where', 'reducemean', 'mul', 'reshape', 'div', 'min'} | {'where', 'lessorequal'} | error: onnx.LessOrEqual: inferShapes() not implemented error: shape inference failed |
t5-encoder-12.onnx | {'sqrt', 'range', 'sub', 'log', 'less', 'gather', 'shape', 'transpose', 'concat', 'cast', 'add', 'relu', 'matmul', 'softmax', 'unsqueeze', 'constant', 'constantofshape', 'pow', 'neg', 'where', 'abs', 'reducemean', 'mul', 'reshape', 'div', 'min'} | {'where'} | error: onnx.Where: inferShapes() not implemented error: shape inference failed |
tiny-yolov3-11.onnx | {'sub', 'squeeze', 'leakyrelu', 'resize', 'shape', 'transpose', 'concat', 'loop', 'cast', 'add', 'batchnormalization', 'unsqueeze', 'exp', 'reducemin', 'tile', 'identity', 'round', 'conv', 'mul', 'reshape', 'maxpool', 'sigmoid', 'ceil', 'slice', 'div', 'nonmaxsuppression'} | {'round', 'nonmaxsuppression'} | error: onnx.Round: inferShapes() not implemented error: shape inference failed |
tinyyolov2-7.onnx | {'add', 'mul', 'batchnormalization', 'maxpool', 'leakyrelu', 'conv'} | {} | succeeded |
tinyyolov2-8.onnx | {'add', 'mul', 'batchnormalization', 'maxpool', 'leakyrelu', 'conv'} | {} | succeeded |
udnie-8.onnx | {'add', 'relu', 'instancenormalization', 'upsample', 'pad', 'conv'} | {'instancenormalization', 'upsample'} | error: 'onnx.Pad' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none' |
udnie-9.onnx | {'cast', 'add', 'mul', 'relu', 'floor', 'instancenormalization', 'shape', 'gather', 'upsample', 'slice', 'unsqueeze', 'div', 'pad', 'constant', 'conv', 'concat'} | {'instancenormalization', 'upsample'} | error: 'onnx.Pad' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none' |
version-rfb-320.onnx | {'add', 'sub', 'mul', 'relu', 'reshape', 'batchnormalization', 'shape', 'gather', 'slice', 'softmax', 'unsqueeze', 'div', 'conv', 'constant', 'exp', 'transpose', 'concat'} | {} | error: Expected positive number of original loops. error: Expected positive number of original loops. error: Expected positive number of original loops. error: Expected positive number of original loops. error: Expected positive number of original loops. error: Expected positive number of original loops. error: Expected positive number of original loops. error: Expected positive number of original loops. |
version-rfb-640.onnx | {'add', 'sub', 'mul', 'relu', 'reshape', 'constant', 'batchnormalization', 'gather', 'slice', 'softmax', 'unsqueeze', 'div', 'conv', 'shape', 'exp', 'transpose', 'concat'} | {} | error: Expected positive number of original loops. error: Expected positive number of original loops. error: Expected positive number of original loops. error: Expected positive number of original loops. error: Expected positive number of original loops. error: Expected positive number of original loops. error: Expected positive number of original loops. error: Expected positive number of original loops. |
vgg16-7.onnx | {'flatten', 'relu', 'dropout', 'maxpool', 'gemm', 'conv'} | {} | succeeded |
vgg16-bn-7.onnx | {'flatten', 'relu', 'dropout', 'maxpool', 'batchnormalization', 'gemm', 'conv'} | {} | succeeded |
vgg19-7.onnx | {'flatten', 'relu', 'dropout', 'maxpool', 'gemm', 'conv'} | {} | succeeded |
vgg19-bn-7.onnx | {'flatten', 'relu', 'dropout', 'maxpool', 'batchnormalization', 'gemm', 'conv'} | {} | succeeded |
vgg19-caffe2-3.onnx | {'relu', 'dropout', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | error: Gemm with A should be a 2D tensor error: Failed to scan onnx.Gemm parameters successfully error: shape inference failed |
vgg19-caffe2-6.onnx | {'reshape', 'relu', 'dropout', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | succeeded |
vgg19-caffe2-7.onnx | {'reshape', 'relu', 'dropout', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | succeeded |
vgg19-caffe2-8.onnx | {'reshape', 'relu', 'dropout', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | succeeded |
vgg19-caffe2-9.onnx | {'reshape', 'relu', 'dropout', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | succeeded |
vgg_ilsvrc_16_age_chalearn_iccv2015.onnx | {'reshape', 'relu', 'dropout', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | succeeded |
vgg_ilsvrc_16_age_imdb_wiki.onnx | {'reshape', 'relu', 'dropout', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | succeeded |
vgg_ilsvrc_16_gender_imdb_wiki.onnx | {'reshape', 'relu', 'dropout', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | succeeded |
yolov2-coco-9.onnx | {'reshape', 'maxpool', 'batchnormalization', 'leakyrelu', 'conv', 'constant', 'transpose', 'concat'} | {} | succeeded |
yolov3-10.onnx | {'sub', 'squeeze', 'gather', 'leakyrelu', 'resize', 'shape', 'loop', 'transpose', 'concat', 'cast', 'add', 'batchnormalization', 'unsqueeze', 'exp', 'reducemin', 'tile', 'conv', 'mul', 'reshape', 'sigmoid', 'ceil', 'slice', 'div', 'nonmaxsuppression'} | {'nonmaxsuppression'} | error: scales() and sizes() can not both None/not None error: shape inference failed |
yolov4.onnx | {'cast', 'add', 'mul', 'reshape', 'log', 'maxpool', 'sigmoid', 'tanh', 'gather', 'split', 'slice', 'leakyrelu', 'resize', 'conv', 'shape', 'exp', 'transpose', 'concat'} | {} | succeeded |
zfnet512-3.onnx | {'relu', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | error: Gemm with A should be a 2D tensor error: Failed to scan onnx.Gemm parameters successfully error: shape inference failed |
zfnet512-6.onnx | {'reshape', 'relu', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | succeeded |
zfnet512-7.onnx | {'reshape', 'relu', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | succeeded |
zfnet512-8.onnx | {'reshape', 'relu', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | succeeded |
zfnet512-9.onnx | {'reshape', 'relu', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'} | {} | succeeded |
Looks like ONNX-MLIR supports 103 models, of which 83 can actually be compiled.
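The per-op counts in the table below can be reproduced with a short script; a minimal sketch, assuming the onnx Python package and a local directory of the downloaded .onnx files (the directory path and the lower-casing of op names are assumptions, not the exact script used):

```python
# Minimal sketch: for each op type, count how many models in a local directory
# use it at least once, producing a table like the one below.
import glob
from collections import Counter

import onnx

counts = Counter()
for path in glob.glob("models/*.onnx"):          # assumed location of the downloaded models
    model = onnx.load(path)
    ops_in_model = {node.op_type.lower() for node in model.graph.node}
    counts.update(ops_in_model)                  # +1 per model, not per node occurrence

for op, n in counts.most_common():
    print(f"{op} | {n}")
```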
Operator name | Count | Supported in onnx-mlir |
---|---|---|
conv | 119 | supported |
relu | 111 | supported |
maxpool | 101 | supported |
reshape | 84 | supported |
gemm | 79 | supported |
softmax | 71 | supported |
add | 63 | supported |
concat | 61 | supported |
dropout | 50 | supported |
batchnormalization | 46 | supported |
mul | 36 | supported |
averagepool | 34 | supported |
unsqueeze | 32 | supported |
lrn | 32 | supported |
transpose | 27 | supported |
shape | 26 | supported |
gather | 25 | supported |
constant | 24 | supported |
cast | 23 | supported |
globalaveragepool | 22 | supported |
div | 22 | supported |
sub | 21 | supported |
slice | 21 | supported |
matmul | 16 | supported |
flatten | 14 | supported |
squeeze | 13 | supported |
constantofshape | 12 | supported |
sum | 12 | supported |
upsample | 11 | ADDED (deprecated in 10) |
sqrt | 10 | supported |
instancenormalization | 10 | ADDED |
pad | 10 | supported |
reducemean | 9 | supported |
exp | 9 | supported |
pow | 8 | supported |
split | 8 | supported |
sigmoid | 8 | supported |
resize | 7 | supported |
floor | 7 | supported |
tanh | 7 | supported |
log | 6 | supported |
leakyrelu | 6 | supported |
clip | 6 | supported |
reducemin | 5 | supported |
tile | 5 | supported |
nonzero | 5 | ADDED |
nonmaxsuppression | 5 | not supported |
identity | 4 | supported |
less | 4 | supported |
topk | 3 | not supported |
loop | 3 | supported |
equal | 3 | ADDED |
expand | 3 | not supported (priority 2) |
where | 3 | not supported |
ceil | 3 | supported |
min | 3 | supported |
roialign | 2 | not supported |
reciprocal | 2 | supported |
neg | 2 | supported |
erf | 2 | supported |
abs | 2 | supported |
range | 2 | supported |
not | 2 | ADDED |
scatter | 2 | not supported |
greater | 2 | ADDED |
cumsum | 1 | not supported (priority 2) |
max | 1 | supported |
categorymapper | 1 | not supported |
onehot | 1 | not supported |
and | 1 | supported |
qlinearconv | 1 | not supported |
argmax | 1 | supported |
lessorequal | 1 | ADDED |
qlinearglobalaveragepool | 1 | not supported |
round | 1 | not supported |
prelu | 1 | supported |
scan | 1 | supported |
lstm | 1 | supported |
quantizelinear | 1 | not supported |
reducemax | 1 | supported |
qlinearadd | 1 | not supported |
compress | 1 | not supported |
dequantizelinear | 1 | not supported |
hardmax | 1 | not supported |
convtranspose | 1 | not supported |
qlinearmatmul | 1 | not supported |
reducesum | 1 | supported |
ALEX: I modified the text manually to add the newly supported ops and a comment on the deprecated op. Priority 2 ops not listed here: compress, mean, mod, SpaceToDepth, Random*. Tung: [Oct. 5] Updated Upsample, NonZero.
Big thanks @tungld
Just checked some old models to see why Gemm failed. These models actually seem incorrect; for example, the output of MaxPooling (a 4D tensor) was passed directly to Gemm, which supports only 2D inputs, so Gemm failed.
Looking at onnx/models, these old models will be removed by this PR: https://github.com/onnx/models/pull/389. So we probably don't need to care about these old models.
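To make the failure pattern concrete, here is a small diagnostic sketch (assuming the onnx Python package; the file name is a placeholder and this is not part of onnx-mlir) that flags Gemm nodes whose A input is produced by something other than a Reshape or Flatten:

```python
# Minimal diagnostic sketch (not part of onnx-mlir): flag Gemm nodes whose A input is
# produced by another node that is neither a Reshape nor a Flatten, which is the
# pattern (e.g. a 4-D MaxPool output fed straight into Gemm) behind the failures above.
import onnx

model = onnx.load("old_model.onnx")              # placeholder path
producer_of = {out: node for node in model.graph.node for out in node.output}

for node in model.graph.node:
    if node.op_type == "Gemm":
        producer = producer_of.get(node.input[0])
        if producer is not None and producer.op_type not in ("Reshape", "Flatten"):
            print(f"Gemm '{node.name}' takes A from a {producer.op_type} node "
                  "with no Reshape/Flatten in between")
```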
New update: 101 models can be compiled now (it was 83 in the previous update). Of the 17 models that failed to compile, 12 are deprecated (they use Opset <= 3).
Some models need to be compiled with --repeatOnnxTransform=1
so that all tensors are ranked.
['abs', 'acos', 'acosh', 'add', 'and', 'argmax', 'asin', 'asinh', 'atan', 'atanh', 'averagepool', 'batchnormalization', 'cast', 'ceil', 'clip', 'concat', 'constant', 'constantofshape', 'conv', 'cos', 'div', 'dropout', 'elu', 'equal', 'erf', 'exp', 'flatten', 'floor', 'gather', 'gemm', 'globalaveragepool', 'globalmaxpool', 'greater', 'greaterorequal', 'gru', 'hardsigmoid', 'identity', 'instancenormalization', 'leakyrelu', 'less', 'lessorequal', 'log', 'logsoftmax', 'loop', 'lrn', 'lstm', 'matmul', 'max', 'maxpool', 'mean', 'min', 'mod', 'mul', 'neg', 'nonzero', 'not', 'or', 'pad', 'pow', 'prelu', 'range', 'reciprocal', 'reducel1', 'reducel2', 'reducelogsum', 'reducelogsumexp', 'reducemax', 'reducemean', 'reducemin', 'reduceprod', 'reducesum', 'reducesumsquare', 'relu', 'reshape', 'resize', 'rnn', 'round', 'scan', 'selu', 'shape', 'sigmoid', 'sign', 'sin', 'sinh', 'size', 'slice', 'softmax', 'softplus', 'softsign', 'split', 'sqrt', 'squeeze', 'sub', 'sum', 'tan', 'tanh', 'tile', 'transpose', 'unsqueeze', 'upsample', 'where', 'xor']
See https://github.com/onnx/models/pull/389 for a list of deprecated models
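The second and third columns of the table below are essentially the set of op types in each model and its set difference with the supported-op list above; a minimal sketch (the file name is a placeholder from the table, and SUPPORTED_OPS is deliberately truncated here):

```python
# Minimal sketch: list the ops a model uses and which of them are not yet supported.
# SUPPORTED_OPS is truncated here; fill it in with the complete list shown above.
import onnx

SUPPORTED_OPS = {
    "add", "conv", "gemm", "maxpool", "relu", "reshape", "softmax",
    # ... truncated on purpose; use the full supported-op list above
}

def model_ops(path):
    model = onnx.load(path)
    return {node.op_type.lower() for node in model.graph.node}

ops = model_ops("age_googlenet.onnx")            # placeholder: any model from the table
print("ops in the model:  ", sorted(ops))
print("not yet supported: ", sorted(ops - SUPPORTED_OPS))
```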
ONNX model | Ops in the model | Ops not supported in onnx-mlir | Compilable with onnx-mlir |
---|---|---|---|
age_googlenet.onnx | {'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
arcfaceresnet100-8.onnx | {'prelu', 'conv', 'flatten', 'reshape', 'identity', 'mul', 'batchnormalization', 'sub', 'gemm', 'dropout', 'add'} | {} | succeeded |
bertsquad-10.onnx | {'unsqueeze', 'split', 'constantofshape', 'onehot', 'sub', 'softmax', 'matmul', 'identity', 'mul', 'pow', 'gather', 'transpose', 'shape', 'reshape', 'reducemean', 'squeeze', 'sqrt', 'tanh', 'concat', 'slice', 'cast', 'reciprocal', 'add'} | {'onehot'} | error: onnx.OneHot: inferShapes() not implemented error: shape inference failed |
bertsquad-8.onnx ['--repeatOnnxTransform=1'] | {'unsqueeze', 'split', 'sub', 'tile', 'softmax', 'matmul', 'identity', 'mul', 'pow', 'gather', 'transpose', 'shape', 'reshape', 'reducemean', 'squeeze', 'sqrt', 'tanh', 'concat', 'slice', 'cast', 'reciprocal', 'add'} | {} | succeeded |
bidaf-9.onnx | {'unsqueeze', 'constantofshape', 'compress', 'sigmoid', 'sub', 'add', 'categorymapper', 'sum', 'softmax', 'matmul', 'mul', 'dropout', 'reducemax', 'gather', 'transpose', 'shape', 'hardmax', 'reshape', 'reducesum', 'squeeze', 'relu', 'scan', 'clip', 'abs', 'conv', 'concat', 'slice', 'argmax', 'cast', 'log', 'ceil', 'lstm'} | {'hardmax', 'categorymapper', 'compress'} | onnx-mlir: /home/tungld/dl/onnx-mlir/src/Builder/SymbolTable.hpp:126: void onnx_mlir::SymbolMapping |
bvlcalexnet-3.onnx (deprecated) | {'conv', 'softmax', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | error: Gemm with A should be a 2D tensor error: Failed to scan onnx.Gemm parameters successfully error: shape inference failed |
bvlcalexnet-6.onnx | {'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
bvlcalexnet-7.onnx | {'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
bvlcalexnet-8.onnx | {'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
bvlcalexnet-9.onnx | {'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
caffenet-3.onnx (deprecated) | {'conv', 'softmax', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | error: Gemm with A should be a 2D tensor error: Failed to scan onnx.Gemm parameters successfully error: shape inference failed |
caffenet-6.onnx | {'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
caffenet-7.onnx | {'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
caffenet-8.onnx | {'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
caffenet-9.onnx | {'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
candy-8.onnx | {'conv', 'pad', 'upsample', 'relu', 'instancenormalization', 'add'} | {} | succeeded |
candy-9.onnx | {'unsqueeze', 'floor', 'gather', 'conv', 'shape', 'concat', 'constant', 'slice', 'pad', 'cast', 'div', 'relu', 'upsample', 'mul', 'instancenormalization', 'add'} | {} | succeeded |
densenet-3.onnx (deprecated) | {'averagepool', 'conv', 'concat', 'maxpool', 'relu', 'mul', 'batchnormalization', 'globalaveragepool', 'add'} | {} | onnx-mlir: /home/tungld/dl/llvm-project/mlir/include/mlir/IR/Types.h:229: bool mlir::Type::isa() const [with U = mlir::RankedTensorType]: Assertion `impl && "isa<> used on a null type."' failed. |
densenet-6.onnx | {'unsqueeze', 'averagepool', 'conv', 'concat', 'maxpool', 'relu', 'mul', 'batchnormalization', 'globalaveragepool', 'add'} | {} | succeeded |
densenet-7.onnx | {'unsqueeze', 'averagepool', 'conv', 'concat', 'maxpool', 'relu', 'mul', 'batchnormalization', 'globalaveragepool', 'add'} | {} | succeeded |
densenet-8.onnx | {'unsqueeze', 'averagepool', 'conv', 'concat', 'maxpool', 'relu', 'mul', 'batchnormalization', 'globalaveragepool', 'add'} | {} | succeeded |
densenet-9.onnx | {'unsqueeze', 'averagepool', 'conv', 'concat', 'maxpool', 'relu', 'mul', 'batchnormalization', 'globalaveragepool', 'add'} | {} | succeeded |
efficientnet-lite4-11.onnx | {'clip', 'averagepool', 'transpose', 'conv', 'softmax', 'squeeze', 'matmul', 'batchnormalization', 'add'} | {} | succeeded |
emotion-ferplus-2.onnx (deprecated) | {'conv', 'constant', 'reshape', 'maxpool', 'div', 'matmul', 'relu', 'sub', 'dropout', 'add'} | {} | error: 'onnx.Reshape' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none' |
emotion-ferplus-7.onnx | {'conv', 'reshape', 'maxpool', 'div', 'matmul', 'relu', 'sub', 'dropout', 'add'} | {} | succeeded |
emotion-ferplus-8.onnx | {'conv', 'reshape', 'maxpool', 'div', 'matmul', 'relu', 'sub', 'dropout', 'add'} | {} | succeeded |
fasterrcnn-10.onnx | {'unsqueeze', 'expand', 'constantofshape', 'constant', 'div', 'sigmoid', 'sub', 'roialign', 'exp', 'nonmaxsuppression', 'softmax', 'maxpool', 'mul', 'topk', 'equal', 'floor', 'gather', 'transpose', 'shape', 'flatten', 'reshape', 'squeeze', 'relu', 'sqrt', 'clip', 'scatter', 'conv', 'greater', 'concat', 'slice', 'cast', 'reducemin', 'log', 'gemm', 'resize', 'add', 'nonzero'} | {'expand', 'scatter', 'nonmaxsuppression', 'roialign', 'topk'} | error: scales() and sizes() can not both None/not None error: shape inference failed |
fcn-resnet101-11.onnx | {'unsqueeze', 'gather', 'conv', 'shape', 'concat', 'constant', 'slice', 'maxpool', 'cast', 'relu', 'resize', 'add'} | {} | error: these modes() or coordinate_transformation_mode() not implemented yet error: shape inference failed |
fcn-resnet50-11.onnx | {'unsqueeze', 'gather', 'conv', 'shape', 'concat', 'constant', 'slice', 'maxpool', 'cast', 'relu', 'resize', 'add'} | {} | error: these modes() or coordinate_transformation_mode() not implemented yet error: shape inference failed |
gender_googlenet.onnx | {'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
googlenet-3.onnx | {'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
googlenet-6.onnx | {'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
googlenet-7.onnx | {'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
googlenet-8.onnx | {'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
googlenet-9.onnx | {'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
gpt2-10.onnx ['--repeatOnnxTransform=1'] | {'unsqueeze', 'split', 'constantofshape', 'constant', 'div', 'sub', 'softmax', 'matmul', 'mul', 'pow', 'gather', 'transpose', 'shape', 'reshape', 'reducemean', 'squeeze', 'sqrt', 'tanh', 'concat', 'slice', 'cast', 'gemm', 'add', 'nonzero'} | {} | succeeded |
gpt2-lm-head-10.onnx ['--repeatOnnxTransform=1'] | {'unsqueeze', 'split', 'where', 'constantofshape', 'constant', 'div', 'sub', 'softmax', 'matmul', 'mul', 'pow', 'gather', 'transpose', 'shape', 'reshape', 'reducemean', 'squeeze', 'sqrt', 'tanh', 'concat', 'slice', 'cast', 'gemm', 'add', 'nonzero'} | {} | loc("onnx.Cast"): error: 'std.trunci' op operand #0 must be signless-integer-like, but got 'ui8' |
inception-v1-3.onnx (deprecated) | {'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | error: 'onnx.Reshape' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none' |
inception-v1-6.onnx | {'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
inception-v1-7.onnx | {'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
inception-v1-8.onnx | {'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
inception-v1-9.onnx | {'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
inception-v2-3.onnx (deprecated) | {'averagepool', 'conv', 'softmax', 'concat', 'maxpool', 'relu', 'mul', 'batchnormalization', 'gemm', 'add'} | {} | onnx-mlir: /home/tungld/dl/llvm-project/mlir/include/mlir/IR/Types.h:229: bool mlir::Type::isa() const [with U = mlir::RankedTensorType]: Assertion `impl && "isa<> used on a null type."' failed. |
inception-v2-6.onnx | {'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'relu', 'mul', 'batchnormalization', 'gemm', 'add'} | {} | onnx-mlir: /home/tungld/dl/llvm-project/mlir/include/mlir/IR/Types.h:229: bool mlir::Type::isa() const [with U = mlir::RankedTensorType]: Assertion `impl && "isa<> used on a null type."' failed. |
inception-v2-7.onnx | {'unsqueeze', 'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'relu', 'mul', 'batchnormalization', 'gemm', 'add'} | {} | succeeded |
inception-v2-8.onnx | {'unsqueeze', 'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'relu', 'mul', 'batchnormalization', 'gemm', 'add'} | {} | succeeded |
inception-v2-9.onnx | {'unsqueeze', 'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'relu', 'mul', 'batchnormalization', 'gemm', 'add'} | {} | succeeded |
maskrcnn-10.onnx | {'unsqueeze', 'expand', 'split', 'constantofshape', 'constant', 'div', 'sigmoid', 'and', 'sub', 'roialign', 'exp', 'nonmaxsuppression', 'softmax', 'less', 'maxpool', 'mul', 'topk', 'equal', 'floor', 'gather', 'transpose', 'shape', 'flatten', 'reshape', 'convtranspose', 'squeeze', 'relu', 'sqrt', 'clip', 'not', 'scatter', 'conv', 'greater', 'concat', 'slice', 'cast', 'reducemin', 'log', 'gemm', 'resize', 'add', 'nonzero'} | {'expand', 'scatter', 'nonmaxsuppression', 'roialign', 'convtranspose', 'topk'} | error: scales() and sizes() can not both None/not None error: shape inference failed |
mnist-1.onnx (deprecated) | {'conv', 'constant', 'reshape', 'maxpool', 'div', 'matmul', 'relu', 'add'} | {} | error: 'onnx.Reshape' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none' |
mnist-7.onnx | {'conv', 'reshape', 'maxpool', 'matmul', 'relu', 'add'} | {} | succeeded |
mnist-8.onnx | {'conv', 'reshape', 'maxpool', 'matmul', 'relu', 'add'} | {} | succeeded |
mobilenetv2-7.onnx | {'unsqueeze', 'clip', 'gather', 'conv', 'shape', 'concat', 'constant', 'reshape', 'gemm', 'globalaveragepool', 'add'} | {} | succeeded |
mosaic-8.onnx | {'conv', 'pad', 'upsample', 'relu', 'instancenormalization', 'add'} | {} | succeeded |
mosaic-9.onnx | {'unsqueeze', 'floor', 'gather', 'conv', 'shape', 'concat', 'constant', 'slice', 'pad', 'cast', 'div', 'relu', 'upsample', 'mul', 'instancenormalization', 'add'} | {} | succeeded |
pointilism-8.onnx | {'conv', 'pad', 'upsample', 'relu', 'instancenormalization', 'add'} | {} | succeeded |
pointilism-9.onnx | {'unsqueeze', 'floor', 'gather', 'conv', 'shape', 'concat', 'constant', 'slice', 'pad', 'cast', 'div', 'relu', 'upsample', 'mul', 'instancenormalization', 'add'} | {} | succeeded |
rain-princess-8.onnx | {'conv', 'pad', 'upsample', 'relu', 'instancenormalization', 'add'} | {} | succeeded |
rain-princess-9.onnx | {'unsqueeze', 'floor', 'gather', 'conv', 'shape', 'concat', 'constant', 'slice', 'pad', 'cast', 'div', 'relu', 'upsample', 'mul', 'instancenormalization', 'add'} | {} | succeeded |
rcnn-ilsvrc13-3.onnx (deprecated) | {'conv', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | error: Gemm with A should be a 2D tensor error: Failed to scan onnx.Gemm parameters successfully error: shape inference failed |
rcnn-ilsvrc13-6.onnx | {'conv', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
rcnn-ilsvrc13-7.onnx | {'conv', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
rcnn-ilsvrc13-8.onnx | {'conv', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
rcnn-ilsvrc13-9.onnx | {'conv', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'} | {} | succeeded |
resnet101-duc-7.onnx | {'sum', 'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'batchnormalization'} | {} | succeeded |
resnet101-v1-7.onnx | {'conv', 'flatten', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'} | {} | succeeded |
resnet101-v2-7.onnx | {'conv', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'} | {} | succeeded |
resnet152-v1-7.onnx | {'conv', 'flatten', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'} | {} | succeeded |
resnet152-v2-7.onnx | {'conv', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'} | {} | succeeded |
resnet18-v1-7.onnx | {'conv', 'flatten', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'} | {} | succeeded |
resnet18-v2-7.onnx | {'conv', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'} | {} | succeeded |
resnet34-v1-7.onnx | {'conv', 'flatten', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'} | {} | succeeded |
resnet34-v2-7.onnx | {'conv', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'} | {} | succeeded |
resnet50-caffe2-v1-3.onnx (deprecated) | {'averagepool', 'sum', 'conv', 'softmax', 'maxpool', 'relu', 'batchnormalization', 'gemm'} | {} | error: Gemm with A should be a 2D tensor error: Failed to scan onnx.Gemm parameters successfully error: shape inference failed |
resnet50-caffe2-v1-6.onnx | {'averagepool', 'sum', 'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'gemm'} | {} | succeeded |
resnet50-caffe2-v1-7.onnx | {'averagepool', 'sum', 'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'gemm'} | {} | succeeded |
resnet50-caffe2-v1-8.onnx | {'averagepool', 'sum', 'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'gemm'} | {} | succeeded |
resnet50-caffe2-v1-9.onnx | {'averagepool', 'sum', 'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'gemm'} | {} | succeeded |
resnet50-v1-12-int8.onnx | {'qlinearglobalaveragepool', 'qlinearmatmul', 'qlinearadd', 'flatten', 'qlinearconv', 'maxpool', 'dequantizelinear', 'quantizelinear'} | {'qlinearglobalaveragepool', 'qlinearmatmul', 'qlinearadd', 'qlinearconv', 'dequantizelinear', 'quantizelinear'} | error: not ranked (message repeated many times) |
resnet50-v1-12.onnx | {'conv', 'flatten', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'} | {} | succeeded |
resnet50-v1-7.onnx | {'conv', 'flatten', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'} | {} | succeeded |
resnet50-v2-7.onnx | {'conv', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'} | {} | succeeded |
retinanet-9.onnx | {'conv', 'maxpool', 'upsample', 'sigmoid', 'relu', 'batchnormalization', 'add'} | {} | succeeded |
roberta-base-11.onnx | {'unsqueeze', 'constantofshape', 'constant', 'div', 'erf', 'sub', 'softmax', 'matmul', 'mul', 'pow', 'equal', 'gather', 'transpose', 'shape', 'reducemean', 'reshape', 'sqrt', 'tanh', 'not', 'concat', 'cumsum', 'cast', 'gemm', 'add'} | {'cumsum'} | error: onnx.CumSum: inferShapes() not implemented error: shape inference failed |
roberta-sequence-classification-9.onnx | {'unsqueeze', 'expand', 'constantofshape', 'constant', 'div', 'erf', 'sub', 'softmax', 'matmul', 'mul', 'pow', 'gather', 'transpose', 'shape', 'reducemean', 'reshape', 'squeeze', 'sqrt', 'tanh', 'concat', 'cast', 'gemm', 'add', 'nonzero'} | {'expand'} | error: not ranked (message repeated many times) |
shufflenet-3.onnx (deprecated) | {'averagepool', 'transpose', 'conv', 'sum', 'concat', 'softmax', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'gemm'} | {} | error: 'onnx.Reshape' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none' |
shufflenet-6.onnx | {'averagepool', 'transpose', 'conv', 'sum', 'concat', 'softmax', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'gemm'} | {} | succeeded |
shufflenet-7.onnx | {'averagepool', 'transpose', 'conv', 'sum', 'concat', 'softmax', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'gemm'} | {} | succeeded |
shufflenet-8.onnx | {'averagepool', 'transpose', 'conv', 'sum', 'concat', 'softmax', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'gemm'} | {} | succeeded |
shufflenet-9.onnx | {'averagepool', 'transpose', 'conv', 'sum', 'concat', 'softmax', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'gemm'} | {} | succeeded |
shufflenet-v2-10.onnx | {'split', 'transpose', 'conv', 'concat', 'constant', 'reshape', 'reducemean', 'maxpool', 'relu', 'batchnormalization', 'gemm'} | {} | succeeded |
squeezenet1.0-3.onnx | {'conv', 'softmax', 'concat', 'maxpool', 'relu', 'dropout', 'globalaveragepool'} | {} | succeeded |
squeezenet1.0-6.onnx | {'conv', 'softmax', 'concat', 'maxpool', 'relu', 'dropout', 'globalaveragepool'} | {} | succeeded |
squeezenet1.0-7.onnx | {'conv', 'softmax', 'concat', 'maxpool', 'relu', 'dropout', 'globalaveragepool'} | {} | succeeded |
squeezenet1.0-8.onnx | {'conv', 'softmax', 'concat', 'maxpool', 'relu', 'dropout', 'globalaveragepool'} | {} | succeeded |
squeezenet1.0-9.onnx | {'conv', 'softmax', 'concat', 'maxpool', 'relu', 'dropout', 'globalaveragepool'} | {} | succeeded |
squeezenet1.1-7.onnx | {'averagepool', 'conv', 'concat', 'reshape', 'maxpool', 'relu', 'dropout'} | {} | succeeded |
ssd-10.onnx | {'unsqueeze', 'constantofshape', 'constant', 'batchnormalization', 'sub', 'exp', 'softmax', 'nonmaxsuppression', 'maxpool', 'mul', 'topk', 'gather', 'transpose', 'shape', 'reshape', 'squeeze', 'relu', 'conv', 'concat', 'slice', 'cast', 'reducemin', 'add'} | {'nonmaxsuppression', 'topk'} | error: onnx.NonMaxSuppression: inferShapes() not implemented error: shape inference failed |
ssd_mobilenet_v1_10.onnx | {'unsqueeze', 'split', 'constantofshape', 'div', 'sigmoid', 'sub', 'min', 'tile', 'loop', 'exp', 'less', 'mul', 'gather', 'transpose', 'shape', 'reshape', 'squeeze', 'clip', 'conv', 'concat', 'slice', 'cast', 'add'} | {} | error: scales() and sizes() can not both None/not None error: shape inference failed error: onnx.NonMaxSuppression: inferShapes() not implemented error: shape inference failed onnx-mlir: /home/tungld/dl/llvm-project/mlir/include/mlir/IR/Types.h:245: U mlir::Type::cast() const [with U = mlir::MemRefType]: Assertion `isa<U>()' failed. |
super-resolution-10.onnx | {'transpose', 'conv', 'constant', 'reshape', 'relu'} | {} | succeeded |
t5-decoder-with-lm-head-12.onnx | {'unsqueeze', 'where', 'constantofshape', 'constant', 'div', 'max', 'sub', 'min', 'tile', 'softmax', 'lessorequal', 'less', 'matmul', 'mul', 'pow', 'gather', 'transpose', 'shape', 'range', 'reshape', 'reducemean', 'relu', 'sqrt', 'neg', 'concat', 'cast', 'log', 'add'} | {} | succeeded |
t5-encoder-12.onnx ['--repeatOnnxTransform=1'] | {'unsqueeze', 'where', 'constantofshape', 'constant', 'div', 'sub', 'min', 'softmax', 'less', 'matmul', 'mul', 'pow', 'gather', 'transpose', 'shape', 'range', 'reshape', 'reducemean', 'relu', 'sqrt', 'neg', 'abs', 'concat', 'cast', 'log', 'add'} | {} | succeeded |
tiny-yolov3-11.onnx | {'unsqueeze', 'round', 'leakyrelu', 'div', 'sigmoid', 'batchnormalization', 'sub', 'tile', 'loop', 'exp', 'nonmaxsuppression', 'maxpool', 'identity', 'mul', 'transpose', 'shape', 'reshape', 'squeeze', 'conv', 'concat', 'slice', 'cast', 'reducemin', 'ceil', 'resize', 'add'} | {'nonmaxsuppression'} | error: onnx.NonMaxSuppression: inferShapes() not implemented error: shape inference failed |
tinyyolov2-7.onnx | {'conv', 'leakyrelu', 'maxpool', 'mul', 'batchnormalization', 'add'} | {} | succeeded |
tinyyolov2-8.onnx | {'conv', 'leakyrelu', 'maxpool', 'mul', 'batchnormalization', 'add'} | {} | succeeded |
udnie-8.onnx | {'conv', 'pad', 'upsample', 'relu', 'instancenormalization', 'add'} | {} | succeeded |
udnie-9.onnx | {'unsqueeze', 'floor', 'gather', 'conv', 'shape', 'concat', 'constant', 'slice', 'pad', 'cast', 'div', 'relu', 'upsample', 'mul', 'instancenormalization', 'add'} | {} | succeeded |
version-rfb-320.onnx | {'unsqueeze', 'exp', 'transpose', 'conv', 'shape', 'concat', 'gather', 'softmax', 'constant', 'reshape', 'slice', 'div', 'relu', 'mul', 'batchnormalization', 'sub', 'add'} | {} | succeeded |
version-rfb-640.onnx | {'unsqueeze', 'exp', 'transpose', 'conv', 'shape', 'concat', 'gather', 'softmax', 'constant', 'reshape', 'slice', 'div', 'relu', 'mul', 'batchnormalization', 'sub', 'add'} | {} | succeeded |
vgg16-7.onnx | {'conv', 'flatten', 'maxpool', 'relu', 'gemm', 'dropout'} | {} | succeeded |
vgg16-bn-7.onnx | {'conv', 'flatten', 'maxpool', 'relu', 'batchnormalization', 'gemm', 'dropout'} | {} | succeeded |
vgg19-7.onnx | {'conv', 'flatten', 'maxpool', 'relu', 'gemm', 'dropout'} | {} | succeeded |
vgg19-bn-7.onnx | {'conv', 'flatten', 'maxpool', 'relu', 'batchnormalization', 'gemm', 'dropout'} | {} | succeeded |
vgg19-caffe2-3.onnx (deprecated) | {'conv', 'softmax', 'maxpool', 'relu', 'gemm', 'dropout'} | {} | error: Gemm with A should be a 2D tensor error: Failed to scan onnx.Gemm parameters successfully error: shape inference failed |
vgg19-caffe2-6.onnx | {'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'gemm', 'dropout'} | {} | succeeded |
vgg19-caffe2-7.onnx | {'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'gemm', 'dropout'} | {} | succeeded |
vgg19-caffe2-8.onnx | {'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'gemm', 'dropout'} | {} | succeeded |
vgg19-caffe2-9.onnx | {'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'gemm', 'dropout'} | {} | succeeded |
vgg_ilsvrc_16_age_chalearn_iccv2015.onnx | {'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'gemm', 'dropout'} | {} | succeeded |
vgg_ilsvrc_16_age_imdb_wiki.onnx | {'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'gemm', 'dropout'} | {} | succeeded |
vgg_ilsvrc_16_gender_imdb_wiki.onnx | {'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'gemm', 'dropout'} | {} | succeeded |
yolov2-coco-9.onnx | {'transpose', 'conv', 'leakyrelu', 'concat', 'constant', 'reshape', 'maxpool', 'batchnormalization'} | {} | succeeded |
yolov3-10.onnx | {'unsqueeze', 'leakyrelu', 'div', 'sigmoid', 'batchnormalization', 'sub', 'tile', 'loop', 'exp', 'nonmaxsuppression', 'mul', 'transpose', 'gather', 'shape', 'reshape', 'squeeze', 'conv', 'concat', 'slice', 'cast', 'reducemin', 'ceil', 'resize', 'add'} | {'nonmaxsuppression'} | error: scales() and sizes() can not both None/not None error: shape inference failed |
yolov4.onnx | {'exp', 'split', 'transpose', 'conv', 'shape', 'concat', 'leakyrelu', 'gather', 'reshape', 'slice', 'maxpool', 'cast', 'sigmoid', 'log', 'mul', 'resize', 'tanh', 'add'} | {} | succeeded |
zfnet512-3.onnx (deprecated) | {'conv', 'softmax', 'maxpool', 'lrn', 'relu', 'gemm'} | {} | error: Gemm with A should be a 2D tensor error: Failed to scan onnx.Gemm parameters successfully error: shape inference failed |
zfnet512-6.onnx | {'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm'} | {} | succeeded |
zfnet512-7.onnx | {'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm'} | {} | succeeded |
zfnet512-8.onnx | {'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm'} | {} | succeeded |
zfnet512-9.onnx | {'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm'} | {} | succeeded |
Looks like ONNX-MLIR supports 118 models, of which 101 can actually be compiled and 17 failed to compile (12 of them are deprecated).
Operator name | Count | Supported in onnx-mlir |
---|---|---|
conv | 119 | supported |
relu | 111 | supported |
maxpool | 101 | supported |
reshape | 84 | supported |
gemm | 79 | supported |
softmax | 71 | supported |
add | 63 | supported |
concat | 61 | supported |
dropout | 50 | supported |
batchnormalization | 46 | supported |
mul | 36 | supported |
averagepool | 34 | supported |
unsqueeze | 32 | supported |
lrn | 32 | supported |
transpose | 27 | supported |
shape | 26 | supported |
gather | 25 | supported |
constant | 24 | supported |
cast | 23 | supported |
div | 22 | supported |
globalaveragepool | 22 | supported |
sub | 21 | supported |
slice | 21 | supported |
matmul | 16 | supported |
flatten | 14 | supported |
squeeze | 13 | supported |
constantofshape | 12 | supported |
sum | 12 | supported |
upsample | 11 | supported |
instancenormalization | 10 | supported |
pad | 10 | supported |
sqrt | 10 | supported |
exp | 9 | supported |
reducemean | 9 | supported |
split | 8 | supported |
sigmoid | 8 | supported |
pow | 8 | supported |
floor | 7 | supported |
tanh | 7 | supported |
resize | 7 | supported |
log | 6 | supported |
leakyrelu | 6 | supported |
clip | 6 | supported |
tile | 5 | supported |
nonmaxsuppression | 5 | not supported |
reducemin | 5 | supported |
nonzero | 5 | supported |
identity | 4 | supported |
less | 4 | supported |
min | 3 | supported |
loop | 3 | supported |
topk | 3 | not supported |
ceil | 3 | supported |
expand | 3 | not supported |
where | 3 | supported |
equal | 3 | supported |
erf | 2 | supported |
roialign | 2 | not supported |
range | 2 | supported |
not | 2 | supported |
scatter | 2 | not supported |
reciprocal | 2 | supported |
neg | 2 | supported |
abs | 2 | supported |
greater | 2 | supported |
compress | 1 | not supported |
and | 1 | supported |
qlinearglobalaveragepool | 1 | not supported |
hardmax | 1 | not supported |
cumsum | 1 | not supported |
qlinearconv | 1 | not supported |
argmax | 1 | supported |
convtranspose | 1 | not supported |
lstm | 1 | supported |
round | 1 | supported |
onehot | 1 | not supported |
max | 1 | supported |
categorymapper | 1 | not supported |
qlinearmatmul | 1 | not supported |
lessorequal | 1 | supported |
reducemax | 1 | supported |
prelu | 1 | supported |
reducesum | 1 | supported |
dequantizelinear | 1 | not supported |
quantizelinear | 1 | not supported |
scan | 1 | supported |
qlinearadd | 1 | not supported |
@tungld great progress, thanks to all for adding operations. It might be interesting to remove the deprecated ones altogether. How many non-deprecated benchmarks are there in total, and how did you decide which ones are deprecated?
How many non-deprecated benchmarks are there in total
There are 128 models in total, of which 12 are deprecated. So 116 models are non-deprecated.
I considered the 9 models in https://github.com/onnx/models/pull/389 to be deprecated, plus 3 models using Opset 3 that I examined myself and found to use a very old opset for the BatchNormalization op.
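For reference, the opset a model targets can be read directly from its opset_import field; a minimal sketch (the file name is a placeholder):

```python
# Minimal sketch: print the opset version(s) a model was exported with, which is the
# information the deprecation decision above relies on.
import onnx

model = onnx.load("model.onnx")                  # placeholder path
for opset in model.opset_import:
    domain = opset.domain or "ai.onnx"           # an empty domain string means the default ONNX domain
    print(f"{domain}: opset {opset.version}")
```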
Updated status for onnx-mlir, Oct. 21
['abs', 'acos', 'acosh', 'add', 'and', 'argmax', 'asin', 'asinh', 'atan', 'atanh', 'averagepool', 'batchnormalization', 'cast', 'ceil', 'clip', 'concat', 'constant', 'constantofshape', 'conv', 'cos', 'cumsum', 'div', 'dropout', 'elu', 'equal', 'erf', 'exp', 'expand', 'flatten', 'floor', 'gather', 'gemm', 'globalaveragepool', 'globalmaxpool', 'greater', 'greaterorequal', 'gru', 'hardsigmoid', 'identity', 'instancenormalization', 'leakyrelu', 'less', 'lessorequal', 'log', 'logsoftmax', 'loop', 'lrn', 'lstm', 'matmul', 'max', 'maxpool', 'mean', 'min', 'mod', 'mul', 'neg', 'nonzero', 'not', 'onehot', 'or', 'pad', 'pow', 'prelu', 'range', 'reciprocal', 'reducel1', 'reducel2', 'reducelogsum', 'reducelogsumexp', 'reducemax', 'reducemean', 'reducemin', 'reduceprod', 'reducesum', 'reducesumsquare', 'relu', 'reshape', 'resize', 'rnn', 'round', 'scan', 'selu', 'shape', 'sigmoid', 'sign', 'sin', 'sinh', 'size', 'slice', 'softmax', 'softplus', 'softsign', 'split', 'sqrt', 'squeeze', 'sub', 'sum', 'tan', 'tanh', 'tile', 'transpose', 'unsqueeze', 'upsample', 'where', 'xor']
Looks like ONNX-MLIR supports 109 models, of which 103 can actually be compiled and 6 failed to compile.
ONNX model | Ops in the model | Ops not supported in onnx-mlir | Compilable with onnx-mlir |
---|---|---|---|
age_googlenet.onnx | {'reshape', 'concat', 'softmax', 'lrn', 'averagepool', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
arcfaceresnet100-8.onnx | {'mul', 'flatten', 'batchnormalization', 'reshape', 'conv', 'sub', 'prelu', 'add', 'gemm', 'identity', 'dropout'} | {} | succeeded |
bertsquad-10.onnx ['--repeatOnnxTransform=1'] | {'onehot', 'pow', 'add', 'split', 'squeeze', 'reciprocal', 'reshape', 'concat', 'sub', 'reducemean', 'constantofshape', 'mul', 'gather', 'cast', 'slice', 'unsqueeze', 'shape', 'identity', 'tanh', 'softmax', 'transpose', 'sqrt', 'matmul'} | {} | error: 'std.addi' op requires the same type for all operands and results |
bertsquad-8.onnx ['--repeatOnnxTransform=1'] | {'pow', 'add', 'split', 'squeeze', 'reciprocal', 'reshape', 'concat', 'sub', 'reducemean', 'mul', 'gather', 'cast', 'slice', 'unsqueeze', 'tile', 'shape', 'identity', 'tanh', 'softmax', 'transpose', 'sqrt', 'matmul'} | {} | succeeded |
bidaf-9.onnx | {'clip', 'conv', 'add', 'lstm', 'argmax', 'squeeze', 'reshape', 'concat', 'abs', 'sub', 'constantofshape', 'hardmax', 'dropout', 'sum', 'mul', 'gather', 'scan', 'slice', 'log', 'unsqueeze', 'sigmoid', 'relu', 'shape', 'categorymapper', 'softmax', 'ceil', 'transpose', 'reducesum', 'compress', 'reducemax', 'cast', 'matmul'} | {'hardmax', 'categorymapper', 'compress'} | onnx-mlir: /home/tungld/dl/onnx-mlir/src/Builder/SymbolTable.hpp:129: void onnx_mlir::SymbolMapping |
bvlcalexnet-6.onnx | {'reshape', 'softmax', 'lrn', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
bvlcalexnet-7.onnx | {'reshape', 'softmax', 'lrn', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
bvlcalexnet-8.onnx | {'reshape', 'softmax', 'lrn', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
bvlcalexnet-9.onnx | {'reshape', 'softmax', 'lrn', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
caffenet-6.onnx | {'reshape', 'softmax', 'lrn', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
caffenet-7.onnx | {'reshape', 'softmax', 'lrn', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
caffenet-8.onnx | {'reshape', 'softmax', 'lrn', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
caffenet-9.onnx | {'reshape', 'softmax', 'lrn', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
candy-8.onnx | {'instancenormalization', 'pad', 'conv', 'relu', 'add', 'upsample'} | {} | succeeded |
candy-9.onnx | {'mul', 'gather', 'div', 'instancenormalization', 'constant', 'concat', 'pad', 'slice', 'unsqueeze', 'conv', 'relu', 'add', 'shape', 'upsample', 'cast', 'floor'} | {} | succeeded |
densenet-6.onnx | {'mul', 'batchnormalization', 'concat', 'unsqueeze', 'averagepool', 'conv', 'relu', 'add', 'maxpool', 'globalaveragepool'} | {} | succeeded |
densenet-7.onnx | {'mul', 'batchnormalization', 'concat', 'unsqueeze', 'averagepool', 'conv', 'relu', 'add', 'maxpool', 'globalaveragepool'} | {} | succeeded |
densenet-8.onnx | {'mul', 'batchnormalization', 'concat', 'unsqueeze', 'averagepool', 'conv', 'relu', 'add', 'maxpool', 'globalaveragepool'} | {} | succeeded |
densenet-9.onnx | {'mul', 'batchnormalization', 'concat', 'unsqueeze', 'averagepool', 'conv', 'relu', 'add', 'maxpool', 'globalaveragepool'} | {} | succeeded |
efficientnet-lite4-11.onnx | {'squeeze', 'batchnormalization', 'softmax', 'clip', 'averagepool', 'transpose', 'conv', 'add', 'matmul'} | {} | succeeded |
emotion-ferplus-7.onnx | {'div', 'reshape', 'conv', 'sub', 'relu', 'add', 'maxpool', 'dropout', 'matmul'} | {} | succeeded |
emotion-ferplus-8.onnx | {'div', 'reshape', 'conv', 'sub', 'relu', 'add', 'maxpool', 'dropout', 'matmul'} | {} | succeeded |
fasterrcnn-10.onnx | {'greater', 'sqrt', 'constant', 'clip', 'conv', 'nonmaxsuppression', 'add', 'gemm', 'equal', 'floor', 'squeeze', 'div', 'reshape', 'reducemin', 'concat', 'sub', 'maxpool', 'constantofshape', 'topk', 'mul', 'exp', 'gather', 'slice', 'log', 'unsqueeze', 'sigmoid', 'relu', 'shape', 'resize', 'nonzero', 'scatter', 'flatten', 'roialign', 'expand', 'softmax', 'transpose', 'cast'} | {'scatter', 'nonmaxsuppression', 'roialign', 'topk'} | error: scales() and sizes() can not both None/not None error: shape inference failed |
fcn-resnet101-11.onnx | {'gather', 'constant', 'concat', 'slice', 'unsqueeze', 'conv', 'relu', 'shape', 'add', 'maxpool', 'cast', 'resize'} | {} | error: these modes() or coordinate_transformation_mode() not implemented yet error: shape inference failed |
fcn-resnet50-11.onnx | {'gather', 'constant', 'concat', 'slice', 'unsqueeze', 'conv', 'relu', 'shape', 'add', 'maxpool', 'cast', 'resize'} | {} | error: these modes() or coordinate_transformation_mode() not implemented yet error: shape inference failed |
gender_googlenet.onnx | {'reshape', 'concat', 'softmax', 'lrn', 'averagepool', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
googlenet-3.onnx | {'reshape', 'concat', 'softmax', 'lrn', 'averagepool', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
googlenet-6.onnx | {'reshape', 'concat', 'softmax', 'lrn', 'averagepool', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
googlenet-7.onnx | {'reshape', 'concat', 'softmax', 'lrn', 'averagepool', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
googlenet-8.onnx | {'reshape', 'concat', 'softmax', 'lrn', 'averagepool', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
googlenet-9.onnx | {'reshape', 'concat', 'softmax', 'lrn', 'averagepool', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
gpt2-10.onnx ['--repeatOnnxTransform=1'] | {'constant', 'pow', 'add', 'gemm', 'split', 'squeeze', 'div', 'reshape', 'concat', 'sub', 'reducemean', 'constantofshape', 'mul', 'gather', 'cast', 'slice', 'unsqueeze', 'shape', 'nonzero', 'tanh', 'softmax', 'transpose', 'sqrt', 'matmul'} | {} | succeeded |
gpt2-lm-head-10.onnx ['--repeatOnnxTransform=1'] | {'constant', 'pow', 'add', 'gemm', 'split', 'squeeze', 'div', 'reshape', 'concat', 'sub', 'reducemean', 'constantofshape', 'mul', 'gather', 'cast', 'slice', 'unsqueeze', 'where', 'shape', 'nonzero', 'tanh', 'softmax', 'transpose', 'sqrt', 'matmul'} | {} | loc("onnx.Cast"): error: 'std.trunci' op operand #0 must be signless-integer-like, but got 'ui8' |
inception-v1-6.onnx | {'reshape', 'concat', 'softmax', 'lrn', 'averagepool', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
inception-v1-7.onnx | {'reshape', 'concat', 'softmax', 'lrn', 'averagepool', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
inception-v1-8.onnx | {'reshape', 'concat', 'softmax', 'lrn', 'averagepool', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
inception-v1-9.onnx | {'reshape', 'concat', 'softmax', 'lrn', 'averagepool', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
inception-v2-6.onnx | {'mul', 'batchnormalization', 'reshape', 'concat', 'softmax', 'averagepool', 'conv', 'relu', 'add', 'maxpool', 'gemm'} | {} | onnx-mlir: /home/tungld/dl/llvm-project/mlir/include/mlir/IR/Types.h:229: bool mlir::Type::isa() const [with U = mlir::RankedTensorType]: Assertion `impl && "isa<> used on a null type."' failed. |
inception-v2-7.onnx | {'mul', 'batchnormalization', 'reshape', 'concat', 'softmax', 'unsqueeze', 'averagepool', 'conv', 'relu', 'add', 'maxpool', 'gemm'} | {} | succeeded |
inception-v2-8.onnx | {'mul', 'batchnormalization', 'reshape', 'concat', 'softmax', 'unsqueeze', 'averagepool', 'conv', 'relu', 'add', 'maxpool', 'gemm'} | {} | succeeded |
inception-v2-9.onnx | {'mul', 'batchnormalization', 'reshape', 'concat', 'softmax', 'unsqueeze', 'averagepool', 'conv', 'relu', 'add', 'maxpool', 'gemm'} | {} | succeeded |
maskrcnn-10.onnx | {'not', 'greater', 'sqrt', 'constant', 'clip', 'conv', 'nonmaxsuppression', 'add', 'gemm', 'equal', 'split', 'squeeze', 'floor', 'div', 'reshape', 'reducemin', 'concat', 'convtranspose', 'sub', 'maxpool', 'constantofshape', 'topk', 'mul', 'exp', 'gather', 'and', 'slice', 'log', 'unsqueeze', 'sigmoid', 'relu', 'shape', 'resize', 'nonzero', 'less', 'scatter', 'flatten', 'roialign', 'expand', 'softmax', 'transpose', 'cast'} | {'roialign', 'convtranspose', 'nonmaxsuppression', 'topk', 'scatter'} | error: scales() and sizes() can not both None/not None error: shape inference failed |
mnist-7.onnx | {'reshape', 'conv', 'relu', 'add', 'maxpool', 'matmul'} | {} | succeeded |
mnist-8.onnx | {'reshape', 'conv', 'relu', 'add', 'maxpool', 'matmul'} | {} | succeeded |
mobilenetv2-7.onnx | {'gather', 'reshape', 'constant', 'concat', 'clip', 'unsqueeze', 'conv', 'shape', 'add', 'globalaveragepool', 'gemm'} | {} | succeeded |
mosaic-8.onnx | {'instancenormalization', 'pad', 'conv', 'relu', 'add', 'upsample'} | {} | succeeded |
mosaic-9.onnx | {'mul', 'gather', 'div', 'instancenormalization', 'constant', 'concat', 'pad', 'slice', 'unsqueeze', 'conv', 'relu', 'add', 'shape', 'upsample', 'cast', 'floor'} | {} | succeeded |
pointilism-8.onnx | {'instancenormalization', 'pad', 'conv', 'relu', 'add', 'upsample'} | {} | succeeded |
pointilism-9.onnx | {'mul', 'gather', 'div', 'instancenormalization', 'constant', 'concat', 'pad', 'slice', 'unsqueeze', 'conv', 'relu', 'add', 'shape', 'upsample', 'cast', 'floor'} | {} | succeeded |
rain-princess-8.onnx | {'instancenormalization', 'pad', 'conv', 'relu', 'add', 'upsample'} | {} | succeeded |
rain-princess-9.onnx | {'mul', 'gather', 'div', 'instancenormalization', 'constant', 'concat', 'pad', 'slice', 'unsqueeze', 'conv', 'relu', 'add', 'shape', 'upsample', 'cast', 'floor'} | {} | succeeded |
rcnn-ilsvrc13-6.onnx | {'reshape', 'lrn', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
rcnn-ilsvrc13-7.onnx | {'reshape', 'lrn', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
rcnn-ilsvrc13-8.onnx | {'reshape', 'lrn', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
rcnn-ilsvrc13-9.onnx | {'reshape', 'lrn', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
resnet101-duc-7.onnx | {'batchnormalization', 'reshape', 'softmax', 'conv', 'relu', 'maxpool', 'sum'} | {} | succeeded |
resnet101-v1-7.onnx | {'flatten', 'batchnormalization', 'conv', 'relu', 'add', 'maxpool', 'gemm', 'globalaveragepool'} | {} | succeeded |
resnet101-v2-7.onnx | {'batchnormalization', 'reshape', 'conv', 'relu', 'add', 'maxpool', 'gemm', 'globalaveragepool'} | {} | succeeded |
resnet152-v1-7.onnx | {'flatten', 'batchnormalization', 'conv', 'relu', 'add', 'maxpool', 'gemm', 'globalaveragepool'} | {} | succeeded |
resnet152-v2-7.onnx | {'batchnormalization', 'reshape', 'conv', 'relu', 'add', 'maxpool', 'gemm', 'globalaveragepool'} | {} | succeeded |
resnet18-v1-7.onnx | {'flatten', 'batchnormalization', 'conv', 'relu', 'add', 'maxpool', 'gemm', 'globalaveragepool'} | {} | succeeded |
resnet18-v2-7.onnx | {'batchnormalization', 'reshape', 'conv', 'relu', 'add', 'maxpool', 'gemm', 'globalaveragepool'} | {} | succeeded |
resnet34-v1-7.onnx | {'flatten', 'batchnormalization', 'conv', 'relu', 'add', 'maxpool', 'gemm', 'globalaveragepool'} | {} | succeeded |
resnet34-v2-7.onnx | {'batchnormalization', 'reshape', 'conv', 'relu', 'add', 'maxpool', 'gemm', 'globalaveragepool'} | {} | succeeded |
resnet50-caffe2-v1-6.onnx | {'batchnormalization', 'reshape', 'softmax', 'averagepool', 'conv', 'relu', 'maxpool', 'gemm', 'sum'} | {} | succeeded |
resnet50-caffe2-v1-7.onnx | {'batchnormalization', 'reshape', 'softmax', 'averagepool', 'conv', 'relu', 'maxpool', 'gemm', 'sum'} | {} | succeeded |
resnet50-caffe2-v1-8.onnx | {'batchnormalization', 'reshape', 'softmax', 'averagepool', 'conv', 'relu', 'maxpool', 'gemm', 'sum'} | {} | succeeded |
resnet50-caffe2-v1-9.onnx | {'batchnormalization', 'reshape', 'softmax', 'averagepool', 'conv', 'relu', 'maxpool', 'gemm', 'sum'} | {} | succeeded |
resnet50-v1-12-int8.onnx | {'flatten', 'quantizelinear', 'dequantizelinear', 'qlinearconv', 'qlinearadd', 'qlinearmatmul', 'maxpool', 'qlinearglobalaveragepool'} | {'quantizelinear', 'dequantizelinear', 'qlinearconv', 'qlinearadd', 'qlinearmatmul', 'qlinearglobalaveragepool'} | error: not ranked (message repeated many times) |
resnet50-v1-12.onnx | {'flatten', 'batchnormalization', 'conv', 'relu', 'add', 'maxpool', 'gemm', 'globalaveragepool'} | {} | succeeded |
resnet50-v1-7.onnx | {'flatten', 'batchnormalization', 'conv', 'relu', 'add', 'maxpool', 'gemm', 'globalaveragepool'} | {} | succeeded |
resnet50-v2-7.onnx | {'batchnormalization', 'reshape', 'conv', 'relu', 'add', 'maxpool', 'gemm', 'globalaveragepool'} | {} | succeeded |
retinanet-9.onnx | {'batchnormalization', 'conv', 'sigmoid', 'relu', 'add', 'upsample', 'maxpool'} | {} | succeeded |
roberta-base-11.onnx ['--repeatOnnxTransform=1'] | {'not', 'constant', 'pow', 'add', 'gemm', 'equal', 'cumsum', 'div', 'reshape', 'concat', 'sub', 'reducemean', 'constantofshape', 'mul', 'gather', 'cast', 'unsqueeze', 'shape', 'tanh', 'softmax', 'erf', 'transpose', 'sqrt', 'matmul'} | {} | succeeded |
roberta-sequence-classification-9.onnx ['--repeatOnnxTransform=1'] | {'constant', 'pow', 'add', 'gemm', 'squeeze', 'div', 'reshape', 'concat', 'sub', 'reducemean', 'constantofshape', 'mul', 'gather', 'cast', 'unsqueeze', 'shape', 'nonzero', 'expand', 'tanh', 'softmax', 'erf', 'transpose', 'sqrt', 'matmul'} | {} | succeeded |
shufflenet-6.onnx | {'batchnormalization', 'reshape', 'concat', 'softmax', 'averagepool', 'conv', 'transpose', 'relu', 'maxpool', 'gemm', 'sum'} | {} | succeeded |
shufflenet-7.onnx | {'batchnormalization', 'reshape', 'concat', 'softmax', 'averagepool', 'conv', 'transpose', 'relu', 'maxpool', 'gemm', 'sum'} | {} | succeeded |
shufflenet-8.onnx | {'batchnormalization', 'reshape', 'concat', 'softmax', 'averagepool', 'conv', 'transpose', 'relu', 'maxpool', 'gemm', 'sum'} | {} | succeeded |
shufflenet-9.onnx | {'batchnormalization', 'reshape', 'concat', 'softmax', 'averagepool', 'conv', 'transpose', 'relu', 'maxpool', 'gemm', 'sum'} | {} | succeeded |
shufflenet-v2-10.onnx | {'batchnormalization', 'reshape', 'concat', 'constant', 'conv', 'transpose', 'relu', 'gemm', 'maxpool', 'reducemean', 'split'} | {} | succeeded |
squeezenet1.0-3.onnx | {'concat', 'softmax', 'conv', 'relu', 'maxpool', 'dropout', 'globalaveragepool'} | {} | succeeded |
squeezenet1.0-6.onnx | {'concat', 'softmax', 'conv', 'relu', 'maxpool', 'dropout', 'globalaveragepool'} | {} | succeeded |
squeezenet1.0-7.onnx | {'concat', 'softmax', 'conv', 'relu', 'maxpool', 'dropout', 'globalaveragepool'} | {} | succeeded |
squeezenet1.0-8.onnx | {'concat', 'softmax', 'conv', 'relu', 'maxpool', 'dropout', 'globalaveragepool'} | {} | succeeded |
squeezenet1.0-9.onnx | {'concat', 'softmax', 'conv', 'relu', 'maxpool', 'dropout', 'globalaveragepool'} | {} | succeeded |
squeezenet1.1-7.onnx | {'reshape', 'concat', 'averagepool', 'conv', 'relu', 'maxpool', 'dropout'} | {} | succeeded |
ssd-10.onnx | {'constant', 'conv', 'nonmaxsuppression', 'add', 'squeeze', 'reshape', 'reducemin', 'concat', 'sub', 'maxpool', 'constantofshape', 'topk', 'mul', 'exp', 'gather', 'batchnormalization', 'slice', 'unsqueeze', 'relu', 'shape', 'softmax', 'transpose', 'cast'} | {'nonmaxsuppression', 'topk'} | error: onnx.NonMaxSuppression: inferShapes() not implemented error: shape inference failed |
ssd_mobilenet_v1_10.onnx | {'clip', 'conv', 'min', 'add', 'split', 'squeeze', 'div', 'reshape', 'concat', 'sub', 'constantofshape', 'mul', 'exp', 'gather', 'slice', 'unsqueeze', 'loop', 'sigmoid', 'tile', 'shape', 'less', 'transpose', 'cast'} | {} | error: scales() and sizes() can not both None/not None error: shape inference failed error: onnx.NonMaxSuppression: inferShapes() not implemented error: shape inference failed onnx-mlir: /home/tungld/dl/llvm-project/mlir/include/mlir/IR/Types.h:245: U mlir::Type::cast() const [with U = mlir::MemRefType]: Assertion `isa<U>()' failed. |
super-resolution-10.onnx | {'reshape', 'constant', 'conv', 'transpose', 'relu'} | {} | succeeded |
t5-decoder-with-lm-head-12.onnx | {'constant', 'pow', 'min', 'add', 'div', 'reshape', 'concat', 'sub', 'range', 'constantofshape', 'reducemean', 'mul', 'gather', 'cast', 'log', 'unsqueeze', 'tile', 'where', 'shape', 'lessorequal', 'relu', 'neg', 'less', 'max', 'softmax', 'transpose', 'sqrt', 'matmul'} | {} | succeeded |
t5-encoder-12.onnx ['--repeatOnnxTransform=1'] | {'constant', 'pow', 'min', 'add', 'div', 'reshape', 'concat', 'abs', 'sub', 'range', 'reducemean', 'constantofshape', 'mul', 'gather', 'cast', 'log', 'unsqueeze', 'where', 'shape', 'relu', 'neg', 'less', 'softmax', 'transpose', 'sqrt', 'matmul'} | {} | succeeded |
tiny-yolov3-11.onnx | {'round', 'conv', 'nonmaxsuppression', 'add', 'squeeze', 'div', 'reshape', 'reducemin', 'concat', 'sub', 'maxpool', 'leakyrelu', 'mul', 'exp', 'batchnormalization', 'slice', 'unsqueeze', 'loop', 'sigmoid', 'tile', 'shape', 'identity', 'resize', 'ceil', 'transpose', 'cast'} | {'nonmaxsuppression'} | SUCCEEDED |
tinyyolov2-7.onnx | {'mul', 'batchnormalization', 'conv', 'add', 'maxpool', 'leakyrelu'} | {} | succeeded |
tinyyolov2-8.onnx | {'mul', 'batchnormalization', 'conv', 'add', 'maxpool', 'leakyrelu'} | {} | succeeded |
udnie-8.onnx | {'instancenormalization', 'pad', 'conv', 'relu', 'add', 'upsample'} | {} | succeeded |
udnie-9.onnx | {'mul', 'gather', 'div', 'instancenormalization', 'constant', 'concat', 'pad', 'slice', 'unsqueeze', 'conv', 'relu', 'add', 'shape', 'upsample', 'cast', 'floor'} | {} | succeeded |
version-rfb-320.onnx | {'mul', 'exp', 'gather', 'batchnormalization', 'div', 'reshape', 'concat', 'constant', 'softmax', 'slice', 'unsqueeze', 'conv', 'transpose', 'relu', 'add', 'shape', 'sub'} | {} | succeeded |
version-rfb-640.onnx | {'mul', 'exp', 'gather', 'batchnormalization', 'div', 'reshape', 'concat', 'constant', 'softmax', 'slice', 'unsqueeze', 'conv', 'transpose', 'relu', 'add', 'shape', 'sub'} | {} | succeeded |
vgg16-7.onnx | {'flatten', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
vgg16-bn-7.onnx | {'flatten', 'batchnormalization', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
vgg19-7.onnx | {'flatten', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
vgg19-bn-7.onnx | {'flatten', 'batchnormalization', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
vgg19-caffe2-6.onnx | {'reshape', 'softmax', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
vgg19-caffe2-7.onnx | {'reshape', 'softmax', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
vgg19-caffe2-8.onnx | {'reshape', 'softmax', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
vgg19-caffe2-9.onnx | {'reshape', 'softmax', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
vgg_ilsvrc_16_age_chalearn_iccv2015.onnx | {'reshape', 'softmax', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
vgg_ilsvrc_16_age_imdb_wiki.onnx | {'reshape', 'softmax', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
vgg_ilsvrc_16_gender_imdb_wiki.onnx | {'reshape', 'softmax', 'conv', 'relu', 'maxpool', 'gemm', 'dropout'} | {} | succeeded |
yolov2-coco-9.onnx | {'batchnormalization', 'reshape', 'constant', 'concat', 'conv', 'transpose', 'maxpool', 'leakyrelu'} | {} | succeeded |
yolov3-10.onnx | {'conv', 'nonmaxsuppression', 'add', 'squeeze', 'div', 'reshape', 'reducemin', 'concat', 'sub', 'leakyrelu', 'mul', 'exp', 'gather', 'batchnormalization', 'slice', 'unsqueeze', 'loop', 'sigmoid', 'tile', 'shape', 'resize', 'ceil', 'transpose', 'cast'} | {'nonmaxsuppression'} | error: scales() and sizes() can not both None/not None error: shape inference failed |
yolov4.onnx | {'mul', 'exp', 'gather', 'reshape', 'tanh', 'concat', 'slice', 'log', 'transpose', 'conv', 'sigmoid', 'add', 'shape', 'maxpool', 'cast', 'resize', 'leakyrelu', 'split'} | {} | succeeded |
zfnet512-6.onnx | {'reshape', 'softmax', 'lrn', 'conv', 'relu', 'maxpool', 'gemm'} | {} | succeeded |
zfnet512-7.onnx | {'reshape', 'softmax', 'lrn', 'conv', 'relu', 'maxpool', 'gemm'} | {} | succeeded |
zfnet512-8.onnx | {'reshape', 'softmax', 'lrn', 'conv', 'relu', 'maxpool', 'gemm'} | {} | succeeded |
zfnet512-9.onnx | {'reshape', 'softmax', 'lrn', 'conv', 'relu', 'maxpool', 'gemm'} | {} | succeeded |
Operator name | Count | Supported in onnx-mlir |
---|---|---|
conv | 107 | supported |
relu | 99 | supported |
maxpool | 89 | supported |
reshape | 80 | supported |
gemm | 70 | supported |
softmax | 63 | supported |
add | 59 | supported |
concat | 57 | supported |
dropout | 44 | supported |
batchnormalization | 42 | supported |
mul | 34 | supported |
unsqueeze | 32 | supported |
averagepool | 29 | supported |
lrn | 27 | supported |
shape | 26 | supported |
transpose | 26 | supported |
gather | 25 | supported |
cast | 23 | supported |
constant | 22 | supported |
globalaveragepool | 21 | supported |
slice | 21 | supported |
div | 20 | supported |
sub | 20 | supported |
flatten | 14 | supported |
matmul | 14 | supported |
squeeze | 13 | supported |
constantofshape | 12 | supported |
upsample | 11 | supported |
instancenormalization | 10 | supported |
sum | 10 | supported |
pad | 10 | supported |
sqrt | 10 | supported |
exp | 9 | supported |
reducemean | 9 | supported |
pow | 8 | supported |
split | 8 | supported |
sigmoid | 8 | supported |
floor | 7 | supported |
resize | 7 | supported |
tanh | 7 | supported |
leakyrelu | 6 | supported |
log | 6 | supported |
clip | 6 | supported |
nonzero | 5 | supported |
nonmaxsuppression | 5 | SUPPORTED |
reducemin | 5 | supported |
tile | 5 | supported |
identity | 4 | supported |
less | 4 | supported |
topk | 3 | not supported |
loop | 3 | supported |
ceil | 3 | supported |
min | 3 | supported |
equal | 3 | supported |
where | 3 | supported |
expand | 3 | supported |
greater | 2 | supported |
not | 2 | supported |
reciprocal | 2 | supported |
abs | 2 | supported |
range | 2 | supported |
neg | 2 | supported |
scatter | 2 | not supported |
roialign | 2 | not supported |
erf | 2 | supported |
round | 1 | supported |
onehot | 1 | supported |
argmax | 1 | supported |
convtranspose | 1 | not supported |
hardmax | 1 | SUPPORTED |
qlinearglobalaveragepool | 1 | not supported |
scan | 1 | supported |
qlinearadd | 1 | not supported |
categorymapper | 1 | not supported |
reducesum | 1 | supported |
qlinearmatmul | 1 | not supported |
reducemax | 1 | supported |
quantizelinear | 1 | not supported |
lstm | 1 | supported |
cumsum | 1 | supported |
dequantizelinear | 1 | not supported |
and | 1 | supported |
prelu | 1 | supported |
lessorequal | 1 | supported |
qlinearconv | 1 | not supported |
max | 1 | supported |
compress | 1 | SUPPORTED |
Tung: Update (Oct. 27): newly supported ops are Compress, Hardmax, and NonMaxSuppression. tiny-yolov3-11 can now be compiled.
We examined 116 out of the 128 models in the ONNX model zoo (12 models are excluded because they use a very old opset, < 3).
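Side note: per-model operator inventories like those in the table below can be collected with a short script. A minimal sketch, assuming only the `onnx` Python package and a local directory of zoo models (the directory layout is hypothetical); this is not necessarily the exact script used to produce these tables:

```python
# Sketch: list the (lower-cased) operator types used by each ONNX model in a directory.
# Only the top-level graph is inspected; subgraphs of If/Loop/Scan are not traversed here.
import glob
import onnx

def ops_in_model(path):
    model = onnx.load(path)
    return {node.op_type.lower() for node in model.graph.node}

for model_path in sorted(glob.glob("model_zoo/*.onnx")):  # hypothetical layout
    print(model_path, "|", sorted(ops_in_model(model_path)))
```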
Out of 116 models:
ONNX model | Ops in the model | Ops not supported in onnx-mlir | Compilable with onnx-mlir |
---|---|---|---|
age_googlenet.onnx | {'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'} | {} | succeeded |
arcfaceresnet100-8.onnx | {'batchnormalization', 'prelu', 'identity', 'add', 'conv', 'reshape', 'dropout', 'flatten', 'mul', 'gemm', 'sub'} | {} | succeeded |
bertsquad-10.onnx | {'gather', 'transpose', 'reshape', 'pow', 'sub', 'slice', 'constantofshape', 'softmax', 'cast', 'mul', 'tanh', 'identity', 'split', 'add', 'reciprocal', 'unsqueeze', 'shape', 'squeeze', 'onehot', 'matmul', 'sqrt', 'concat', 'reducemean'} | {} | succeeded |
bertsquad-8.onnx | {'gather', 'transpose', 'reshape', 'pow', 'sub', 'slice', 'tile', 'softmax', 'cast', 'mul', 'tanh', 'identity', 'split', 'add', 'reciprocal', 'unsqueeze', 'shape', 'squeeze', 'matmul', 'sqrt', 'concat', 'reducemean'} | {} | succeeded |
bidaf-9.onnx | {'abs', 'gather', 'sigmoid', 'transpose', 'compress', 'conv', 'relu', 'reshape', 'sum', 'sub', 'lstm', 'slice', 'reducemax', 'constantofshape', 'softmax', 'argmax', 'cast', 'mul', 'add', 'unsqueeze', 'clip', 'ceil', 'categorymapper', 'squeeze', 'matmul', 'log', 'scan', 'hardmax', 'reducesum', 'dropout', 'concat', 'shape'} | {'categorymapper'} | onnx-mlir: /home/tungld/dl/onnx-mlir/src/Builder/SymbolTable.hpp:129: void onnx_mlir::SymbolMapping |
bvlcalexnet-6.onnx | {'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'dropout', 'gemm'} | {} | succeeded |
bvlcalexnet-7.onnx | {'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'dropout', 'gemm'} | {} | succeeded |
bvlcalexnet-8.onnx | {'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'dropout', 'gemm'} | {} | succeeded |
bvlcalexnet-9.onnx | {'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'dropout', 'gemm'} | {} | succeeded |
caffenet-6.onnx | {'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'dropout', 'gemm'} | {} | succeeded |
caffenet-7.onnx | {'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'dropout', 'gemm'} | {} | succeeded |
caffenet-8.onnx | {'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'dropout', 'gemm'} | {} | succeeded |
caffenet-9.onnx | {'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'dropout', 'gemm'} | {} | succeeded |
candy-8.onnx | {'add', 'upsample', 'relu', 'conv', 'pad', 'instancenormalization'} | {} | succeeded |
candy-9.onnx | {'slice', 'gather', 'add', 'mul', 'div', 'cast', 'relu', 'conv', 'floor', 'unsqueeze', 'upsample', 'pad', 'instancenormalization', 'concat', 'constant', 'shape'} | {} | succeeded |
densenet-6.onnx | {'batchnormalization', 'add', 'maxpool', 'globalaveragepool', 'conv', 'relu', 'unsqueeze', 'averagepool', 'concat', 'mul'} | {} | succeeded |
densenet-7.onnx | {'batchnormalization', 'add', 'maxpool', 'globalaveragepool', 'conv', 'relu', 'unsqueeze', 'averagepool', 'concat', 'mul'} | {} | succeeded |
densenet-8.onnx | {'batchnormalization', 'add', 'maxpool', 'globalaveragepool', 'conv', 'relu', 'unsqueeze', 'averagepool', 'concat', 'mul'} | {} | succeeded |
densenet-9.onnx | {'batchnormalization', 'add', 'maxpool', 'globalaveragepool', 'conv', 'relu', 'unsqueeze', 'averagepool', 'concat', 'mul'} | {} | succeeded |
efficientnet-lite4-11.onnx | {'batchnormalization', 'squeeze', 'matmul', 'add', 'transpose', 'conv', 'softmax', 'clip', 'averagepool'} | {} | succeeded |
emotion-ferplus-7.onnx | {'matmul', 'add', 'maxpool', 'div', 'conv', 'reshape', 'relu', 'dropout', 'sub'} | {} | succeeded |
emotion-ferplus-8.onnx | {'matmul', 'add', 'maxpool', 'div', 'conv', 'reshape', 'relu', 'dropout', 'sub'} | {} | succeeded |
fasterrcnn-10.onnx | {'resize', 'gather', 'roialign', 'div', 'transpose', 'sigmoid', 'conv', 'relu', 'reshape', 'flatten', 'constant', 'floor', 'sub', 'nonzero', 'slice', 'maxpool', 'greater', 'constantofshape', 'softmax', 'cast', 'mul', 'topk', 'add', 'exp', 'nonmaxsuppression', 'unsqueeze', 'clip', 'gemm', 'squeeze', 'log', 'reducemin', 'expand', 'equal', 'sqrt', 'scatter', 'concat', 'shape'} | {'roialign', 'scatter'} | error: onnx.RoiAlign: is not supported at this time. Please open an issue on https://github.com/onnx/onnx-mlir and/or consider contribute code. Error encountered in shape inference. error: shape inference failed |
fcn-resnet101-11.onnx | {'slice', 'resize', 'gather', 'maxpool', 'add', 'relu', 'conv', 'unsqueeze', 'cast', 'concat', 'constant', 'shape'} | {} | error: these modes() or coordinate_transformation_mode() not implemented yet. mode: linear coordinate_transformation_mode: pytorch_half_pixel error: shape inference failed |
fcn-resnet50-11.onnx | {'slice', 'resize', 'gather', 'maxpool', 'add', 'relu', 'conv', 'unsqueeze', 'cast', 'concat', 'constant', 'shape'} | {} | error: these modes() or coordinate_transformation_mode() not implemented yet. mode: linear coordinate_transformation_mode: pytorch_half_pixel error: shape inference failed |
gender_googlenet.onnx | {'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'} | {} | succeeded |
googlenet-3.onnx | {'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'} | {} | succeeded |
googlenet-6.onnx | {'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'} | {} | succeeded |
googlenet-7.onnx | {'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'} | {} | succeeded |
googlenet-8.onnx | {'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'} | {} | succeeded |
googlenet-9.onnx | {'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'} | {} | succeeded |
gpt2-10.onnx | {'gather', 'div', 'transpose', 'reshape', 'pow', 'constant', 'sub', 'nonzero', 'slice', 'constantofshape', 'softmax', 'cast', 'mul', 'tanh', 'split', 'add', 'unsqueeze', 'shape', 'gemm', 'squeeze', 'matmul', 'sqrt', 'concat', 'reducemean'} | {} | succeeded |
gpt2-lm-head-10.onnx | {'gather', 'div', 'transpose', 'reshape', 'pow', 'constant', 'sub', 'nonzero', 'slice', 'constantofshape', 'softmax', 'cast', 'where', 'mul', 'tanh', 'split', 'add', 'unsqueeze', 'shape', 'gemm', 'squeeze', 'matmul', 'sqrt', 'concat', 'reducemean'} | {} | loc("onnx.Cast"): error: 'arith.constant' op integer return type must be signless |
inception-v1-6.onnx | {'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'} | {} | succeeded |
inception-v1-7.onnx | {'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'} | {} | succeeded |
inception-v1-8.onnx | {'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'} | {} | succeeded |
inception-v1-9.onnx | {'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'} | {} | succeeded |
inception-v2-6.onnx | {'batchnormalization', 'add', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'averagepool', 'concat', 'mul', 'gemm'} | {} | onnx-mlir: /home/tungld/dl/llvm-project/mlir/include/mlir/IR/Types.h:235: bool mlir::Type::isa() const [with U = mlir::RankedTensorType]: Assertion `impl && "isa<> used on a null type."' failed. |
inception-v2-7.onnx | {'batchnormalization', 'add', 'maxpool', 'conv', 'relu', 'reshape', 'unsqueeze', 'softmax', 'averagepool', 'concat', 'mul', 'gemm'} | {} | succeeded |
inception-v2-8.onnx | {'batchnormalization', 'add', 'maxpool', 'conv', 'relu', 'reshape', 'unsqueeze', 'softmax', 'averagepool', 'concat', 'mul', 'gemm'} | {} | succeeded |
inception-v2-9.onnx | {'batchnormalization', 'add', 'maxpool', 'conv', 'relu', 'reshape', 'unsqueeze', 'softmax', 'averagepool', 'concat', 'mul', 'gemm'} | {} | succeeded |
maskrcnn-10.onnx | {'resize', 'gather', 'roialign', 'div', 'transpose', 'sigmoid', 'conv', 'relu', 'reshape', 'flatten', 'constant', 'floor', 'sub', 'nonzero', 'slice', 'maxpool', 'and', 'greater', 'constantofshape', 'softmax', 'cast', 'mul', 'split', 'topk', 'add', 'exp', 'nonmaxsuppression', 'unsqueeze', 'clip', 'gemm', 'convtranspose', 'squeeze', 'log', 'reducemin', 'expand', 'less', 'equal', 'sqrt', 'not', 'scatter', 'concat', 'shape'} | {'roialign', 'convtranspose', 'scatter'} | error: onnx.RoiAlign: is not supported at this time. Please open an issue on https://github.com/onnx/onnx-mlir and/or consider contribute code. Error encountered in shape inference. error: shape inference failed |
mnist-7.onnx | {'matmul', 'add', 'maxpool', 'relu', 'conv', 'reshape'} | {} | succeeded |
mnist-8.onnx | {'matmul', 'add', 'maxpool', 'relu', 'conv', 'reshape'} | {} | succeeded |
mobilenetv2-7.onnx | {'gemm', 'gather', 'add', 'globalaveragepool', 'conv', 'reshape', 'unsqueeze', 'clip', 'concat', 'constant', 'shape'} | {} | succeeded |
mosaic-8.onnx | {'add', 'upsample', 'relu', 'conv', 'pad', 'instancenormalization'} | {} | succeeded |
mosaic-9.onnx | {'slice', 'gather', 'add', 'mul', 'div', 'cast', 'relu', 'conv', 'floor', 'unsqueeze', 'upsample', 'pad', 'instancenormalization', 'concat', 'constant', 'shape'} | {} | succeeded |
pointilism-8.onnx | {'add', 'upsample', 'relu', 'conv', 'pad', 'instancenormalization'} | {} | succeeded |
pointilism-9.onnx | {'slice', 'gather', 'add', 'mul', 'div', 'cast', 'relu', 'conv', 'floor', 'unsqueeze', 'upsample', 'pad', 'instancenormalization', 'concat', 'constant', 'shape'} | {} | succeeded |
rain-princess-8.onnx | {'add', 'upsample', 'relu', 'conv', 'pad', 'instancenormalization'} | {} | succeeded |
rain-princess-9.onnx | {'slice', 'gather', 'add', 'mul', 'div', 'cast', 'relu', 'conv', 'floor', 'unsqueeze', 'upsample', 'pad', 'instancenormalization', 'concat', 'constant', 'shape'} | {} | succeeded |
rcnn-ilsvrc13-6.onnx | {'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'dropout', 'gemm'} | {} | succeeded |
rcnn-ilsvrc13-7.onnx | {'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'dropout', 'gemm'} | {} | succeeded |
rcnn-ilsvrc13-8.onnx | {'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'dropout', 'gemm'} | {} | succeeded |
rcnn-ilsvrc13-9.onnx | {'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'dropout', 'gemm'} | {} | succeeded |
resnet101-duc-7.onnx | {'batchnormalization', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'sum'} | {} | succeeded |
resnet101-v1-7.onnx | {'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'flatten', 'gemm'} | {} | succeeded |
resnet101-v2-7.onnx | {'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'reshape', 'gemm'} | {} | succeeded |
resnet152-v1-7.onnx | {'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'flatten', 'gemm'} | {} | succeeded |
resnet152-v2-7.onnx | {'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'reshape', 'gemm'} | {} | succeeded |
resnet18-v1-7.onnx | {'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'flatten', 'gemm'} | {} | succeeded |
resnet18-v2-7.onnx | {'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'reshape', 'gemm'} | {} | succeeded |
resnet34-v1-7.onnx | {'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'flatten', 'gemm'} | {} | succeeded |
resnet34-v2-7.onnx | {'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'reshape', 'gemm'} | {} | succeeded |
resnet50-caffe2-v1-6.onnx | {'batchnormalization', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'averagepool', 'sum', 'gemm'} | {} | succeeded |
resnet50-caffe2-v1-7.onnx | {'batchnormalization', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'averagepool', 'sum', 'gemm'} | {} | succeeded |
resnet50-caffe2-v1-8.onnx | {'batchnormalization', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'averagepool', 'sum', 'gemm'} | {} | succeeded |
resnet50-caffe2-v1-9.onnx | {'batchnormalization', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'averagepool', 'sum', 'gemm'} | {} | succeeded |
resnet50-v1-12-int8.onnx | {'qlinearmatmul', 'maxpool', 'qlinearglobalaveragepool', 'dequantizelinear', 'quantizelinear', 'qlinearconv', 'qlinearadd', 'flatten'} | {'qlinearmatmul', 'dequantizelinear', 'qlinearglobalaveragepool', 'quantizelinear', 'qlinearconv', 'qlinearadd'} | error: not ranked (repeated many times, once per unranked value) |
resnet50-v1-12.onnx | {'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'flatten', 'gemm'} | {} | succeeded |
resnet50-v1-7.onnx | {'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'flatten', 'gemm'} | {} | succeeded |
resnet50-v2-7.onnx | {'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'reshape', 'gemm'} | {} | succeeded |
retinanet-9.onnx | {'batchnormalization', 'maxpool', 'add', 'sigmoid', 'upsample', 'relu', 'conv'} | {} | succeeded |
roberta-base-11.onnx | {'gather', 'div', 'transpose', 'reshape', 'pow', 'constant', 'sub', 'constantofshape', 'softmax', 'cast', 'mul', 'tanh', 'add', 'unsqueeze', 'shape', 'cumsum', 'gemm', 'matmul', 'equal', 'sqrt', 'not', 'erf', 'concat', 'reducemean'} | {} | succeeded |
roberta-sequence-classification-9.onnx | {'gather', 'div', 'transpose', 'reshape', 'pow', 'constant', 'sub', 'nonzero', 'constantofshape', 'softmax', 'cast', 'mul', 'tanh', 'add', 'unsqueeze', 'shape', 'gemm', 'squeeze', 'matmul', 'expand', 'sqrt', 'erf', 'concat', 'reducemean'} | {} | succeeded |
shufflenet-6.onnx | {'batchnormalization', 'maxpool', 'transpose', 'relu', 'conv', 'reshape', 'softmax', 'averagepool', 'concat', 'sum', 'gemm'} | {} | succeeded |
shufflenet-7.onnx | {'batchnormalization', 'maxpool', 'transpose', 'relu', 'conv', 'reshape', 'softmax', 'averagepool', 'concat', 'sum', 'gemm'} | {} | succeeded |
shufflenet-8.onnx | {'batchnormalization', 'maxpool', 'transpose', 'relu', 'conv', 'reshape', 'softmax', 'averagepool', 'concat', 'sum', 'gemm'} | {} | succeeded |
shufflenet-9.onnx | {'batchnormalization', 'maxpool', 'transpose', 'relu', 'conv', 'reshape', 'softmax', 'averagepool', 'concat', 'sum', 'gemm'} | {} | succeeded |
shufflenet-v2-10.onnx | {'batchnormalization', 'gemm', 'split', 'maxpool', 'transpose', 'relu', 'conv', 'reshape', 'concat', 'constant', 'reducemean'} | {} | succeeded |
squeezenet1.0-3.onnx | {'maxpool', 'globalaveragepool', 'relu', 'conv', 'softmax', 'dropout', 'concat'} | {} | succeeded |
squeezenet1.0-6.onnx | {'maxpool', 'globalaveragepool', 'relu', 'conv', 'softmax', 'dropout', 'concat'} | {} | succeeded |
squeezenet1.0-7.onnx | {'maxpool', 'globalaveragepool', 'relu', 'conv', 'softmax', 'dropout', 'concat'} | {} | succeeded |
squeezenet1.0-8.onnx | {'maxpool', 'globalaveragepool', 'relu', 'conv', 'softmax', 'dropout', 'concat'} | {} | succeeded |
squeezenet1.0-9.onnx | {'maxpool', 'globalaveragepool', 'relu', 'conv', 'softmax', 'dropout', 'concat'} | {} | succeeded |
squeezenet1.1-7.onnx | {'maxpool', 'relu', 'conv', 'reshape', 'dropout', 'averagepool', 'concat'} | {} | succeeded |
ssd-10.onnx | {'gather', 'transpose', 'relu', 'conv', 'reshape', 'constant', 'sub', 'slice', 'maxpool', 'softmax', 'constantofshape', 'cast', 'mul', 'topk', 'add', 'exp', 'nonmaxsuppression', 'unsqueeze', 'batchnormalization', 'squeeze', 'reducemin', 'concat', 'shape'} | {} | succeeded |
ssd_mobilenet_v1_10.onnx | {'loop', 'gather', 'div', 'transpose', 'sigmoid', 'reshape', 'conv', 'sub', 'slice', 'tile', 'constantofshape', 'cast', 'mul', 'split', 'add', 'exp', 'unsqueeze', 'clip', 'min', 'squeeze', 'less', 'concat', 'shape'} | {} | error: these modes() or coordinate_transformation_mode() not implemented yet. mode: linear coordinate_transformation_mode: half_pixel error: shape inference failed error: onnx.If: is not supported at this time. Please open an issue on https://github.com/onnx/onnx-mlir and/or consider contribute code. Error encountered in shape inference. error: shape inference failed (the two errors above are repeated four times) Loop op doesn't support dynamic dimensions for scan output. UNREACHABLE executed at /home/tungld/dl/onnx-mlir/src/Conversion/ONNXToKrnl/ControlFlow/Loop.cpp:255! |
super-resolution-10.onnx | {'transpose', 'relu', 'conv', 'reshape', 'constant'} | {} | succeeded |
t5-decoder-with-lm-head-12.onnx | {'gather', 'div', 'transpose', 'reshape', 'neg', 'range', 'relu', 'lessorequal', 'pow', 'constant', 'sub', 'tile', 'constantofshape', 'softmax', 'cast', 'where', 'mul', 'add', 'max', 'unsqueeze', 'shape', 'min', 'matmul', 'log', 'less', 'sqrt', 'concat', 'reducemean'} | {} | succeeded |
t5-encoder-12.onnx | {'abs', 'gather', 'div', 'transpose', 'reshape', 'neg', 'range', 'relu', 'pow', 'constant', 'sub', 'constantofshape', 'softmax', 'cast', 'where', 'mul', 'add', 'unsqueeze', 'shape', 'min', 'matmul', 'log', 'less', 'sqrt', 'concat', 'reducemean'} | {} | succeeded |
tiny-yolov3-11.onnx | {'resize', 'loop', 'div', 'transpose', 'sigmoid', 'conv', 'reshape', 'sub', 'slice', 'maxpool', 'tile', 'cast', 'mul', 'identity', 'exp', 'add', 'nonmaxsuppression', 'unsqueeze', 'ceil', 'batchnormalization', 'squeeze', 'round', 'reducemin', 'leakyrelu', 'concat', 'shape'} | {} | succeeded |
tinyyolov2-7.onnx | {'batchnormalization', 'add', 'maxpool', 'conv', 'leakyrelu', 'mul'} | {} | succeeded |
tinyyolov2-8.onnx | {'batchnormalization', 'add', 'maxpool', 'conv', 'leakyrelu', 'mul'} | {} | succeeded |
udnie-8.onnx | {'add', 'upsample', 'relu', 'conv', 'pad', 'instancenormalization'} | {} | succeeded |
udnie-9.onnx | {'slice', 'gather', 'add', 'mul', 'div', 'cast', 'relu', 'conv', 'floor', 'unsqueeze', 'upsample', 'pad', 'instancenormalization', 'concat', 'constant', 'shape'} | {} | succeeded |
version-rfb-320.onnx | {'batchnormalization', 'slice', 'gather', 'add', 'exp', 'mul', 'transpose', 'div', 'relu', 'conv', 'reshape', 'unsqueeze', 'softmax', 'concat', 'constant', 'shape', 'sub'} | {} | succeeded |
version-rfb-640.onnx | {'batchnormalization', 'slice', 'gather', 'add', 'exp', 'mul', 'transpose', 'div', 'relu', 'conv', 'reshape', 'unsqueeze', 'softmax', 'concat', 'constant', 'shape', 'sub'} | {} | succeeded |
vgg16-7.onnx | {'maxpool', 'relu', 'conv', 'dropout', 'flatten', 'gemm'} | {} | succeeded |
vgg16-bn-7.onnx | {'batchnormalization', 'maxpool', 'relu', 'conv', 'dropout', 'flatten', 'gemm'} | {} | succeeded |
vgg19-7.onnx | {'maxpool', 'relu', 'conv', 'dropout', 'flatten', 'gemm'} | {} | succeeded |
vgg19-bn-7.onnx | {'batchnormalization', 'maxpool', 'relu', 'conv', 'dropout', 'flatten', 'gemm'} | {} | succeeded |
vgg19-caffe2-6.onnx | {'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'gemm'} | {} | succeeded |
vgg19-caffe2-7.onnx | {'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'gemm'} | {} | succeeded |
vgg19-caffe2-8.onnx | {'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'gemm'} | {} | succeeded |
vgg19-caffe2-9.onnx | {'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'gemm'} | {} | succeeded |
vgg_ilsvrc_16_age_chalearn_iccv2015.onnx | {'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'gemm'} | {} | succeeded |
vgg_ilsvrc_16_age_imdb_wiki.onnx | {'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'gemm'} | {} | succeeded |
vgg_ilsvrc_16_gender_imdb_wiki.onnx | {'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'gemm'} | {} | succeeded |
yolov2-coco-9.onnx | {'batchnormalization', 'maxpool', 'transpose', 'conv', 'reshape', 'leakyrelu', 'concat', 'constant'} | {} | succeeded |
yolov3-10.onnx | {'resize', 'loop', 'gather', 'div', 'transpose', 'sigmoid', 'conv', 'reshape', 'sub', 'slice', 'tile', 'cast', 'mul', 'add', 'exp', 'nonmaxsuppression', 'unsqueeze', 'ceil', 'batchnormalization', 'squeeze', 'reducemin', 'leakyrelu', 'concat', 'shape'} | {} | succeeded |
yolov4.onnx | {'log', 'slice', 'resize', 'gather', 'split', 'exp', 'add', 'mul', 'transpose', 'maxpool', 'sigmoid', 'conv', 'reshape', 'cast', 'leakyrelu', 'concat', 'tanh', 'shape'} | {} | succeeded |
zfnet512-6.onnx | {'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'gemm'} | {} | succeeded |
zfnet512-7.onnx | {'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'gemm'} | {} | succeeded |
zfnet512-8.onnx | {'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'gemm'} | {} | succeeded |
zfnet512-9.onnx | {'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'gemm'} | {} | succeeded |
It looks like onnx-mlir supports all the operators used by 112 of these models; of those, 107 actually compile and 5 fail to compile.
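For reproducing the compile check, a small driver over all downloaded models is enough. A minimal sketch, assuming the `onnx-mlir` binary is on the PATH; the `--EmitLib` target mirrors the command used earlier in this thread, but the exact invocation behind the table above is an assumption:

```python
# Sketch: try to compile every model with onnx-mlir and tabulate succeeded/failed.
import glob
import subprocess

results = {}
for model_path in sorted(glob.glob("model_zoo/*.onnx")):  # hypothetical layout
    proc = subprocess.run(["onnx-mlir", "--EmitLib", model_path],
                          capture_output=True, text=True)
    results[model_path] = "succeeded" if proc.returncode == 0 else "failed"
    if proc.returncode != 0:
        # Keep the tail of stderr so the failure reason can be put in the table.
        print(model_path, "|", " ".join(proc.stderr.splitlines()[-2:]))

print(sum(r == "succeeded" for r in results.values()), "of", len(results), "models compiled")
```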
Operator name | Count | Supported in onnx-mlir |
---|---|---|
conv | 107 | supported |
relu | 99 | supported |
maxpool | 89 | supported |
reshape | 80 | supported |
gemm | 70 | supported |
softmax | 63 | supported |
add | 59 | supported |
concat | 57 | supported |
dropout | 44 | supported |
batchnormalization | 42 | supported |
mul | 34 | supported |
unsqueeze | 32 | supported |
averagepool | 29 | supported |
lrn | 27 | supported |
shape | 26 | supported |
transpose | 26 | supported |
gather | 25 | supported |
cast | 23 | supported |
constant | 22 | supported |
slice | 21 | supported |
globalaveragepool | 21 | supported |
sub | 20 | supported |
div | 20 | supported |
flatten | 14 | supported |
matmul | 14 | supported |
squeeze | 13 | supported |
constantofshape | 12 | supported |
upsample | 11 | supported |
sum | 10 | supported |
instancenormalization | 10 | supported |
pad | 10 | supported |
sqrt | 10 | supported |
exp | 9 | supported |
reducemean | 9 | supported |
split | 8 | supported |
sigmoid | 8 | supported |
pow | 8 | supported |
tanh | 7 | supported |
resize | 7 | supported |
floor | 7 | supported |
clip | 6 | supported |
leakyrelu | 6 | supported |
log | 6 | supported |
nonmaxsuppression | 5 | supported |
reducemin | 5 | supported |
nonzero | 5 | supported |
tile | 5 | supported |
identity | 4 | supported |
less | 4 | supported |
loop | 3 | supported |
topk | 3 | supported |
expand | 3 | supported |
equal | 3 | supported |
where | 3 | supported |
ceil | 3 | supported |
min | 3 | supported |
abs | 2 | supported |
neg | 2 | supported |
reciprocal | 2 | supported |
not | 2 | supported |
roialign | 2 | not supported |
range | 2 | supported |
greater | 2 | supported |
scatter | 2 | supported |
erf | 2 | supported |
prelu | 1 | supported |
qlinearmatmul | 1 | not supported |
compress | 1 | supported |
and | 1 | supported |
dequantizelinear | 1 | not supported |
convtranspose | 1 | not supported |
categorymapper | 1 | supported |
round | 1 | supported |
quantizelinear | 1 | not supported |
lessorequal | 1 | supported |
qlinearadd | 1 | not supported |
lstm | 1 | supported |
reducemax | 1 | supported |
argmax | 1 | supported |
max | 1 | supported |
qlinearconv | 1 | not supported |
cumsum | 1 | supported |
scan | 1 | supported |
hardmax | 1 | supported |
onehot | 1 | supported |
qlinearglobalaveragepool | 1 | not supported |
reducesum | 1 | supported |
Updated on April 28, 2022: this is the first time we have checked end-to-end runs for all models in the model zoo. Before this, we only checked the compilation phase.
Tested 165 models in the model zoo, of which 102 models run correctly (i.e., they produce correct inference results).
165 models tested: gpt2-10, gpt2-lm-head-10, bidaf-9, t5-decoder-with-lm-head-12, t5-encoder-12, bertsquad-12-int8, bertsquad-12, bertsquad-8, bertsquad-10, roberta-sequence-classification-9, roberta-base-11, super-resolution-10, arcfaceresnet100-8, emotion-ferplus-8, emotion-ferplus-2, emotion-ferplus-7, inception-v1-7, inception-v1-6, inception-v1-9, inception-v1-12-int8, inception-v1-3, inception-v1-12, inception-v1-8, googlenet-3, googlenet-9, googlenet-12-int8, googlenet-12, googlenet-8, googlenet-6, googlenet-7, inception-v2-8, inception-v2-6, inception-v2-9, inception-v2-3, inception-v2-7, mnist-7, mnist-8, mnist-1, rcnn-ilsvrc13-9, rcnn-ilsvrc13-7, rcnn-ilsvrc13-8, rcnn-ilsvrc13-3, rcnn-ilsvrc13-6, zfnet512-6, zfnet512-7, zfnet512-12, zfnet512-3, zfnet512-8, zfnet512-9, zfnet512-12-int8, caffenet-12-int8, caffenet-9, caffenet-12, caffenet-7, caffenet-6, caffenet-8, caffenet-3, mobilenetv2-12, mobilenetv2-7, mobilenetv2-12-int8, squeezenet1.1-7, squeezenet1.0-6, squeezenet1.0-12-int8, squeezenet1.0-7, squeezenet1.0-8, squeezenet1.0-3, squeezenet1.0-9, squeezenet1.0-12, densenet-8, densenet-9, densenet-3, densenet-6, densenet-7, resnet50-v1-7, resnet101-v1-7, resnet50-caffe2-v1-9, resnet50-caffe2-v1-3, resnet34-v1-7, resnet50-caffe2-v1-6, resnet50-caffe2-v1-7, resnet152-v2-7, resnet50-caffe2-v1-8, resnet18-v1-7, resnet18-v2-7, resnet34-v2-7, resnet50-v1-12-int8, resnet50-v2-7, resnet101-v2-7, resnet152-v1-7, resnet50-v1-12, efficientnet-lite4-11-int8, efficientnet-lite4-11, bvlcalexnet-9, bvlcalexnet-7, bvlcalexnet-12, bvlcalexnet-6, bvlcalexnet-3, bvlcalexnet-12-int8, bvlcalexnet-8, vgg16-12-int8, vgg19-bn-7, vgg19-caffe2-3, vgg19-caffe2-7, vgg19-7, vgg16-7, vgg16-bn-7, vgg19-caffe2-9, vgg16-12, vgg19-caffe2-6, vgg19-caffe2-8, shufflenet-3, shufflenet-v2-10, shufflenet-v2-12, shufflenet-9, shufflenet-6, shufflenet-v2-12-int8, shufflenet-7, shufflenet-8, yolov3-10, FasterRCNN-12-int8, FasterRCNN-12, FasterRCNN-10, fcn-resnet50-12, fcn-resnet50-11, fcn-resnet101-11, fcn-resnet50-12-int8, yolov4, ssd-12, ssd-12-int8, ssd-10, ResNet101-DUC-7, retinanet-9, tinyyolov2-7, tinyyolov2-8, ssd_mobilenet_v1_10, ssd_mobilenet_v1_12, ssd_mobilenet_v1_12-int8, MaskRCNN-10, tiny-yolov3-11, udnie-9, pointilism-9, mosaic-9, udnie-8, candy-8, pointilism-8, rain-princess-8, rain-princess-9, mosaic-8, candy-9
102 models passed: gpt2-10, gpt2-lm-head-10, t5-decoder-with-lm-head-12, t5-encoder-12, bertsquad-12, bertsquad-10, roberta-sequence-classification-9, roberta-base-11, super-resolution-10, arcfaceresnet100-8, emotion-ferplus-8, emotion-ferplus-7, inception-v1-7, inception-v1-6, inception-v1-9, inception-v1-12, inception-v1-8, googlenet-3, googlenet-9, googlenet-12, googlenet-8, googlenet-6, googlenet-7, inception-v2-8, inception-v2-9, inception-v2-7, mnist-7, mnist-8, rcnn-ilsvrc13-9, rcnn-ilsvrc13-7, rcnn-ilsvrc13-8, rcnn-ilsvrc13-6, zfnet512-7, zfnet512-12, zfnet512-8, zfnet512-9, caffenet-9, caffenet-12, caffenet-7, caffenet-6, caffenet-8, mobilenetv2-7, squeezenet1.1-7, squeezenet1.0-6, squeezenet1.0-7, squeezenet1.0-8, squeezenet1.0-3, squeezenet1.0-9, squeezenet1.0-12, densenet-8, densenet-9, densenet-6, densenet-7, resnet50-v1-7, resnet101-v1-7, resnet50-caffe2-v1-9, resnet34-v1-7, resnet50-caffe2-v1-6, resnet50-caffe2-v1-7, resnet152-v2-7, resnet50-caffe2-v1-8, resnet18-v1-7, resnet18-v2-7, resnet34-v2-7, resnet50-v2-7, resnet101-v2-7, resnet152-v1-7, resnet50-v1-12, efficientnet-lite4-11, bvlcalexnet-9, bvlcalexnet-7, bvlcalexnet-12, bvlcalexnet-6, bvlcalexnet-8, vgg19-caffe2-7, vgg16-bn-7, vgg19-caffe2-9, vgg16-12, vgg19-caffe2-6, vgg19-caffe2-8, shufflenet-v2-10, shufflenet-v2-12, shufflenet-9, shufflenet-6, shufflenet-7, shufflenet-8, yolov3-10, yolov4, retinanet-9, tinyyolov2-7, tinyyolov2-8, tiny-yolov3-11, udnie-9, pointilism-9, mosaic-9, udnie-8, candy-8, pointilism-8, rain-princess-8, rain-princess-9, mosaic-8, candy-9
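The pass/fail criterion here is numerically correct inference results. A minimal sketch of one way such a check can be done, assuming onnxruntime is used to produce reference outputs and that the outputs of the onnx-mlir-compiled model are obtained separately (e.g. via its Python runtime); the input feed and tolerances are assumptions:

```python
# Sketch: compare a compiled model's outputs against onnxruntime reference outputs.
import numpy as np
import onnxruntime as ort

def reference_outputs(model_path, input_feed):
    sess = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
    return sess.run(None, input_feed)

def outputs_match(reference, candidate, rtol=1e-4, atol=1e-4):
    return all(np.allclose(r, c, rtol=rtol, atol=atol)
               for r, c in zip(reference, candidate))

# Hypothetical usage: the input name/shape are model-specific, and
# `compiled_outputs` would come from running the onnx-mlir-compiled library.
# feed = {"data": np.random.rand(1, 3, 224, 224).astype(np.float32)}
# ref = reference_outputs("resnet50-v1-7.onnx", feed)
# print("PASS" if outputs_match(ref, compiled_outputs) else "FAIL")
```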
@tungld can we filter out the models that are too old? Presumably some of the models that don't compile also fail because they use data types we don't handle? Ideally, we would have a way to label each benchmark (e.g. opset, uses fp16, ...), and then we could pull a set that has (or doesn't have) certain characteristics on a per-test-machine-architecture basis.
Results when filtering out old models and quantized (int8) models:
There are 155 models in the ONNX model zoo, of which 31 are not checked because of old opsets or quantization.
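A minimal sketch of how such a filter could be implemented, assuming the `onnx` Python package; the opset threshold and the quantized-op check are assumptions based on the ops seen in the tables above:

```python
# Sketch: skip models with an old default-domain opset or with quantized operators.
import onnx

QUANT_OPS = {"QuantizeLinear", "DequantizeLinear"}

def should_skip(model_path, min_opset=7):  # the threshold of 7 is an assumption
    model = onnx.load(model_path)
    default_opsets = [imp.version for imp in model.opset_import
                      if imp.domain in ("", "ai.onnx")]
    too_old = bool(default_opsets) and min(default_opsets) < min_opset
    quantized = any(node.op_type in QUANT_OPS or node.op_type.startswith("QLinear")
                    for node in model.graph.node)
    return too_old or quantized
```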
124 models tested: mnist-7, bvlcalexnet-9, caffenet-8, mosaic-9, yolov3-12, squeezenet1.0-12, vgg16-12, bvlcalexnet-8, bertsquad-12, MaskRCNN-12, udnie-8, inception-v2-8, shufflenet-7, zfnet512-7, googlenet-7, resnet101-v1-7, ssd_mobilenet_v1_10, densenet-12, arcfaceresnet100-8, MaskRCNN-10, rcnn-ilsvrc13-7, roberta-base-11, candy-8, resnet18-v2-7, emotion-ferplus-8, tiny-yolov3-11, pointilism-9, googlenet-9, resnet50-v2-7, inception-v1-8, shufflenet-6, tinyyolov2-7, ResNet101-DUC-7, caffenet-9, t5-encoder-12, t5-decoder-with-lm-head-12, squeezenet1.0-8, inception-v1-12, fcn-resnet50-12, inception-v1-6, ssd_mobilenet_v1_12, inception-v1-7, resnet18-v1-7, gpt2-10, zfnet512-6, rain-princess-8, ssd-12, resnet50-v1-7, squeezenet1.0-6, resnet34-v2-7, resnet50-caffe2-v1-7, vgg16-bn-7, efficientnet-lite4-11, mnist-8, ssd-10, zfnet512-9, bertsquad-10, yolov3-10, vgg16-7, inception-v1-9, shufflenet-v2-12, resnet50-caffe2-v1-8, resnet101-v2-7, rcnn-ilsvrc13-8, mobilenetv2-12, tinyyolov2-8, resnet152-v1-7, bvlcalexnet-7, inception-v2-6, squeezenet1.0-7, bvlcalexnet-6, resnet34-v1-7, gpt2-lm-head-10, densenet-8, resnet50-caffe2-v1-9, emotion-ferplus-7, mosaic-8, shufflenet-9, inception-v2-7, vgg19-7, rain-princess-9, googlenet-6, googlenet-8, caffenet-7, resnet50-v1-12, retinanet-9, super-resolution-10, roberta-sequence-classification-9, vgg19-caffe2-8, zfnet512-8, zfnet512-12, udnie-9, googlenet-12, FasterRCNN-12, mobilenetv2-7, squeezenet1.0-9, shufflenet-8, bertsquad-8, fcn-resnet50-11, googlenet-3, yolov4, rcnn-ilsvrc13-9, bidaf-9, fcn-resnet101-11, FasterRCNN-10, densenet-9, vgg19-caffe2-6, resnet50-caffe2-v1-6, vgg19-caffe2-9, squeezenet1.0-3, bvlcalexnet-12, inception-v2-9, caffenet-6, pointilism-8, densenet-6, shufflenet-v2-10, vgg19-caffe2-7, rcnn-ilsvrc13-6, resnet152-v2-7, squeezenet1.1-7, densenet-7, candy-9, vgg19-bn-7, caffenet-12
102 models passed: mnist-7, bvlcalexnet-9, caffenet-8, mosaic-9, yolov3-12, squeezenet1.0-12, vgg16-12, bvlcalexnet-8, bertsquad-12, udnie-8, shufflenet-7, inception-v2-8, zfnet512-7, googlenet-7, resnet101-v1-7, densenet-12, arcfaceresnet100-8, rcnn-ilsvrc13-7, roberta-base-11, candy-8, resnet18-v2-7, emotion-ferplus-8, tiny-yolov3-11, pointilism-9, googlenet-9, resnet50-v2-7, inception-v1-8, shufflenet-6, tinyyolov2-7, caffenet-9, squeezenet1.0-8, inception-v1-12, inception-v1-6, inception-v1-7, resnet18-v1-7, gpt2-10, rain-princess-8, resnet50-v1-7, squeezenet1.0-6, resnet34-v2-7, resnet50-caffe2-v1-7, vgg16-bn-7, efficientnet-lite4-11, mnist-8, zfnet512-9, bertsquad-10, yolov3-10, inception-v1-9, shufflenet-v2-12, resnet50-caffe2-v1-8, resnet101-v2-7, rcnn-ilsvrc13-8, tinyyolov2-8, resnet152-v1-7, bvlcalexnet-7, squeezenet1.0-7, bvlcalexnet-6, resnet34-v1-7, gpt2-lm-head-10, densenet-8, resnet50-caffe2-v1-9, emotion-ferplus-7, mosaic-8, shufflenet-9, inception-v2-7, rain-princess-9, googlenet-6, googlenet-8, caffenet-7, resnet50-v1-12, retinanet-9, super-resolution-10, roberta-sequence-classification-9, vgg19-caffe2-8, zfnet512-8, zfnet512-12, udnie-9, googlenet-12, mobilenetv2-7, squeezenet1.0-9, shufflenet-8, googlenet-3, yolov4, rcnn-ilsvrc13-9, densenet-9, vgg19-caffe2-6, resnet50-caffe2-v1-6, vgg19-caffe2-9, squeezenet1.0-3, bvlcalexnet-12, inception-v2-9, caffenet-6, pointilism-8, densenet-6, shufflenet-v2-10, vgg19-caffe2-7, rcnn-ilsvrc13-6, resnet152-v2-7, squeezenet1.1-7, densenet-7, candy-9, caffenet-12
22 models failed: fcn-resnet50-12, ssd_mobilenet_v1_12, bidaf-9, fcn-resnet101-11, FasterRCNN-10, zfnet512-6, ssd-12, MaskRCNN-12, ssd-10, ssd_mobilenet_v1_10, vgg16-7, MaskRCNN-10, mobilenetv2-12, inception-v2-6, FasterRCNN-12, vgg19-bn-7, ResNet101-DUC-7, bertsquad-8, vgg19-7, fcn-resnet50-11, t5-encoder-12, t5-decoder-with-lm-head-12
For some of the failures, like T5, I can help move them over to the successful column for our users. Once I find some time, I'm going to make a data-prep script based on the onnxt5 benchmark notebook to give the community good data prepared by onnxruntime.
Great. Thanks!
I am closing this memo because now we can see a live status on the homepage of https://github.com/onnx/onnx-mlir.
This issue is meant as an ongoing discussion about the onnx-mlir coverage of the ONNX model zoo and any other models of interest. Some of the models we have tried and issues found are below.
Supported:
- [x] MNIST
- [x] ResNet
In progress:
- [ ] ShuffleNet: slight result inconsistency being investigated, but all operations are supported
Missing Ops:
- [ ] DenseNet: missing GlobalAveragePool operation
- [ ] AlexNet: missing LRN operation
- [ ] SqueezeNet: missing Dropout operation
- [ ] CaffeNet: missing LRN operation
Errors: bertsquad8:
bidaf: