nod-ai / SHARK-TestSuite

Temporary home of a test suite we are evaluating
Apache License 2.0

ConvNeXt_vaiq_int8 onnx model support #259

Closed: AmosLewis closed this 2 weeks ago

AmosLewis commented 3 weeks ago

python ./run.py --torchmlirbuild ../../torch-mlir/build --tolerance 0.001 0.001 --cachedir ./huggingface_cache --ireebuild ../../iree-build -f onnx -g models --mode onnx --report --torchtolinalg --tests onnx/models/ConvNeXt_vaiq_int8
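Before the individual stages are examined, it is worth confirming the model file itself is well formed. A minimal sanity-check sketch, assuming the onnx Python package is available in the harness environment (the model path is the one used in commands.log below):

```python
# Sanity-check the ONNX model before running the e2eshark pipeline.
# Path taken from commands.log below; adjust for your checkout.
import onnx

model = onnx.load(
    "/home/chi/src/SHARK-TestSuite/e2eshark/onnx/models/ConvNeXt_vaiq_int8/model.onnx"
)
onnx.checker.check_model(model)  # raises if the model is structurally invalid
print([(opset.domain or "ai.onnx", opset.version) for opset in model.opset_import])
print([i.name for i in model.graph.input], "->", [o.name for o in model.graph.output])
```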

| tests | model-run | onnx-import | torch-mlir | iree-compile | inference |
| ----- | --------- | ----------- | ---------- | ------------ | --------- |
| onnx/models/ConvNeXt_vaiq_int8 | passed | passed | passed | failed | notrun |
iree: candidate-20240610.920

torch-mlir:
commit 7e0e23c66820d1db548103acbdf1337f701dc5a3 (upstream/main)
Author: Sambhav Jain <sambhav.jain@getcruise.com>
Date:   Sun Jun 9 00:32:49 2024 -0700

    Test custom op import with symbolic shapes (#3431)

    Tests the basic constructs of registering a custom op and its abstract
    implementations (with FakeTensors) in python, going through TorchDynamo
    export, followed by importing the shape expressions in the Torch
    dialect.

    Also fixes the importer, where previously the symbolic bind op insertion
    was not gated in one place.

commands.log

PYTHONPATH=/home/chi/src/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir python runmodel.py  --todtype default --mode onnx --outfileprefix ConvNeXt_vaiq_int8 1> model-run.log 2>&1
PYTHONPATH=/home/chi/src/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir python -m torch_mlir.tools.import_onnx /home/chi/src/SHARK-TestSuite/e2eshark/onnx/models/ConvNeXt_vaiq_int8/model.onnx -o ConvNeXt_vaiq_int8.default.torch-onnx.mlir 1> onnx-import.log 2>&1
/home/chi/src/torch-mlir/build/bin/torch-mlir-opt -pass-pipeline='builtin.module(func.func(convert-torch-onnx-to-torch),torch-lower-to-backend-contract,func.func(cse,canonicalize),torch-backend-to-linalg-on-tensors-backend-pipeline)' ConvNeXt_vaiq_int8.default.torch-onnx.mlir > ConvNeXt_vaiq_int8.default.onnx.linalg.mlir 2>torch-mlir.log
/home/chi/src/iree-build/tools/iree-compile --iree-input-demote-i64-to-i32 --iree-hal-target-backends=llvm-cpu  ConvNeXt_vaiq_int8.default.onnx.linalg.mlir > ConvNeXt_vaiq_int8.default.vmfb 2>iree-compile.log
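For triage outside run.py, the logged stages can be replayed one at a time. A minimal Python sketch of the import and lowering stages using subprocess, with the same paths and flags as commands.log above (adjust both checkout paths for your machine):

```python
# Replay the onnx-import and torch-mlir stages from commands.log.
import os
import subprocess

env = dict(
    os.environ,
    PYTHONPATH="/home/chi/src/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir",
)

# Stage: onnx-import (ONNX protobuf -> torch-onnx MLIR)
subprocess.run(
    [
        "python", "-m", "torch_mlir.tools.import_onnx",
        "/home/chi/src/SHARK-TestSuite/e2eshark/onnx/models/ConvNeXt_vaiq_int8/model.onnx",
        "-o", "ConvNeXt_vaiq_int8.default.torch-onnx.mlir",
    ],
    env=env, check=True,
)

# Stage: torch-mlir (torch-onnx dialect -> linalg-on-tensors)
with open("ConvNeXt_vaiq_int8.default.onnx.linalg.mlir", "w") as out:
    subprocess.run(
        [
            "/home/chi/src/torch-mlir/build/bin/torch-mlir-opt",
            "-pass-pipeline=builtin.module("
            "func.func(convert-torch-onnx-to-torch),"
            "torch-lower-to-backend-contract,"
            "func.func(cse,canonicalize),"
            "torch-backend-to-linalg-on-tensors-backend-pipeline)",
            "ConvNeXt_vaiq_int8.default.torch-onnx.mlir",
        ],
        stdout=out, check=True,
    )
```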

iree-compile.log

failed to translate executables
failed to translate executables
ConvNeXt_vaiq_int8.default.onnx.linalg.mlir:979:12: error: 'func.func' op exceeded stack allocation limit of 32768 bytes for function. Got 401408 bytes
    %106 = linalg.generic {indexing_maps = [#map, #map1], iterator_types = ["parallel", "parallel", "parallel", "parallel"]} ins(%105 : tensor<1x56x56x512xf32>) outs(%98 : tensor<1x56x56x512xi8>) {
           ^
ConvNeXt_vaiq_int8.default.onnx.linalg.mlir:24:3: note: called from
  func.func @torch_jit(%arg0: tensor<1x3x224x224xf32>) -> tensor<1x1000xf32> {
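The linalg.generic at :979 converts a 1x56x56x512 f32 tensor to i8, and the 401408-byte stack allocation it requires is roughly 12x the llvm-cpu backend's 32768-byte per-function limit. To confirm the limit is the only blocker, one can re-run just the iree-compile stage with a larger stack budget. This is a sketch only: --iree-llvmcpu-stack-allocation-limit is an assumed flag name for this IREE build and should be verified against iree-compile --help before use.

```python
# Hypothetical retry of the failing stage with a raised stack budget.
# --iree-llvmcpu-stack-allocation-limit is an ASSUMED flag name; verify
# it exists in your build via `iree-compile --help`.
import subprocess

subprocess.run(
    [
        "/home/chi/src/iree-build/tools/iree-compile",
        "--iree-input-demote-i64-to-i32",
        "--iree-hal-target-backends=llvm-cpu",
        "--iree-llvmcpu-stack-allocation-limit=524288",  # assumed flag
        "ConvNeXt_vaiq_int8.default.onnx.linalg.mlir",
        "-o", "ConvNeXt_vaiq_int8.default.vmfb",
    ],
    check=True,
)
```

Even if this compiles, raising the limit only sidesteps the check; the durable fix is a codegen change that tiles the quantize op into buffers small enough for the default budget.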

ConvNeXt_vaiq_int8.default.onnx.linalg.elide.mlir

AmosLewis commented 2 weeks ago

Fixed; see the 2024-06-24 status report: https://github.com/nod-ai/e2eshark-reports/blob/main/2024-06-24/onnx_reports/statusreport.md