zjgarvey opened 2 months ago
@zjgarvey added https://github.com/llvm/torch-mlir/issues/3647 to some of the models, as we need that along with https://github.com/iree-org/iree/issues/18229.
cc @lialan as well. Can you coordinate with Zach to track CPU codegen issues?
Also adding https://github.com/llvm/torch-mlir/issues/3651, which needs to be done to support a broad range of models.
This issue will be used to track compilation failures for migraphx models on CPU and GPU. Compile failures for each model should have a link to an issue with a smaller reproducer in the notes column.
Notes:
- migraphx_ORT__bert_base_cased_1 fails on CPU but passes on GPU. Other adjacent models fail for similar reasons on both. Very odd.
- migraphx_sdxl__unet__model and migraphx_ORT__bert_large_uncased_1 are excluded because they cause a crash (likely OOM).
CPU Status Table
The following report was generated with IREE compiler version iree-org/iree@caacf6c8015b4344b2d9b4a82c2fddc015693831 and torch-mlir version llvm/torch-mlir@2665ed343b19713ba5c1c555b2366a93de8b9d2b.
Passing Summary
Fail Summary
Test Run Detail
Test was run with the following arguments: Namespace(device='local-task', backend='llvm-cpu', iree_compile_args=None, mode='cl-onnx-iree', torchtolinalg=True, stages=None, skip_stages=None, benchmark=False, load_inputs=False, groups='all', test_filter='migraphx', testsfile=None, tolerance=None, verbose=True, rundirectory='test-run', no_artifacts=False, cleanup='0', report=True, report_file='mi_10_10.md')
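For reference, the Namespace above corresponds roughly to a command-line invocation like the one sketched below. This is a hypothetical reconstruction: the runner script name (run.py) and the flag spellings are assumed from the argparse dest names in the Namespace, and may not match the actual test harness exactly.

```shell
# Hypothetical reconstruction of the CPU test run; flag names are inferred
# from the argparse Namespace above and may differ from the real runner.
python run.py \
  --device local-task \
  --backend llvm-cpu \
  --mode cl-onnx-iree \
  --torchtolinalg \
  --groups all \
  --test-filter migraphx \
  --verbose \
  --rundirectory test-run \
  --cleanup 0 \
  --report \
  --report-file mi_10_10.md
```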
OLD STATUS (will update and migrate issues to the current table)
GPU Status Table
Last generated with pip-installed IREE tools at version
Summary
Test Run Detail
Test was run with the following arguments: Namespace(device='hip://1', backend='rocm', iree_compile_args=['iree-hip-target=gfx942'], mode='onnx-iree', torchtolinalg=False, stages=None, skip_stages=None, load_inputs=False, groups='all', test_filter='migraphx', tolerance=None, verbose=True, rundirectory='test-run', no_artifacts=False, report=True, report_file='9_3_migraphx.md')
Note: the GPU run is missing the sd model (it runs out of memory and kills the test). This probably happens during native inference, so it might need some investigation.
Performance data with iree-benchmark-module on GPU
Summary
Test Run Detail
Test was run with the following arguments: Namespace(device='local-task', backend='llvm-cpu', iree_compile_args=None, mode='cl-onnx-iree', torchtolinalg=False, stages=None, skip_stages=None, benchmark=True, load_inputs=False, groups='all', test_filter='migraphx', testsfile=None, tolerance=None, verbose=True, rundirectory='test-run', no_artifacts=False, cleanup='0', report=True, report_file='report.md')
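The performance numbers above were collected with iree-benchmark-module. A minimal sketch of such an invocation is shown below; the module path, entry function name, and input shape are placeholders, not values taken from the report.

```shell
# Sketch of benchmarking one compiled module with iree-benchmark-module.
# model.vmfb, the function name, and the input shape are hypothetical.
iree-benchmark-module \
  --module=model.vmfb \
  --device=local-task \
  --function=main \
  --input=1x384xi64=0
```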