We are currently running 1,000 models nightly in CI on the cpu and mi300 backends. Two main items need to be accomplished to add the Hugging Face non-CNN and MIGraphX models from the tracker (https://github.com/nod-ai/SHARK-Turbine/issues/564):
Port the CI over to alt_e2eshark (initial work is ready here: https://github.com/nod-ai/SHARK-TestSuite/tree/alt-merge-reports/alt_e2eshark). The main reason for the port is that alt_e2eshark allows better configuration for different model sources, external weights, and dynamic dims.
Add CI configuration to run the two new categories of models. We are limited to 8 shards of models at the moment, so we need to rework how we shard the models now that we are adding over 1,200 more.
A blocker for this is the torch-mlir release build, which is currently failing (https://github.com/llvm/torch-mlir-release/actions). We need it passing again so that the CI can pick up the latest torch-mlir version, which alt_e2eshark requires since it uses import/lowering from that project.
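As a rough illustration of the sharding rework, the sketch below distributes a model list round-robin across a fixed shard count. The function and variable names here are hypothetical, not from the test suite, and the real CI may instead balance shards by expected runtime rather than by count; this only shows the shape of the problem at ~2,200 models and 8 shards.

```python
# Hypothetical sketch: round-robin sharding of a model list across N CI shards.
# shard_models and the model names are illustrative, not part of alt_e2eshark.

def shard_models(models, num_shards):
    """Distribute models round-robin so shard sizes differ by at most one."""
    shards = [[] for _ in range(num_shards)]
    for i, model in enumerate(models):
        shards[i % num_shards].append(model)
    return shards

# With roughly 2,200 models and the current limit of 8 shards,
# each shard carries about 275 models per nightly run.
models = [f"model_{i}" for i in range(2200)]
shards = shard_models(models, 8)
```

A runtime-weighted variant (sorting models by expected duration and assigning each to the currently lightest shard) would likely balance wall-clock time better, since model test times vary widely.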