[Open] Aiurus opened this issue 2 months ago
Can you try again after removing "evaluator": "common_evaluator" from the template? There might be an issue with the evaluator, but it is not required.
If it still fails, please share the full log from the run.
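As a concrete sketch, the change amounts to deleting one entry from the config. The placement under "engine" is an assumption based on typical Olive configs and may differ in the actual template:

```json
{
  "engine": {
    "evaluator": "common_evaluator"
  }
}
```

Deleting the "evaluator" line means the run skips evaluation; it is optional for producing the optimized model.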
Hi Jambay, I removed "evaluator": "common_evaluator" from the template, and it works well. This is the model architecture built from this config.
I want to remove the "Unsqueeze" layer from this model. How can I do this? Please help me.
We don't provide an option to remove this node. It was added by onnxruntime-extensions in this PR: https://github.com/microsoft/onnxruntime-extensions/pull/681
Please install the previous version of onnxruntime-extensions (0.10.1) and rerun the workflow. You can add "clean_run_cache": true
at the same level as https://github.com/microsoft/Olive/blob/80e1fa9fe97b655a451ea9364f7f6a5794cd74e7/examples/whisper/whisper_template.json#L105 to only rerun this pass.
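As a sketch of the placement, a pass entry with the flag added would look roughly like this. Only "clean_run_cache" comes from the suggestion above; the pass name and type are illustrative assumptions and should be taken from your own template:

```json
{
  "passes": {
    "insert_beam_search": {
      "type": "InsertBeamSearch",
      "clean_run_cache": true
    }
  }
}
```

With "clean_run_cache": true on a pass, Olive discards that pass's cached output so only it is rerun, while other cached passes are reused.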
Describe the bug
I tried to optimize the whisper-tiny.en model without audio_decoder, but an error occurred.

To Reproduce

Expected behavior
When I tried with audio_decoder, the code works well.

Olive logs
[olive_evaluator.py:236:generate_metric_user_config_with_model_io] Model input shapes are not static. Cannot use inferred input shapes for creating dummy data. This will cause an error when creating dummy data for tuning.

Other information