Closed. fxmarty closed this issue 1 year ago.
Hi. I'm using the CLI command:
optimum-cli export onnx --model openai/whisper-medium model/
and getting the same error:
ValueError: This protobuf of onnx model is too large (>2GB). Call check_model with model path instead.
Environment:
Optimum versions tested: 1.8.2 and 1.8.3.dev0
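For context on the ValueError above: protobuf has a hard 2 GiB message-size limit, so `onnx.checker.check_model` cannot validate a large in-memory proto, but it does accept a file path instead of a loaded model, which sidesteps the limit. A minimal stdlib-only sketch of that size-based dispatch (the helper name and demo file are illustrative, not optimum or onnx code):

```python
import os
import tempfile

# protobuf refuses to serialize/deserialize messages over 2 GiB, which is
# why check_model raises on large in-memory models and asks for a path.
PROTOBUF_MESSAGE_LIMIT = 2**31  # 2 GiB

def needs_path_based_check(model_path: str) -> bool:
    """Illustrative helper (not optimum/onnx code): True when the serialized
    model exceeds protobuf's message limit, i.e. check_model should be given
    the file path rather than a loaded ModelProto."""
    return os.path.getsize(model_path) >= PROTOBUF_MESSAGE_LIMIT

# demo with a tiny temporary file
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * 1024)
    tiny = f.name
print(needs_path_based_check(tiny))  # a 1 KiB file is well under the limit
os.unlink(tiny)
```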
@vilsonrodrigues This is fixed on main, thanks for notifying!
Thanks @fxmarty!!
Dear @fxmarty
I get a similar error using these versions:
optimum version: 1.8.7
transformers version: 4.29.2
Platform: Windows-10-10.0.22621-SP0
Python version: 3.11.4
Huggingface_hub version: 0.15.1
PyTorch version (GPU?): 2.1.0.dev20230611+cu121 (CUDA available: True)
Tensorflow version (GPU?): not installed (CUDA available: NA)
optimum-cli export onnx --model stabilityai/stablelm-tuned-alpha-7b stablelm-tuned-alpha-7b_onnx/
ERROR: Detailed error: Unable to merge decoders. Detailed error: Data of TensorProto ( tensor name: gpt_neox.embed_in.weight_merged_0) should be stored in decoder_model_merged.onnx_data, but it doesn't exist or is not accessible.
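The "should be stored in decoder_model_merged.onnx_data, but it doesn't exist or is not accessible" part of that error generally means the external-data file is not sitting next to the .onnx file it belongs to. A stdlib-only pre-flight sketch of that check (the `<model>.onnx_data` naming is taken from the error message above; this is an illustrative helper, not optimum's code, and real models record the exact external file names inside the proto):

```python
import os
import tempfile

def companion_data_file_present(model_path: str) -> bool:
    """Illustrative check, not optimum code: an ONNX model whose weights were
    saved externally needs its '<name>.onnx_data' companion file in the same
    directory, or loading/merging fails as in the error above."""
    return os.path.exists(model_path + "_data")

# demo: a model file without its companion data file
d = tempfile.mkdtemp()
model = os.path.join(d, "decoder_model_merged.onnx")
open(model, "wb").close()
print(companion_data_file_present(model))  # False: .onnx_data is missing

# after creating the companion file, the check passes
open(model + "_data", "wb").close()
print(companion_data_file_present(model))  # True
```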
Thanks, tracked in https://github.com/huggingface/optimum/issues/1044
System Info
Who can help?
No response
Information
Tasks
examples folder (such as GLUE/SQuAD, ...)

Reproduction
optimum-cli export onnx --model gpt2-large gpt2_onnx
Traceback:
Expected behavior
no error