onnx / optimizer


Optimizer returns an empty model #144

Open pd-intra opened 1 year ago

pd-intra commented 1 year ago

Hello,

I have an issue using this command line: `python -m onnxoptimizer "model.onnx" "model_opti.onnx"`

The script fails at `onnx.checker.check_model(outputfile)` in file `onnxoptimizer/__main__.py` line 85 with the error: `onnx.onnx_cpp2py_export.checker.ValidationError: model with IR version >= 3 must specify opset_import for ONNX`

When I check what's in the output model file, I only find:

```
ir_version: 8
producer_name: "pytorch"
producer_version: "2.1.0"
graph {
}
```
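For reference, this is the kind of check I did (a minimal sketch, assuming the output path from the failing command above):

```python
import onnx

# Dump what actually ended up in the optimizer's output file
# (path assumed from the command above).
m = onnx.load("model_opti.onnx")
print(m.ir_version)          # 8
print(m.producer_name)       # "pytorch"
print(list(m.opset_import))  # [] -> the missing opset_import the checker wants
print(len(m.graph.node))     # 0 -> the graph is empty
```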

Now, for more context:

I took the model "togethercomputer/LLaMA-2-7B-32K" from Hugging Face and converted it with the following code:

```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

CACHE_DIR = r".\cache_dir"

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/LLaMA-2-7B-32K", cache_dir=CACHE_DIR)
model = LlamaForCausalLM.from_pretrained("togethercomputer/LLaMA-2-7B-32K", cache_dir=CACHE_DIR, use_safetensors=False)

prompt = "test"
inputs = tokenizer(prompt, return_tensors="pt")

input_names = ["input_ids"]
output_names = ["output"]
torch.onnx.export(
    model,
    inputs.input_ids,
    r'.\model\llama2_32k_with_weight.onnx',
    export_params=True,
    input_names=input_names,
    output_names=output_names,
    dynamic_axes={'input_ids': {1: 'context_length'}, 'output': {1: 'context_length'}},
)
```

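As a side note, the exported file can be sanity-checked before optimizing (a minimal sketch; for models over 2 GB, `onnx.checker.check_model` has to be given the file path rather than a loaded proto):

```python
import onnx

# Sanity-check the exporter's output before optimizing. A ModelProto
# over protobuf's 2 GB per-message limit cannot be serialized for the
# C++ checker, so the path form is required for a model this size.
path = r'.\model\llama2_32k_with_weight.onnx'
onnx.checker.check_model(path)

model = onnx.load(path)
print(len(model.graph.node), "nodes in the exported graph")
```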

After that I use:

`python -m onnxoptimizer "llama2_32k_with_weight.onnx" "model_opti.onnx"`
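For the record, the equivalent call through the Python API (a sketch; the default pass list is assumed here, which may differ from what the CLI selects):

```python
import onnx
import onnxoptimizer

# Equivalent of the command line above, via the Python API
# (default pass list assumed).
model = onnx.load("llama2_32k_with_weight.onnx")
optimized = onnxoptimizer.optimize(model)
onnx.save(optimized, "model_opti.onnx")
```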

And it fails.

I tried to check where exactly the optimizer fails:

In `onnxoptimizer/__init__.py`:

`model_str = model.SerializeToString()`; the length of `model_str` is 26988224572.

`optimized_model_str = C.optimize(model_str, passes)`; the length of `optimized_model_str` is 20.
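My only guess (an assumption on my part, not something I've confirmed): protobuf refuses to parse a single message larger than 2**31 - 1 bytes, and this model serializes to roughly 27 GB, so the C++ core behind `C.optimize` may be failing to parse the input and handing back a near-empty model. A quick way to check the size against that limit:

```python
import onnx

# Assumption: protobuf cannot parse a single message larger than
# 2**31 - 1 bytes, so a ~27 GB serialized model would be rejected
# by the C++ optimizer core before any pass runs.
model = onnx.load(r'.\model\llama2_32k_with_weight.onnx')
model_str = model.SerializeToString()
print(len(model_str))              # 26988224572 as reported above
print(len(model_str) > 2**31 - 1)  # True -> over the protobuf limit
```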

If someone has an idea, thanks in advance. I have used your package for a long time and this is the first time I've encountered a problem. Love your work!

jibinghu commented 3 months ago

Hey bro, has the trouble been solved?