wangzhaode / llm-export

llm-export can export LLM models to ONNX.
Apache License 2.0

Qwen2-7B-Instruct to MNN conversion fails: google.protobuf.message.DecodeError: Error parsing message #49

Open · pcyan3166 opened this issue 3 months ago

pcyan3166 commented 3 months ago

Windows 11, Python version 3.9.19

Other dependency versions:

(llmexport) PS C:\Users\yanpe\work\mnn\llm-export> conda list
# packages in environment at C:\tools\Miniconda3\envs\llmexport:

Name                            Version        Build            Channel
accelerate                      0.31.0         pypi_0           pypi
ca-certificates                 2024.3.11      haa95532_0
certifi                         2024.6.2       pypi_0           pypi
charset-normalizer              3.3.2          pypi_0           pypi
colorama                        0.4.6          pypi_0           pypi
coloredlogs                     15.0.1         pypi_0           pypi
filelock                        3.15.4         pypi_0           pypi
flatbuffers                     24.3.25        pypi_0           pypi
fsspec                          2024.6.1       pypi_0           pypi
huggingface-hub                 0.23.4         pypi_0           pypi
humanfriendly                   10.0           pypi_0           pypi
idna                            3.7            pypi_0           pypi
jinja2                          3.1.4          pypi_0           pypi
libffi                          3.4.4          hd77b12b_1
markupsafe                      2.1.5          pypi_0           pypi
mnn                             2.8.1          pypi_0           pypi
mpmath                          1.3.0          pypi_0           pypi
networkx                        3.2.1          pypi_0           pypi
numpy                           1.25.2         pypi_0           pypi
onnx                            1.16.1         pypi_0           pypi
onnxruntime                     1.15.1         pypi_0           pypi
onnxslim                        0.1.31         pypi_0           pypi
openssl                         3.0.14         h827c3e9_0
packaging                       24.1           pypi_0           pypi
peft                            0.11.1         pypi_0           pypi
pip                             24.0           py39haa95532_0
protobuf                        5.27.2         pypi_0           pypi
psutil                          6.0.0          pypi_0           pypi
pyreadline3                     3.4.1          pypi_0           pypi
python                          3.9.19         h1aa4202_1
pyyaml                          6.0.1          pypi_0           pypi
regex                           2024.5.15      pypi_0           pypi
requests                        2.32.3         pypi_0           pypi
safetensors                     0.4.3          pypi_0           pypi
sentencepiece                   0.1.99         pypi_0           pypi
setuptools                      69.5.1         py39haa95532_0
sqlite                          3.45.3         h2bbff1b_0
sympy                           1.12.1         pypi_0           pypi
tokenizers                      0.15.2         pypi_0           pypi
torch                           2.0.1          pypi_0           pypi
tqdm                            4.66.4         pypi_0           pypi
transformers                    4.37.0         pypi_0           pypi
transformers-stream-generator   0.0.4          pypi_0           pypi
typing-extensions               4.12.2         pypi_0           pypi
tzdata                          2024a          h04d1e81_0
urllib3                         2.2.2          pypi_0           pypi
vc                              14.2           h2eaa2aa_4
vs2015_runtime                  14.29.30133    h43f2093_4
wheel                           0.43.0         py39haa95532_0

(llmexport) PS C:\Users\yanpe\work\mnn\llm-export> python .\llm_export.py --type Qwen2-7B-Instruct --path ..\Qwen2-7B-Instruct\ --export_split --export_token --export_mnn --onnx_path ..\qw2onnx\ --mnn_path ..\qw2mnn\
No CUDA runtime is found, using CUDA_HOME='C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA'
The device support i8sdot:0, support fp16:0, support i8mm: 0
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [00:19<00:00, 4.83s/it]
============== Diagnostic Run torch.onnx.export version 2.0.1+cpu ==============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

Traceback (most recent call last):
  File "C:\Users\yanpe\work\mnn\llm-export\llm_export.py", line 1420, in <module>
    llm_exporter.export_embed()
  File "C:\Users\yanpe\work\mnn\llm-export\llm_export.py", line 310, in export_embed
    slim(onnx_model, output_model=onnx_model)
  File "C:\tools\Miniconda3\envs\llmexport\lib\site-packages\onnxslim\cli\_main.py", line 128, in slim
    model = optimize(model, skip_fusion_patterns)
  File "C:\tools\Miniconda3\envs\llmexport\lib\site-packages\onnxslim\core\slim.py", line 111, in optimize
    model = optimize_model(graph, skip_fusion_patterns)
  File "C:\tools\Miniconda3\envs\llmexport\lib\site-packages\onnxslim\core\optimizer.py", line 895, in optimize_model
    model = gs.export_onnx(graph)
  File "C:\tools\Miniconda3\envs\llmexport\lib\site-packages\onnxslim\onnx_graphsurgeon\exporters\onnx_exporter.py", line 358, in export_onnx
    onnx_graph = OnnxExporter.export_graph(
  File "C:\tools\Miniconda3\envs\llmexport\lib\site-packages\onnxslim\onnx_graphsurgeon\exporters\onnx_exporter.py", line 312, in export_graph
    return onnx.helper.make_graph(
  File "C:\tools\Miniconda3\envs\llmexport\lib\site-packages\onnx\helper.py", line 234, in make_graph
    graph.initializer.extend(initializer)
google.protobuf.message.DecodeError: Error parsing message
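The DecodeError is raised while graph.initializer.extend(initializer) copies the tensor protos, which is where a mismatch between the installed protobuf runtime and the one onnx was built against typically shows up. A quick way to record the versions implicated in this call stack (a minimal diagnostic sketch, standard __version__ attributes only):

    import onnx
    import onnxruntime
    import google.protobuf

    # Print the three packages involved in the make_graph / DecodeError call stack.
    print("onnx:", onnx.__version__)
    print("onnxruntime:", onnxruntime.__version__)
    print("protobuf:", google.protobuf.__version__)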

inisis commented 3 months ago

First, I think your onnx and onnxruntime versions are mismatched; you can upgrade onnxruntime to 1.18 in order to use onnx 1.16. Second, your protobuf version is too high; I recommend version 4.25.3.
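For reference, applying both suggestions inside the conda environment would look roughly like this (a sketch of the pins suggested above, run from the same PowerShell prompt as the other commands in this thread):

    (llmexport) PS C:\Users\yanpe\work\mnn\llm-export> pip install onnxruntime==1.18.1 protobuf==4.25.3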

pcyan3166 commented 3 months ago

First, I think your onnx and onnxruntime versions are mismatched; you can upgrade onnxruntime to 1.18 in order to use onnx 1.16. Second, your protobuf version is too high; I recommend version 4.25.3.

Thanks for your reply.

It still fails with the same error:

(llmexport) PS C:\Users\yanpe\work\mnn\llm-export> conda list
# packages in environment at C:\tools\Miniconda3\envs\llmexport:

Name                            Version        Build            Channel
accelerate                      0.31.0         pypi_0           pypi
ca-certificates                 2024.3.11      haa95532_0
certifi                         2024.6.2       pypi_0           pypi
charset-normalizer              3.3.2          pypi_0           pypi
colorama                        0.4.6          pypi_0           pypi
coloredlogs                     15.0.1         pypi_0           pypi
filelock                        3.15.4         pypi_0           pypi
flatbuffers                     24.3.25        pypi_0           pypi
fsspec                          2024.6.1       pypi_0           pypi
huggingface-hub                 0.23.4         pypi_0           pypi
humanfriendly                   10.0           pypi_0           pypi
idna                            3.7            pypi_0           pypi
jinja2                          3.1.4          pypi_0           pypi
libffi                          3.4.4          hd77b12b_1
markupsafe                      2.1.5          pypi_0           pypi
mnn                             2.8.3          pypi_0           pypi
mpmath                          1.3.0          pypi_0           pypi
networkx                        3.2.1          pypi_0           pypi
numpy                           1.25.2         pypi_0           pypi
onnx                            1.16.1         pypi_0           pypi
onnxruntime                     1.18.1         pypi_0           pypi
onnxslim                        0.1.31         pypi_0           pypi
openssl                         3.0.14         h827c3e9_0
packaging                       24.1           pypi_0           pypi
peft                            0.11.1         pypi_0           pypi
pip                             24.0           py39haa95532_0
protobuf                        4.25.3         pypi_0           pypi
psutil                          6.0.0          pypi_0           pypi
pyreadline3                     3.4.1          pypi_0           pypi
python                          3.9.19         h1aa4202_1
pyyaml                          6.0.1          pypi_0           pypi
regex                           2024.5.15      pypi_0           pypi
requests                        2.32.3         pypi_0           pypi
safetensors                     0.4.3          pypi_0           pypi
sentencepiece                   0.1.99         pypi_0           pypi
setuptools                      69.5.1         py39haa95532_0
sqlite                          3.45.3         h2bbff1b_0
sympy                           1.12.1         pypi_0           pypi
tokenizers                      0.15.2         pypi_0           pypi
torch                           2.0.1          pypi_0           pypi
tqdm                            4.66.4         pypi_0           pypi
transformers                    4.37.0         pypi_0           pypi
transformers-stream-generator   0.0.4          pypi_0           pypi
typing-extensions               4.12.2         pypi_0           pypi
tzdata                          2024a          h04d1e81_0
urllib3                         2.2.2          pypi_0           pypi
vc                              14.2           h2eaa2aa_4
vs2015_runtime                  14.29.30133    h43f2093_4
wheel                           0.43.0         py39haa95532_0

(llmexport) PS C:\Users\yanpe\work\mnn\llm-export> python .\llm_export.py --type Qwen2-7B-Instruct --path ..\Qwen2-7B-Instruct\ --export_split --export_token --export_mnn --onnx_path ..\qw2onnx\ --mnn_path ..\qw2mnn
No CUDA runtime is found, using CUDA_HOME='C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA'
The device support i8sdot:0, support fp16:0, support i8mm: 0
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [00:17<00:00, 4.46s/it]
============== Diagnostic Run torch.onnx.export version 2.0.1+cpu ==============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

Traceback (most recent call last):
  File "C:\Users\yanpe\work\mnn\llm-export\llm_export.py", line 1420, in <module>
    llm_exporter.export_embed()
  File "C:\Users\yanpe\work\mnn\llm-export\llm_export.py", line 310, in export_embed
    slim(onnx_model, output_model=onnx_model)
  File "C:\tools\Miniconda3\envs\llmexport\lib\site-packages\onnxslim\cli\_main.py", line 128, in slim
    model = optimize(model, skip_fusion_patterns)
  File "C:\tools\Miniconda3\envs\llmexport\lib\site-packages\onnxslim\core\slim.py", line 111, in optimize
    model = optimize_model(graph, skip_fusion_patterns)
  File "C:\tools\Miniconda3\envs\llmexport\lib\site-packages\onnxslim\core\optimizer.py", line 895, in optimize_model
    model = gs.export_onnx(graph)
  File "C:\tools\Miniconda3\envs\llmexport\lib\site-packages\onnxslim\onnx_graphsurgeon\exporters\onnx_exporter.py", line 358, in export_onnx
    onnx_graph = OnnxExporter.export_graph(
  File "C:\tools\Miniconda3\envs\llmexport\lib\site-packages\onnxslim\onnx_graphsurgeon\exporters\onnx_exporter.py", line 312, in export_graph
    return onnx.helper.make_graph(
  File "C:\tools\Miniconda3\envs\llmexport\lib\site-packages\onnx\helper.py", line 234, in make_graph
    graph.initializer.extend(initializer)
google.protobuf.message.DecodeError: Error parsing message
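One way to narrow this down is to reproduce the failing call outside llm_export.py by invoking onnxslim directly on the ONNX file that export_embed produces. A hypothetical repro sketch; the embedding.onnx filename is a guess at what gets written under --onnx_path, so substitute the actual file name:

    # Hypothetical isolation test: call slim() the same way llm_export.py does in
    # export_embed (see the "slim(onnx_model, output_model=onnx_model)" frame above).
    # "..\qw2onnx\embedding.onnx" is an assumed filename; point it at the real exported model.
    from onnxslim import slim

    model_path = r"..\qw2onnx\embedding.onnx"
    slim(model_path, output_model=model_path)

If this standalone call raises the same DecodeError, the problem lies in the onnx/protobuf stack rather than in llm_export.py itself.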