microsoft / Llama-2-Onnx


failed:Protobuf parsing failed #19

Open zren18 opened 1 year ago

zren18 commented 1 year ago

When I try to run

python MinimumExample/Example_ONNX_LlamaV2.py --onnx_file 7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx --embedding_file 7B_FT_float16/embeddings.pth --tokenizer_path tokenizer.model --prompt "What is the lightest element?"

it fails with:

onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from 7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx failed: Protobuf parsing failed.

I also tried onnx.checker.check_model(), which raises:

onnx.onnx_cpp2py_export.checker.ValidationError: Unable to parse proto from file: /data/renzhen/Llama-2-Onnx/7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx. Please check if it is a valid protobuf file of proto.
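A quick way to sanity-check what is actually on disk before suspecting the runtime (just a sketch; the paths match the command above):

ls -lh 7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx      # how big is the file that fails to parse?
head -c 200 7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx  # a valid .onnx file starts with binary protobuf data; readable text here means the file is not a real model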

JoshuaElsdon commented 1 year ago

Hello, I have tried the command as listed, it works correctly on my end. Could you provide some details about what version of ONNX you are using, and what operating system you are using?
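For example, something along these lines would print the versions from the environment you are running in (assuming both packages import cleanly):

python -c "import onnx, onnxruntime, platform; print(onnx.__version__, onnxruntime.__version__, platform.platform())"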

adarshxs commented 1 year ago

I get the same error

Linux 23fe32d5f4bf 5.4.0-72-generic #80-Ubuntu

CUDA: 12

onnx version: 1.13.0
onnxruntime-gpu version: 1.15.1

root@23fe32d5f4bf:/workspace/Llama-2-Onnx# python MinimumExample/Example_ONNX_LlamaV2.py --onnx_file 7B_float16/ONNX/LlamaV2_7B_float16.onnx --embedding_file 7B_float16/embeddings.pth --tokenizer_path tokenizer.model --prompt "What is the lightest element?"

/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py:65: UserWarning: Specified provider 'DmlExecutionProvider' is not in available provider names. Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
warnings.warn()

Traceback (most recent call last):
  File "MinimumExample/Example_ONNX_LlamaV2.py", line 166, in <module>
    response = run_onnx_llamav2(
  File "MinimumExample/Example_ONNX_LlamaV2.py", line 47, in run_onnx_llamav2
    llm_session = onnxruntime.InferenceSession(
  File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 383, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 424, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from 7B_float16/ONNX/LlamaV2_7B_float16.onnx failed: Protobuf parsing failed.

Anindyadeep commented 1 year ago

Hello, I am also facing the same problem. Here are my specs:

OS: Pop Os
GPU: NVIDIA GeForce RTX 3060 Mobile / Max-Q (6GB)
Memory: 16 GB

------------------

onnxruntime version: 1.15.1
onnxruntime-gpu version: 1.15.1

I have cloned the repo and also got access to the submodules. Here is the command I ran:

python MinimumExample/Example_ONNX_LlamaV2.py --onnx_file 7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx --embedding_file 7B_FT_float16/embeddings.pth --tokenizer_path tokenizer.model --prompt "What is the lightest element?"

Here is the result I got:

"
/home/anindyadeep/anaconda3/envs/llm/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:65: UserWarning: Specified provider 'DmlExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
  warnings.warn(
Traceback (most recent call last):
  File "/home/anindyadeep/workspace/llama2-onnx/Llama-2-Onnx/MinimumExample/Example_ONNX_LlamaV2.py", line 166, in <module>
    response = run_onnx_llamav2(
  File "/home/anindyadeep/workspace/llama2-onnx/Llama-2-Onnx/MinimumExample/Example_ONNX_LlamaV2.py", line 47, in run_onnx_llamav2
    llm_session = onnxruntime.InferenceSession(
  File "/home/anindyadeep/anaconda3/envs/llm/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 383, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/anindyadeep/anaconda3/envs/llm/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 424, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from 7B_FT_float16/ONNX/LlamaV2_7B_FT_float16.onnx failed:Protobuf parsing failed.

adarshxs commented 1 year ago

@Anindyadeep can you check the file size of the submodule you chose? (It may be surprisingly small for model weights)

Try running git lfs pull inside your chosen submodule. That should download the actual model weights instead of just the pointers to them. The reason is that running:

git submodule init <chosen_submodule> 
git submodule update

might not have downloaded the actual model weights, but only pointers to the files stored on LFS.
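Something along these lines should confirm and fix it (run inside whichever submodule you actually downloaded; 7B_FT_float16 is just the example used in this thread):

cd 7B_FT_float16      # or whichever submodule you are using
du -sh .              # a few MB total means you only have LFS pointers, not the weights
git lfs install       # one-time setup, in case the LFS hooks were never configured
git lfs pull          # replaces the pointer files with the real weights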

Anindyadeep commented 1 year ago

@adarshxs Thanks for the quick reply. So here's the thing: I updated the git submodules for the 7B_float16 and 7B_FT_float16 folders, and both show only 2.5M, while all the empty submodules (the ones I have not updated) show 4.0K.

After that I looked inside 7B_float16/ONNX and it does contain files like

onnx__MatMul_21420
transformer.block_list.1.proj_norm.weight

(two examples of the kinds of files inside the ONNX folder). At first I thought those were binary files, but they open as plain text and contain the following:

FILE: onnx__MatMul_21420

version https://git-lfs.github.com/spec/v1
oid sha256:d661398b0bb3b10fad9d807e7b6062f9e04ac43db9c9e26bf3641baa7b0d92e8
size 90177536

FILE: transformer.block_list.1.proj_norm.weight

version https://git-lfs.github.com/spec/v1
oid sha256:5540f5f085777ef016b745d27a504c94b51a4813f9c5d1ab8ec609d1afaab6fa
size 8192

Although I got access, it now looks like the files were not actually downloaded when I updated the submodules.
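One way to confirm that these are still LFS pointers (assuming git-lfs is installed) is to run, inside the submodule:

git lfs ls-files                   # '-' after the hash means pointer only, '*' means the content is downloaded
head -n 1 ONNX/onnx__MatMul_21420  # prints the 'version https://git-lfs.github.com/spec/v1' line if the file is still a pointer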

adarshxs commented 1 year ago

@Anindyadeep Yes I had the same issue. Make sure you have git lfs installed and run the command git lfs pull inside the submodule you want. I suppose these are pointers to the actual weights:

version https://git-lfs.github.com/spec/v1
oid sha256:d661398b0bb3b10fad9d807e7b6062f9e04ac43db9c9e26bf3641baa7b0d92e8
size 90177536

Running git lfs pull downloaded the weights and fixed this issue for me.

Anindyadeep commented 1 year ago

Yes, the funny part is that while I was writing up my issue I also found the root cause. So here are my learnings:

  1. Make sure git-lfs is installed; otherwise, even though you got access, the large files will not actually be downloaded.
  2. Make sure protobuf is installed.
  3. Make sure onnxruntime is installed.

And yes, doing that installs everything we need; a quick setup sketch is below. Thanks @adarshxs for the head start.
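Putting it together, a rough setup sketch (Ubuntu package names and the 7B_FT_float16 path are just examples, adjust to your system and chosen submodule):

sudo apt-get install git-lfs        # make git-lfs available
git lfs install                     # enable the LFS hooks for your user
pip install protobuf onnxruntime    # or onnxruntime-gpu for CUDA
git submodule init 7B_FT_float16
git submodule update
cd 7B_FT_float16 && git lfs pull    # fetch the real weights instead of pointers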