SafeAILab / EAGLE

Official Implementation of EAGLE-1 (ICML'24) and EAGLE-2 (EMNLP'24)
https://arxiv.org/pdf/2406.16858
Apache License 2.0
780 stars · 79 forks

runtime #23

Closed · qspang closed this issue 7 months ago

qspang commented 8 months ago

(screenshot of the runtime error)

How can I solve this problem? I cloned the project as-is, transferred it to my machine, and ran this evaluation without changing the code; the models were downloaded from Hugging Face. The only changes I made were targeted fixes for the runtime errors, and those were necessary just to get it running at all (without them the error is still reported).

Liyuhui-12 commented 8 months ago

This error indicates that a nan occurred during the initial forward pass (prefill) of the base model, which seems to be unrelated to EAGLE. Could you provide more details about your experimental setup, such as the command you ran?
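To confirm this, you could check the logits coming out of the prefill for nan before any EAGLE code touches them. A minimal stdlib sketch of such a check (the `scores` below are a toy stand-in for real logits; under float16, an overflow to inf followed by inf - inf is one common way nan appears):

```python
import math

def find_nan(logits):
    """Return (row, col) of the first nan in a 2-D list of logits, or None."""
    for i, row in enumerate(logits):
        for j, v in enumerate(row):
            if math.isnan(v):
                return (i, j)
    return None

# fp16-style failure mode: a score overflows to inf, and inf - inf
# (e.g. during a softmax shift) yields nan in the prefill logits.
scores = [[1.0, 2.0], [float("inf") - float("inf"), 0.5]]
print(find_nan(scores))  # → (1, 0)
```

In practice you would run the base model's forward pass alone and apply the same check to its output logits.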

qspang commented 8 months ago

(screenshot of the command)

The above is my running command. llama2-7b-chat was downloaded under the official Meta license, the EAGLE weights were downloaded from the project's link (yuhuili/EAGLE-llama2-chat-7B), and I installed the dependencies per the project requirements: pip install -r requirements.txt. The GPU I use is an NVIDIA 3090. I did not run the project's training step; I wanted to test the acceleration directly with the weights you provided.

qspang commented 8 months ago

When I used llama2-7b-chat-hf instead of llama2-7b-chat, I was surprised to find it ran successfully, but vicuna-7b-v1.3 still failed. I suspect my vicuna-7b-v1.3 weights are not in the Hugging Face format. Since the project uses the transformers library, the weights must be compatible with it, so vicuna-7b-v1.3 probably needs to be converted into the Hugging Face format, like llama2-7b-chat-hf.

Liyuhui-12 commented 8 months ago

Yes, in order to use Hugging Face transformers, you must use the -hf weights.
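A quick way to tell whether a local checkpoint is already in the Hugging Face layout is to look for the files transformers loads from. A small stdlib sketch (the marker file names follow the usual HF checkpoint layout; this is a heuristic, not an official API):

```python
import os

# Files transformers normally expects in a converted checkpoint directory.
HF_MARKERS = ("config.json", "tokenizer_config.json")

def looks_like_hf_checkpoint(path: str) -> bool:
    """True if the directory contains HF config files plus weight shards."""
    files = set(os.listdir(path)) if os.path.isdir(path) else set()
    has_weights = any(f.endswith((".safetensors", ".bin")) for f in files)
    return all(m in files for m in HF_MARKERS) and has_weights
```

If this returns False for your local vicuna directory but True for llama2-7b-chat-hf, that would confirm the format mismatch.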

qspang commented 8 months ago

But the vicuna-7b-chat I used was downloaded from Hugging Face! Link: https://huggingface.co/lmsys/vicuna-7b-v1.3

Liyuhui-12 commented 8 months ago

Are you still encountering the same error when using vicuna-7b-chat? Can you generate normally using Huggingface's generate function?
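For example, something like the following bypasses EAGLE entirely; if this also produces nan or garbage, the problem is in the base weights or dtype, not in EAGLE (the dtype and generation settings here are assumptions, not the project's exact configuration):

```python
def sanity_generate(model_path: str, prompt: str = "Hello, how are you?") -> str:
    """Plain Hugging Face generation, with no EAGLE code involved."""
    # Imports kept local so the sketch can be pasted into any script.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(model_path)
    model = AutoModelForCausalLM.from_pretrained(
        model_path, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=32, do_sample=False)
    return tok.decode(out[0], skip_special_tokens=True)

# Usage (loads the full model, so run it on the same GPU that fails):
# print(sanity_generate("lmsys/vicuna-7b-v1.3"))
```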

qspang commented 8 months ago

> When I used llama2-7b-chat-hf instead of llama2-7b-chat, I was surprised to find it ran successfully, but vicuna-7b-v1.3 still failed. I suspect my vicuna-7b-v1.3 weights are not in the Hugging Face format. Since the project uses the transformers library, the weights must be compatible with it, so vicuna-7b-v1.3 probably needs to be converted into the Hugging Face format, like llama2-7b-chat-hf.

I still can't run your EAGLE project normally using vicuna-7b-chat. I haven't tried Hugging Face's generate function yet.

qspang commented 8 months ago

Can you now use vicuna-7b-chat to run your EAGLE project normally?

Liyuhui-12 commented 8 months ago

I can run it normally.