getao / icae

The repo for In-context Autoencoder

How to run the Llama-2-7b-chat model #20

Open chenchenchen77 opened 4 weeks ago

chenchenchen77 commented 4 weeks ago

Hi! Thank you for the great work. I want to run the Llama model, but when I run v2 I hit a bug (Mistral runs successfully, so I think the cause may be the difference between .pt and .safetensors checkpoints):

Traceback (most recent call last):
  File "/data/test_baseline/baseline/icae/icae/code/icae_v2/fine_tuned_inference.py", line 33, in
    state_dict = load_file(training_args.output_dir)
  File "/data/miniconda3/envs/ultragist/lib/python3.10/site-packages/safetensors/torch.py", line 313, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

Can you tell me how to solve this problem? Thank you!
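
For reference, a minimal sketch of a possible workaround, assuming the Llama checkpoint was saved as a pickled .pt/.bin file rather than .safetensors (the checkpoint path and the commented-out load_state_dict call are placeholders, not the repo's actual code):

```python
# Minimal sketch (not the repo's code): load a fine-tuned checkpoint whether it
# was saved as .safetensors or as a pickled PyTorch .pt/.bin file.
import torch
from safetensors.torch import load_file

checkpoint_path = "path/to/llama2_checkpoint.pt"  # hypothetical path

if checkpoint_path.endswith(".safetensors"):
    # safetensors files begin with a small JSON header that load_file parses.
    state_dict = load_file(checkpoint_path)
else:
    # A pickled .pt/.bin checkpoint has no such header; safetensors misreads the
    # pickle bytes as a huge header length, which raises HeaderTooLarge.
    state_dict = torch.load(checkpoint_path, map_location="cpu")

# model.load_state_dict(state_dict, strict=False)  # then load into the ICAE model
```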

KpKqwq commented 4 days ago


Have you solved the problem?