yxli2123 / LoftQ

MIT License

Bugs when running `python test_gsm8k.py` using LoftQ for Llama #19

Closed Rain-yj closed 2 months ago

Rain-yj commented 3 months ago

```shell
python test_gsm8k.py --model_name_or_path /rhome/yangyj/pre-train/models--LoftQ--Llama-2-7b-hf-4bit-64rank/snapshots/1bb66ebf4f9050bc619f416a4f3327a21426fc6f --batch_size 16
```

```
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

and submit this information together with your error trace to:
https://github.com/TimDettmers/bitsandbytes/issues

bin /rhome/yangyj/anaconda3/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda117.so
CUDA SETUP: CUDA runtime path found: /rhome/yangyj/anaconda3/lib/libcudart.so.11.0
CUDA SETUP: Highest compute capability among GPUs detected: 8.6
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary /rhome/yangyj/anaconda3/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda117.so...
WARNING:root:Use the checkpoint in HF hub, stored in the subfolder='gsm8k' in target model.
Loading checkpoint shards: 100%|██████████| 3/3 [00:40<00:00, 13.66s/it]
Traceback (most recent call last):
  File "/rhome/yangyj/anaconda3/lib/python3.10/site-packages/peft/utils/config.py", line 177, in _get_peft_type
    config_file = hf_hub_download(
  File "/rhome/yangyj/anaconda3/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
    validate_repo_id(arg_value)
  File "/rhome/yangyj/anaconda3/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/rhome/yangyj/pre-train/models--LoftQ--Llama-2-7b-hf-4bit-64rank/snapshots/1bb66ebf4f9050bc619f416a4f3327a21426fc6f'. Use repo_type argument if needed.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/rhome/yangyj/LoftQ-main/test_gsm8k.py", line 281, in
    evaluation(model_args, data_args)
  File "/rhome/yangyj/LoftQ-main/test_gsm8k.py", line 128, in evaluation
    model = PeftModel.from_pretrained(model,
  File "/rhome/yangyj/anaconda3/lib/python3.10/site-packages/peft/peft_model.py", line 244, in from_pretrained
    PeftConfig._get_peft_type(
  File "/rhome/yangyj/anaconda3/lib/python3.10/site-packages/peft/utils/config.py", line 183, in _get_peft_type
    raise ValueError(f"Can't find '{CONFIG_NAME}' at '{model_id}'")
ValueError: Can't find 'adapter_config.json' at '/rhome/yangyj/pre-train/models--LoftQ--Llama-2-7b-hf-4bit-64rank/snapshots/1bb66ebf4f9050bc619f416a4f3327a21426fc6f'
```
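For context on the first exception: `huggingface_hub` rejects the argument because an absolute filesystem path does not have the shape of a Hub repo id. A minimal sketch of that rule (a simplified, hypothetical regex of my own, not the library's actual validator):

```python
import re

# Hypothetical simplification of the Hub repo-id rule: a repo id is
# "repo_name" or "namespace/repo_name". An absolute path such as
# "/rhome/yangyj/pre-train/..." starts with "/" and has many components,
# so it fails the check and triggers HFValidationError.
REPO_ID_RE = re.compile(r"[\w.\-]+(?:/[\w.\-]+)?")

def looks_like_repo_id(candidate: str) -> bool:
    """Return True if `candidate` has the shape of a Hub repo id."""
    return REPO_ID_RE.fullmatch(candidate) is not None
```

This is why the same string works when it is a plain local directory containing the expected files, but fails when PEFT falls back to treating it as a Hub repo id.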

yxli2123 commented 3 months ago

Hi @Rain-yj, thanks for using our repo. It seems your model checkpoint path isn't correct.

If you want to download the entire model repo, i.e. LoftQ/Llama-2-7b-hf-4bit-64rank on Hugging Face, please use the git clone commands below instead of running AutoModel.from_pretrained() and pointing to the cache files it creates.

```shell
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install

# When prompted for a password, use an access token with write permissions.
# Generate one from your settings: https://huggingface.co/settings/tokens
git clone https://huggingface.co/LoftQ/Llama-2-7b-hf-4bit-64rank
```

Then you can set `model_name_or_path` to the local path where you just cloned the model repo.

Let me know if you have further questions, or we can close the issue.

Rain-yj commented 3 months ago

Thank you very much for your response!