avivples opened 1 year ago
I have the same error about the `adapter_config.json` file, but the first error in the logs points to an unexpected keyword argument `device_map`:
```
  File "C:\Users\pboe\AppData\Roaming\Python\Python310\site-packages\peft\utils\config.py", line 106, in from_pretrained
    config_file = hf_hub_download(
  File "C:\Users\pboe\AppData\Roaming\Python\Python310\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
TypeError: hf_hub_download() got an unexpected keyword argument 'device_map'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\alpacalora\generate.py", line 227, in <module>
```
I'm seeing the same error as @ushanboe. I suspect that huggingface_hub may have changed the interface to the `hf_hub_download(...)` function in the 0.15.0 release on June 1, 2023. I'm going to try installing an older version.
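One way to confirm which versions are actually installed before downgrading (the helper below is mine, not part of either library):

```python
import importlib.metadata as md

def installed_versions(*packages):
    """Return the installed version of each distribution, or 'not installed'."""
    versions = {}
    for pkg in packages:
        try:
            versions[pkg] = md.version(pkg)
        except md.PackageNotFoundError:
            versions[pkg] = "not installed"
    return versions

# On an affected setup this would show huggingface_hub at 0.15.x or newer.
print(installed_versions("peft", "huggingface_hub"))
```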
After a bit more digging, I think using peft @ v0.3.0 instead of installing peft from github master may fix this issue. Trying it out now.
It appears running `pip install peft==0.3.0` fixes the issue.
Working line of code in peft==0.3.0:

```python
config_file = hf_hub_download(pretrained_model_name_or_path, CONFIG_NAME, subfolder=subfolder)
```

Not working line of code in peft @ master:

```python
config_file = hf_hub_download(
    pretrained_model_name_or_path, CONFIG_NAME, subfolder=subfolder, **kwargs
)
```
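The crash happens because every `from_pretrained` kwarg (`device_map`, `torch_dtype`, ...) gets forwarded into `hf_hub_download`, which rejects keyword names it doesn't know. A minimal sketch of the failure mode and of one possible mitigation, filtering kwargs by signature; the stub function is mine, standing in for the real `hf_hub_download`:

```python
import inspect

def hf_hub_download_stub(repo_id, filename, subfolder=None):
    """Stand-in for hf_hub_download: accepts download options only,
    not model-loading kwargs like device_map or torch_dtype."""
    parts = [repo_id] + ([subfolder] if subfolder else []) + [filename]
    return "/".join(parts)

def load_config_naive(repo_id, **kwargs):
    # What forwarding **kwargs blindly does: crash on unknown names.
    return hf_hub_download_stub(repo_id, "adapter_config.json", **kwargs)

def load_config_filtered(repo_id, **kwargs):
    # Forward only the kwargs the download function actually accepts.
    accepted = inspect.signature(hf_hub_download_stub).parameters
    hub_kwargs = {k: v for k, v in kwargs.items() if k in accepted}
    return hf_hub_download_stub(repo_id, "adapter_config.json", **hub_kwargs)

try:
    load_config_naive("tloen/alpaca-lora-7b", device_map="auto")
except TypeError as e:
    print(e)  # ...got an unexpected keyword argument 'device_map'

print(load_config_filtered("tloen/alpaca-lora-7b", device_map="auto"))
```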
I think the best & quickest fix is to pin `peft>=0.3.0,<0.4.0` in `requirements.txt`. I'll try to have a PR tomorrow.
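For reference, the pin would be a single line in `requirements.txt`:

```
peft>=0.3.0,<0.4.0
```

The same constraint works directly on the command line as `pip install 'peft>=0.3.0,<0.4.0'` (quoted so the shell doesn't interpret `>` and `<`).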
@jdeveloperw you are an angel. Was stuck for two days on this. It works!
@jdeveloperw @gauravdd kindly help me, buddy. I am stuck with this code:

```python
PEFT_MODEL = "PraveenPandey/mistral_with_peft"

config = PeftConfig.from_pretrained(PEFT_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
tokenizer.pad_token = tokenizer.eos_token
model = PeftModel.from_pretrained(model, PEFT_MODEL)
```
Getting this error:

```
AttributeError                            Traceback (most recent call last)
File ~/.local/lib/python3.8/site-packages/peft/config.py:143, in PeftConfigMixin.from_pretrained(cls, pretrained_model_name_or_path, subfolder, **kwargs)
    142 try:
--> 143     config_file = hf_hub_download(
    144         pretrained_model_name_or_path, CONFIG_NAME, subfolder=subfolder, **hf_hub_download_kwargs
    145     )
    146 except Exception:

File ~/.local/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py:118, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
    116 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 118 return fn(*args, **kwargs)

File ~/.local/lib/python3.8/site-packages/huggingface_hub/file_download.py:1492, in hf_hub_download(repo_id, filename, subfolder, repo_type, revision, library_name, library_version, cache_dir, local_dir, local_dir_use_symlinks, user_agent, force_download, force_filename, proxies, etag_timeout, resume_download, token, local_files_only, legacy_cache_layout, endpoint)
   1490     _check_disk_space(expected_size, local_dir)
-> 1492 http_get(
   1493     url_to_download,
   1494     temp_file,
   1495     proxies=proxies,
   1496     resume_size=resume_size,
   1497     headers=headers,
   1498     expected_size=expected_size,
   1499     displayed_filename=filename,
   1500 )
   1502 if local_dir is None:
...
--> 147     raise ValueError(f"Can't find '{CONFIG_NAME}' at '{pretrained_model_name_or_path}'")
    149 loaded_attributes = cls.from_json_file(config_file)
    150 kwargs = {**class_kwargs, **loaded_attributes}

ValueError: Can't find 'adapter_config.json' at 'PraveenPandey/mistral_with_peft'
```
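Note that the `ValueError` here is raised from a bare `except Exception:`, so it can mask the real failure in the `http_get` frame above (network, auth, or a missing file in the repo). If you have the adapter on disk, a quick local sanity check rules out the missing-file case; the helper name below is mine, it is not a peft API:

```python
import json
import os

def check_adapter_dir(path):
    """Hypothetical helper: verify a local directory contains the
    adapter_config.json file that PeftModel.from_pretrained looks for."""
    cfg = os.path.join(path, "adapter_config.json")
    if not os.path.isfile(cfg):
        raise FileNotFoundError(f"Can't find 'adapter_config.json' at '{path}'")
    with open(cfg) as f:
        return json.load(f)  # parsed adapter config, e.g. {"peft_type": "LORA", ...}
```

If this raises, the problem is the adapter path itself rather than the peft/huggingface_hub version mismatch discussed above.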
@praveeen719 Hey, I'm also running into the same issue. I tried logging into my Hugging Face account from the command line with `huggingface-cli login`, yet I'm still not able to fix the issue. Were you able to fix it? If so, please let me know. Thanks in advance!
I can't find a solution to this:

```
python generate.py --load_8bit --base_model 'decapoda-research/llama-7b-hf' --lora_weights 'tloen/alpaca-lora-7b'
```
```
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

 and submit this information together with your error trace to:
https://github.com/TimDettmers/bitsandbytes/issues
bin /home/mmm/anaconda3/envs/alpaca/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda117_nocublaslt.so
CUDA SETUP: CUDA runtime path found: /home/mmm/anaconda3/envs/alpaca/lib/libcudart.so.11.0
CUDA SETUP: Highest compute capability among GPUs detected: 6.1
CUDA SETUP: Detected CUDA version 117
/home/mmm/anaconda3/envs/alpaca/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: Compute capability < 7.5 detected! Only slow 8-bit matmul is supported for your GPU!
  warn(msg)
CUDA SETUP: Loading binary /home/mmm/anaconda3/envs/alpaca/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda117_nocublaslt.so...
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.
Loading checkpoint shards: 100%|█████████████████████████████████| 33/33 [00:15<00:00, 2.10it/s]
Traceback (most recent call last):
  File "/home/mmm/anaconda3/envs/alpaca/lib/python3.10/site-packages/peft/utils/config.py", line 106, in from_pretrained
    config_file = hf_hub_download(
  File "/home/mmm/anaconda3/envs/alpaca/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
TypeError: hf_hub_download() got an unexpected keyword argument 'torch_dtype'
```
```
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/media/mmm/DATADRIVE5/osint/alpaca-lora/generate.py", line 218, in <module>
    fire.Fire(main)
  File "/home/mmm/anaconda3/envs/alpaca/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/mmm/anaconda3/envs/alpaca/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/home/mmm/anaconda3/envs/alpaca/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/media/mmm/DATADRIVE5/osint/alpaca-lora/generate.py", line 48, in main
    model = PeftModel.from_pretrained(
  File "/home/mmm/anaconda3/envs/alpaca/lib/python3.10/site-packages/peft/peft_model.py", line 169, in from_pretrained
    PeftConfig.from_pretrained(model_id, subfolder=kwargs.get("subfolder", None), **kwargs).peft_type
  File "/home/mmm/anaconda3/envs/alpaca/lib/python3.10/site-packages/peft/utils/config.py", line 110, in from_pretrained
    raise ValueError(f"Can't find '{CONFIG_NAME}' at '{pretrained_model_name_or_path}'")
ValueError: Can't find 'adapter_config.json' at 'tloen/alpaca-lora-7b'
```