duowuyms / NetLLM


ImportError: cannot import name 'get_full_repo_name' from 'huggingface_hub' #2

Closed: mfdj2002 closed this issue 3 months ago

mfdj2002 commented 3 months ago

```
python run_plm.py --adapt --grad-accum-steps 32 --plm-type llama --plm-size base --rank 128 --device cuda:0 --lr 0.0001 --warmup-steps 2000 --num-epochs 80 --eval-per-epoch 2
```

```
Traceback (most recent call last):
  File "run_plm.py", line 23, in <module>
    from plm_special.models.low_rank import peft_model
  File "/data/workspace/fujingkai/NetLLM/adaptive_bitrate_streaming/plm_special/models/low_rank.py", line 3, in <module>
    from peft import LoraConfig, get_peft_model, TaskType, get_peft_model_state_dict
  File "/data/workspace/fujingkai/.conda/envs/abr_netllm/lib/python3.8/site-packages/peft/__init__.py", line 22, in <module>
    from .auto import (
  File "/data/workspace/fujingkai/.conda/envs/abr_netllm/lib/python3.8/site-packages/peft/auto.py", line 21, in <module>
    from transformers import (
  File "/data/workspace/fujingkai/.conda/envs/abr_netllm/lib/python3.8/site-packages/transformers/__init__.py", line 26, in <module>
    from . import dependency_versions_check
  File "/data/workspace/fujingkai/.conda/envs/abr_netllm/lib/python3.8/site-packages/transformers/dependency_versions_check.py", line 16, in <module>
    from .utils.versions import require_version, require_version_core
  File "/data/workspace/fujingkai/.conda/envs/abr_netllm/lib/python3.8/site-packages/transformers/utils/__init__.py", line 18, in <module>
    from huggingface_hub import get_full_repo_name # for backward compatibility
ImportError: cannot import name 'get_full_repo_name' from 'huggingface_hub' (/data/workspace/fujingkai/.conda/envs/abr_netllm/lib/python3.8/site-packages/huggingface_hub/__init__.py)
```

```
pip show huggingface_hub
Name: huggingface-hub
Version: 0.17.3
...
```

I installed all dependencies following the readme.

I tried the solutions in https://github.com/comfyanonymous/ComfyUI/issues/2055, but none of them worked for me. I also tried other versions of huggingface_hub, but tokenizers 0.14.1 requires huggingface_hub<0.18,>=0.16.4, and transformers 4.34.1 (the version specified in the requirements you mentioned) requires tokenizers<0.15,>=0.14.
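The version bounds quoted above can be sanity-checked mechanically with the `packaging` library. The sketch below only encodes the constraints mentioned in this comment (it is not an official compatibility matrix), and the candidate versions are purely illustrative:

```python
# Sketch: check candidate huggingface_hub versions against the bound that
# tokenizers 0.14.1 declares (huggingface_hub<0.18,>=0.16.4), as quoted above.
# The candidate versions are illustrative only.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

hub_bound = SpecifierSet(">=0.16.4,<0.18")
for candidate in ("0.16.4", "0.17.3", "0.20.3", "0.23.0"):
    status = "satisfies" if Version(candidate) in hub_bound else "violates"
    print(f"huggingface_hub {candidate} {status} the tokenizers 0.14.1 bound")
```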

ReamonYim commented 3 months ago

A slightly different traceback in my case:

```
python run_plm.py --test --plm-type llama --plm-size base --rank 128 --device cuda:0 --model-dir data/ft_plms/try_llama2_7b
```

```
Traceback (most recent call last):
  File "run_plm.py", line 23, in <module>
    from plm_special.models.low_rank import peft_model
  File "D:\Project\NetLLM\NetLLM\adaptive_bitrate_streaming\plm_special\models\low_rank.py", line 3, in <module>
    from peft import LoraConfig, get_peft_model, TaskType, get_peft_model_state_dict
  File "D:\Downloads\anaconda3\envs\abr_netllm\lib\site-packages\peft\__init__.py", line 22, in <module>
    from .auto import (
  File "D:\Downloads\anaconda3\envs\abr_netllm\lib\site-packages\peft\auto.py", line 30, in <module>
    from .config import PeftConfig
  File "D:\Downloads\anaconda3\envs\abr_netllm\lib\site-packages\peft\config.py", line 24, in <module>
    from .utils import CONFIG_NAME, PeftType, TaskType
  File "D:\Downloads\anaconda3\envs\abr_netllm\lib\site-packages\peft\utils\__init__.py", line 22, in <module>
    from .other import (
  File "D:\Downloads\anaconda3\envs\abr_netllm\lib\site-packages\peft\utils\other.py", line 20, in <module>
    import accelerate
  File "D:\Downloads\anaconda3\envs\abr_netllm\lib\site-packages\accelerate\__init__.py", line 16, in <module>
    from .accelerator import Accelerator
  File "D:\Downloads\anaconda3\envs\abr_netllm\lib\site-packages\accelerate\accelerator.py", line 34, in <module>
    from huggingface_hub import split_torch_state_dict_into_shards
ImportError: cannot import name 'split_torch_state_dict_into_shards' from 'huggingface_hub' (D:\Downloads\anaconda3\envs\abr_netllm\lib\site-packages\huggingface_hub\__init__.py)
```

I installed all dependencies following the readme.

```
pip show huggingface_hub
Name: huggingface-hub
Version: 0.17.3
```

I tried the following solutions:

Everything I tried failed😭
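Both tracebacks in this thread reduce to an import of a name that the installed huggingface_hub does not expose. A minimal diagnostic, run inside the abr_netllm environment (it assumes only that huggingface_hub itself is importable), is:

```python
# Sketch: report whether the installed huggingface_hub exposes the two names
# that fail to import in the tracebacks above. A "missing" result usually
# means the installed huggingface_hub does not match what the importing
# package (transformers or accelerate) expects, or the install is broken.
import huggingface_hub

print("huggingface_hub version:", huggingface_hub.__version__)
for name in ("get_full_repo_name", "split_torch_state_dict_into_shards"):
    print(f"{name}: {'present' if hasattr(huggingface_hub, name) else 'missing'}")
```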

duowuyms commented 3 months ago

Hi! Thanks for your messages.

We did not encounter this error. Did you run our code on Linux?

FYI, here is the huggingface_hub information on our testbed:

```
Name: huggingface-hub
Version: 0.17.3
Summary: Client library to download and publish models, datasets and other repos on the huggingface.co hub
Home-page: https://github.com/huggingface/huggingface_hub
Author: Hugging Face, Inc.
Author-email: julien@huggingface.co
License: Apache
Location: xxx
Requires: filelock, fsspec, packaging, pyyaml, requests, tqdm, typing-extensions
Required-by: accelerate, datasets, tokenizers, transformers
```
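For comparison with that testbed, a small sketch like the one below (standard-library metadata queries only, nothing NetLLM-specific) prints what a local environment has installed and what each dependent package declares for huggingface_hub:

```python
# Sketch: compare a local environment against the testbed output above.
# Prints installed versions plus each package's declared huggingface_hub
# requirement; the names used here are the installed distribution names.
from importlib import metadata

for pkg in ("huggingface_hub", "tokenizers", "transformers", "accelerate", "peft"):
    try:
        version = metadata.version(pkg)
    except metadata.PackageNotFoundError:
        print(f"{pkg}: not installed")
        continue
    hub_reqs = [r for r in (metadata.requires(pkg) or []) if r.lower().startswith("huggingface")]
    print(f"{pkg}=={version} | declares: {hub_reqs or 'no direct huggingface_hub requirement'}")
```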

mfdj2002 commented 3 months ago

Hi! Thanks for your response. We eventually solved the problem using the following environment:

```
$ cat netllm.yaml
name: abr_netllm
channels:
```