Facico / Chinese-Vicuna

Chinese-Vicuna: A Chinese Instruction-following LLaMA-based Model — a low-resource Chinese LLaMA + LoRA solution, with a structure modeled on Alpaca
https://github.com/Facico/Chinese-Vicuna
Apache License 2.0

bash generate.sh FAILED with AttributeError: 'NoneType' object has no attribute 'eval' #126

Closed: SeekPoint closed this issue 1 year ago

SeekPoint commented 1 year ago

(gh_Chinese-Vicuna) ub2004@ub2004-B85M-A0:~/llm_dev/Chinese-Vicuna$ bash generate.sh

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues

/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/home/ub2004/anaconda3/envs/gh_Chinese-Vicuna/lib')}
  warn(msg)
/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /home/ub2004/anaconda3/envs/gh_Chinese-Vicuna did not contain libcudart.so as expected! Searching further paths...
  warn(msg)
/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('local/ub2004-B85M-A0'), PosixPath('@/tmp/.ICE-unix/2101,unix/ub2004-B85M-A0')}
  warn(msg)
/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/etc/xdg/xdg-ubuntu')}
  warn(msg)
/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('1'), PosixPath('0')}
  warn(msg)
/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/org/gnome/Terminal/screen/c497c91f_43c9_4b9e_85ff_17a9c1e34b4a')}
  warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 6.1
CUDA SETUP: Detected CUDA version 117
/home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: Compute capability < 7.5 detected! Only slow 8-bit matmul is supported for your GPU!
  warn(msg)
CUDA SETUP: Loading binary /home/ub2004/.local/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda117_nocublaslt.so...
Namespace(lora_path='Chinese-Vicuna/Chinese-Vicuna-lora-7b-belle-and-guanaco', model_path='decapoda-research/llama-7b-hf', use_local=0, use_typewriter=1)
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization. The tokenizer class you load from this checkpoint is 'LLaMATokenizer'. The class this function is called from is 'LlamaTokenizer'.
Chinese-Vicuna/Chinese-Vicuna-lora-7b-belle-and-guanaco/adapter_model.bin
Loading checkpoint shards: 100%|██████████| 33/33 [00:11<00:00, 2.94it/s]
Downloading (…)/adapter_config.json: 100%|██████████| 370/370 [00:00<00:00, 79.0kB/s]
Downloading adapter_model.bin: 100%|██████████| 16.8M/16.8M [00:01<00:00, 10.3MB/s]
Traceback (most recent call last):
  /home/ub2004/llm_dev/Chinese-Vicuna/generate.py:110 in <module>

    107 if not LOAD_8BIT:
    108     model.half()  # seems to fix bugs for some users.
    109
  ❱ 110 model.eval()
    111 if torch.__version__ >= "2" and sys.platform != "win32":
    112     model = torch.compile(model)
    113

AttributeError: 'NoneType' object has no attribute 'eval'
(gh_Chinese-Vicuna) ub2004@ub2004-B85M-A0:~/llm_dev/Chinese-Vicuna$

Facico commented 1 year ago

This is the same issue as those linked above; you can fix it by pinning peft to the commit hash given in our requirements.txt.
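A minimal sketch of what pinning peft to a specific commit looks like; `<commit-hash>` is a placeholder, to be replaced with the actual hash recorded in the repo's requirements.txt:

```shell
# Remove whatever peft version is currently installed.
pip uninstall -y peft

# Reinstall peft pinned to the exact commit from requirements.txt.
# <commit-hash> is a placeholder; copy the real hash from the peft line
# in Chinese-Vicuna's requirements.txt.
pip install "git+https://github.com/huggingface/peft.git@<commit-hash>"
```

Newer peft releases changed internal APIs, so running generate.py against a mismatched peft version can leave the model object unset, which is consistent with the `'NoneType' object has no attribute 'eval'` error above.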