===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
/root/miniconda3/envs/cvicuna/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /root/miniconda3/envs/cvicuna did not contain libcudart.so as expected! Searching further paths...
warn(msg)
/root/miniconda3/envs/cvicuna/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/nvidia/lib64'), PosixPath('/usr/local/nvidia/lib')}
warn(msg)
/root/miniconda3/envs/cvicuna/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: /usr/local/nvidia/lib:/usr/local/nvidia/lib64 did not contain libcudart.so as expected! Searching further paths...
warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 6.1
CUDA SETUP: Detected CUDA version 114
/root/miniconda3/envs/cvicuna/lib/python3.8/site-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: Compute capability < 7.5 detected! Only slow 8-bit matmul is supported for your GPU!
warn(msg)
CUDA SETUP: Loading binary /root/miniconda3/envs/cvicuna/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda114_nocublaslt.so...
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.
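The tokenizer warning above typically means the checkpoint's `tokenizer_config.json` still records the legacy class name `LLaMATokenizer`, while the installed transformers version expects `LlamaTokenizer`. A minimal sketch of a config patch, assuming the config file lives in the model directory (`./llama-7b-hf/` here) and uses the standard `tokenizer_class` key:

```python
import json

def fix_tokenizer_class(cfg: dict) -> dict:
    """Map the legacy 'LLaMATokenizer' class name to 'LlamaTokenizer'.

    Returns a new dict; configs that already use the new name pass through
    unchanged.
    """
    if cfg.get("tokenizer_class") == "LLaMATokenizer":
        cfg = {**cfg, "tokenizer_class": "LlamaTokenizer"}
    return cfg

# Hypothetical usage (path assumed from the command in this report):
#   with open("./llama-7b-hf/tokenizer_config.json") as f:
#       cfg = fix_tokenizer_class(json.load(f))
```

This only silences the class-mismatch warning; it does not affect the HFValidationError below.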
./lora-Vicuna/checkpoint-3000/adapter_model.bin
./lora-Vicuna/checkpoint-3000/pytorch_model.bin
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 33/33 [00:19<00:00, 1.68it/s]
Traceback (most recent call last):
File "/root/miniconda3/envs/cvicuna/lib/python3.8/site-packages/peft-0.3.0.dev0-py3.8.egg/peft/utils/config.py", line 105, in from_pretrained
File "/root/miniconda3/envs/cvicuna/lib/python3.8/site-packages/huggingface_hub-0.14.0rc1-py3.8.egg/huggingface_hub/utils/_validators.py", line 112, in _inner_fn
validate_repo_id(arg_value)
File "/root/miniconda3/envs/cvicuna/lib/python3.8/site-packages/huggingface_hub-0.14.0rc1-py3.8.egg/huggingface_hub/utils/_validators.py", line 160, in validate_repo_id
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './lora-Vicuna/checkpoint-3000'. Use `repo_type` argument if needed.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "chat.py", line 62, in <module>
model = SteamGenerationMixin.from_pretrained(
File "/root/Chinese-Vicuna/utils.py", line 670, in from_pretrained
config = LoraConfig.from_pretrained(model_id)
File "/root/miniconda3/envs/cvicuna/lib/python3.8/site-packages/peft-0.3.0.dev0-py3.8.egg/peft/utils/config.py", line 107, in from_pretrained
ValueError: Can't find 'adapter_config.json' at './lora-Vicuna/checkpoint-3000'
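The final ValueError is the root cause: per the directory listing earlier in the log, `./lora-Vicuna/checkpoint-3000` contains `adapter_model.bin` and `pytorch_model.bin` but no `adapter_config.json`, so peft's `LoraConfig.from_pretrained` falls back to treating the path as a Hub repo id, which fails for local `./...` paths (the HFValidationError above). A minimal pre-flight check, assuming the standard peft LoRA file layout:

```python
import os

def check_adapter_dir(path: str) -> list:
    """Return the files a peft LoRA checkpoint directory is missing.

    LoraConfig.from_pretrained needs adapter_config.json next to
    adapter_model.bin; if it is absent, peft tries to resolve the path
    as a Hub repo id instead of a local directory.
    """
    required = ["adapter_config.json", "adapter_model.bin"]
    return [f for f in required if not os.path.isfile(os.path.join(path, f))]
```

If `adapter_config.json` is reported missing, it can usually be copied from the training run's output directory or regenerated by re-saving the trained PeftModel with `save_pretrained` (a workaround sketch, not confirmed against this repo's training script).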
Command run in the Docker container:
(cvicuna) root@adfe6fe7295f:~/Chinese-Vicuna# python3 chat.py --model_path ./llama-7b-hf/
Returned error: see the trace above.