Facico / Chinese-Vicuna

Chinese-Vicuna: A Chinese Instruction-following LLaMA-based Model —— a low-resource Chinese llama+lora solution, with structure based on alpaca
https://github.com/Facico/Chinese-Vicuna
Apache License 2.0

Error when running generate.py: huggingface_hub.utils._validators.HFValidationError #31

Closed Harpsichord1207 closed 1 year ago

Harpsichord1207 commented 1 year ago

Running `python generate.py` produces the following output:

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary /home/ubuntu/CIGVicuna/venv/lib/python3.8/site-packages/bitsandbytes/libbitsandbytes_cuda117.so...
Traceback (most recent call last):
  File "generate.py", line 331, in <module>
    tokenizer = LlamaTokenizer.from_pretrained(args.model_path)
  File "/home/ubuntu/CIGVicuna/venv/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1770, in from_pretrained
    resolved_vocab_files[file_id] = cached_file(
  File "/home/ubuntu/CIGVicuna/venv/lib/python3.8/site-packages/transformers/utils/hub.py", line 409, in cached_file
    resolved_file = hf_hub_download(
  File "/home/ubuntu/CIGVicuna/venv/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 112, in _inner_fn
    validate_repo_id(arg_value)
  File "/home/ubuntu/CIGVicuna/venv/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 160, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/model/13B_hf'. Use `repo_type` argument if needed.
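The last frame shows the root cause: `from_pretrained` did not find `'/model/13B_hf'` on disk, so it fell through to the Hugging Face Hub download path, which validates the string as a repo id and rejects anything that looks like a filesystem path. A minimal sketch of that rule (the regex here is an illustration of the error message's constraint, not the library's actual pattern):

```python
import re

# Sketch of the rule from the error message: a repo id must be
# 'repo_name' or 'namespace/repo_name', so an absolute path such
# as '/model/13B_hf' (leading '/') is rejected.
REPO_ID_RE = re.compile(r"^[A-Za-z0-9][\w.-]*(/[A-Za-z0-9][\w.-]*)?$")

def looks_like_repo_id(s: str) -> bool:
    """Return True if s matches the 'name' or 'namespace/name' shape."""
    return bool(REPO_ID_RE.match(s))

print(looks_like_repo_id("decapoda-research/llama-7b-hf"))  # True
print(looks_like_repo_id("/model/13B_hf"))                  # False
```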
Facico commented 1 year ago

@Harpsichord1207 You can use the command below, specifying model_path and lora_path explicitly: python generate.py --model_path decapoda-research/llama-7b-hf --lora_path Facico/Chinese-Vicuna-lora-7b-3epoch-belle-and-guanaco --use_local 0

Harpsichord1207 commented 1 year ago

@Facico That works now, thanks. I had assumed the default parameters would also run. A few more questions:

  1. Is the Facico/Chinese-Vicuna-lora-7b-3epoch-belle-and-guanaco in your example command the same as lora-Vicuna/checkpoint-final? If not, which performs better?
  2. When I use lora-Vicuna/checkpoint-final with 13B as the upstream model, it errors out. Are the current checkpoints only runnable with 7B?
  3. What is the difference between generate.py and interaction.py?

Thanks!

Facico commented 1 year ago

The default parameters sometimes get swapped to local paths during development, and occasionally we forget to change them back before pushing, so they may not always work out of the box.
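A defensive pattern for this situation is to check whether a path-like default actually exists locally before handing it to `from_pretrained`, and fall back to a Hub repo id otherwise. This is a hypothetical helper sketched here for illustration, not part of the repo's code:

```python
import os

def resolve_model_path(path: str,
                       fallback: str = "decapoda-research/llama-7b-hf") -> str:
    """If `path` looks like a filesystem path but does not exist locally,
    return a known Hub repo id instead, so from_pretrained never misreads
    a stale local default (e.g. '/model/13B_hf') as a repo id."""
    if path.startswith(("/", "./")) and not os.path.isdir(path):
        return fallback
    return path

# A stale local default falls back (assuming /model/13B_hf is absent here),
# while a valid Hub repo id passes through untouched.
print(resolve_model_path("/model/13B_hf"))
print(resolve_model_path("decapoda-research/llama-7b-hf"))
```

The same check could guard `args.lora_path`; the upstream model and the LoRA checkpoint are resolved by the same `from_pretrained` machinery.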