johnsmith0031 / alpaca_lora_4bit

MIT License

What huggingface link to download the models from? #88

Open psych0v0yager opened 1 year ago

psych0v0yager commented 1 year ago

I ran the following script from the home page:

```sh
python finetune.py ./data.txt \
    --ds_type=txt \
    --lora_out_dir=./test/ \
    --llama_q4_config_dir=./llama-7b-4bit/ \
    --llama_q4_model=./llama-7b-4bit.pt \
    --mbatch_size=1 \
    --batch_size=2 \
    --epochs=3 \
    --lr=3e-4 \
    --cutoff_len=256 \
    --lora_r=8 \
    --lora_alpha=16 \
    --lora_dropout=0.05 \
    --warmup_steps=5 \
    --save_steps=50 \
    --save_total_limit=3 \
    --logging_steps=5 \
    --groupsize=-1 \
    --v1 \
    --xformers \
    --backend=cuda
```

And received the following error:

```
Traceback (most recent call last):
  File "/home/username/anaconda3/envs/alpacalora4b/lib/python3.10/site-packages/transformers/configuration_utils.py", line 629, in _get_config_dict
    resolved_config_file = cached_file(
  File "/home/username/anaconda3/envs/alpacalora4b/lib/python3.10/site-packages/transformers/utils/hub.py", line 409, in cached_file
    resolved_file = hf_hub_download(
  File "/home/username/anaconda3/envs/alpacalora4b/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 112, in _inner_fn
    validate_repo_id(arg_value)
  File "/home/username/anaconda3/envs/alpacalora4b/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 160, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './llama-7b-4bit/'. Use `repo_type` argument if needed.
```

During handling of the above exception, another exception occurred:

```
Traceback (most recent call last):
  File "/home/username/alpaca_lora_4bit/finetune.py", line 61, in <module>
    model, tokenizer = load_llama_model_4bit_low_ram(ft_config.llama_q4_config_dir,
  File "/home/username/alpaca_lora_4bit/autograd_4bit.py", line 194, in load_llama_model_4bit_low_ram
    config = LlamaConfig.from_pretrained(config_path)
  File "/home/username/anaconda3/envs/alpacalora4b/lib/python3.10/site-packages/transformers/configuration_utils.py", line 547, in from_pretrained
    config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/username/anaconda3/envs/alpacalora4b/lib/python3.10/site-packages/transformers/configuration_utils.py", line 574, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/username/anaconda3/envs/alpacalora4b/lib/python3.10/site-packages/transformers/configuration_utils.py", line 650, in _get_config_dict
    raise EnvironmentError(
OSError: Can't load the configuration of './llama-7b-4bit/'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure './llama-7b-4bit/' is the correct path to a directory containing a config.json file
```
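For context: the first traceback fires because `from_pretrained` only treats the string as a filesystem path when that directory actually exists; otherwise it falls back to interpreting it as a Hub repo id, and `./llama-7b-4bit/` can never be a valid repo id. A minimal sketch of that lookup order (an illustration of the fallback, not the actual `transformers` internals; `resolve_model_source` is a hypothetical helper):

```python
import os

def resolve_model_source(path_or_repo_id: str) -> str:
    """Roughly mirror how a model string gets interpreted."""
    if os.path.isdir(path_or_repo_id):
        # A real local directory: the config is read from config.json inside it.
        if os.path.isfile(os.path.join(path_or_repo_id, "config.json")):
            return "local directory"
        return "local directory without config.json"
    # No such directory: the string is validated as a Hub repo id,
    # which is the step where './llama-7b-4bit/' raises HFValidationError.
    return "hub repo id"
```

So the fix is to make `./llama-7b-4bit/` exist locally and contain the model files before running the script.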

Do I need to download the weights locally on my machine? If so, which specific config file and weights need to be downloaded for this repo?

Thanks

johnsmith0031 commented 1 year ago

Maybe you should download them manually from Hugging Face or somewhere else. Mine was downloaded via a torrent.
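Whichever source you use, the command's arguments imply a specific layout: `--llama_q4_config_dir` must contain the Hugging Face config and tokenizer files, and `--llama_q4_model` must point at the quantized `.pt` checkpoint. A small pre-flight check along those lines (a sketch assuming the paths from the command above; the exact tokenizer file names can vary by model, and `missing_model_files` is a hypothetical helper, not part of this repo):

```python
from pathlib import Path

def missing_model_files(config_dir: str, model_file: str) -> list:
    """List the files the finetune arguments expect but that are absent."""
    missing = []
    cfg = Path(config_dir)
    # --llama_q4_config_dir: HF-format config plus tokenizer files.
    for name in ("config.json", "tokenizer.model"):
        if not (cfg / name).is_file():
            missing.append(str(cfg / name))
    # --llama_q4_model: the 4-bit quantized checkpoint itself.
    if not Path(model_file).is_file():
        missing.append(model_file)
    return missing

if __name__ == "__main__":
    problems = missing_model_files("./llama-7b-4bit/", "./llama-7b-4bit.pt")
    print("ok" if not problems else "\n".join(problems))
```

Running it before `finetune.py` tells you exactly which file is missing instead of the indirect `HFValidationError` above.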

ra-MANUJ-an commented 1 year ago

try using this:

```sh
!git clone https://github.com/oobabooga/text-generation-webui.git
cd text-generation-webui
!python download-model.py models/model-name
```