johnsmith0031 / alpaca_lora_4bit


Make sure to save your model with the `save_pretrained` method. #86

Open kachook opened 1 year ago

kachook commented 1 year ago

I get the following warning when trying to load a model in the newer safetensors format, koala-13B-4bit-128g.safetensors:

Loading TheBloke_koala-13B-GPTQ-4bit-128g...
Warning: applying the monkey patch for using LoRAs in 4-bit mode.
It may cause undefined behavior outside its intended scope.
Loading Model ...
The safetensors archive passed at models/TheBloke_koala-13B-GPTQ-4bit-128g/koala-13B-4bit-128g.safetensors does not contain metadata. Make sure to save your model with the `save_pretrained` method. Defaulting to 'pt' metadata.
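The warning just means the safetensors archive was written without the optional metadata header. A minimal way to confirm that, using the `safetensors` API directly (path taken from the log above):

```python
from safetensors import safe_open

path = "models/TheBloke_koala-13B-GPTQ-4bit-128g/koala-13B-4bit-128g.safetensors"

# metadata() returns the archive's optional string-to-string header,
# or None when the file was saved without one.
with safe_open(path, framework="pt", device="cpu") as f:
    print(f.metadata())  # expected: None for this archive, matching the warning
```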

The model was generated with the following command: `python3 llama.py koala-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors koala-13B-4bit-128g.safetensors`
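Since the loader already defaults to 'pt' metadata, the warning should be harmless. If you want to silence it anyway, one possible workaround (a sketch, not something the quantization script provides) is to rewrite the file with the `{"format": "pt"}` metadata that `save_pretrained` would normally add:

```python
from safetensors.torch import load_file, save_file

path = "models/TheBloke_koala-13B-GPTQ-4bit-128g/koala-13B-4bit-128g.safetensors"

# Load every tensor, then write the file back with the "format" key set,
# which is the metadata transformers' save_pretrained attaches to PyTorch weights.
tensors = load_file(path)
save_file(tensors, path, metadata={"format": "pt"})
```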