Closed · michaelmidura closed this issue 1 week ago
`finetune` works on b2974 (1b1e27cb49158123ef4902aa41eb368c9e76e6a1).
`finetune` is broken after b2976 (d48c88cbd563b6cf0ce972e2f56796896e240736).
Don't use `--use-flash` in the command line args.
Thanks, b3030 runs using `--no-flash`.
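For reference, a sketch of the workaround suggested above, applied to the reproduction command from the original report (the model path, training data, and all flags except `--no-flash` are taken verbatim from that report; this is untested and assumes `--no-flash` is accepted by this build of `finetune`):

```shell
# Build the same finetune invocation as in the bug report, but with
# flash attention explicitly disabled via --no-flash (the suggested fix).
CMD="./finetune --model-base models/Meta-Llama-3-8B/ggml-model-f16.gguf \
  --lora-out lora-test-0x00001.bin --train-data shakespeare.txt \
  --threads 6 --adam-iter 30 --batch 4 --ctx 64 \
  --save-every 10 --use-checkpointing --no-flash"
# Print the command so it can be inspected before running it.
echo "$CMD"
```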
This issue was closed because it has been inactive for 14 days since being marked as stale.
I have been finetuning a model based on Meta-Llama-3-8B using `finetune`. The model was downloaded from the meta-llama organization on Hugging Face. I am running macOS on Apple Silicon. I recently updated llama.cpp to b2989 (27891f6db03de6e3fd5941983838c29bef253352), which broke `finetune`.

Steps to reproduce:
1. `make clean` and `make`
2. `python convert-hf-to-gguf.py models/Meta-Llama-3-8B/`
3. `./finetune --model-base models/Meta-Llama-3-8B/ggml-model-f16.gguf --lora-out lora-test-0x00001.bin --train-data shakespeare.txt --threads 6 --adam-iter 30 --batch 4 --ctx 64 --save-every 10 --use-checkpointing`
Output: