HP Tuning #3 (Open)

kbressem commented 1 year ago
- tune the LR
- increase the validation batch size
- tune dropout (0.2 in InstructGPT), most important
- tune `--lora_r 8` (the rank; the bigger it is, the heavier the LoRA, i.e. more params to tune); maybe 16
- tune `--lora_alpha 16` (scaling factor; the LoRA update is multiplied by lora_alpha / lora_r, so a larger alpha gives the adapter more weight)
- fewer epochs (2-3, with a higher batch size); see the config sketch below
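
For reference, a minimal sketch of how these knobs map onto a `peft` `LoraConfig` and HuggingFace `TrainingArguments`. This assumes the standard `peft`/`transformers` APIs; the concrete values are just the candidates from the list above plus an assumed 3e-4 starting LR and target modules, not settled choices:

```python
# Minimal sketch, not the repo's final settings: values are the candidates
# from the checklist above; 3e-4 is an assumed starting LR to tune from.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=8,                                  # --lora_r: rank; 16 roughly doubles trainable params
    lora_alpha=16,                        # --lora_alpha: update is scaled by lora_alpha / r
    lora_dropout=0.2,                     # dropout candidate (0.2 as in InstructGPT)
    target_modules=["q_proj", "v_proj"],  # typical LLaMA attention projections (assumption)
    bias="none",
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="lora-hp-tuning",          # hypothetical output dir
    learning_rate=3e-4,                   # tune this
    num_train_epochs=3,                   # 2-3 epochs instead of more
    per_device_train_batch_size=8,        # micro batch size per GPU
    gradient_accumulation_steps=16,       # effective train batch size of 128
    per_device_eval_batch_size=32,        # increased val batch size
)
```

Note that with these values the effective LoRA scaling is lora_alpha / r = 16 / 8 = 2.0; doubling r to 16 at fixed alpha halves that scaling, so the two should be tuned together rather than in isolation.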