FinGPT: Open-Source Financial Large Language Models! Revolutionize 🔥 We release the trained model on HuggingFace.
Reproducing sentiment finetuning train_lora extremely slow #146
Open
vikigenius opened 6 months ago
I am trying to reproduce the fine-tuning for fingpt-sentiment_llama2-13b_lora.
The table claims this can be done on a single RTX 3090 within a day. I am using an L4 GPU instead.
I downloaded the models to base_models and the dataset to data correctly.
I used the script like this
I got an OOM (out-of-memory error).
So I set
load_in_8_bit=True
But now fine-tuning is extremely slow: a single epoch is estimated to take 2 days.
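For context, here is a minimal sketch of how 8-bit loading is typically combined with LoRA in the Hugging Face stack (transformers + bitsandbytes + peft). This is a loading/configuration fragment, not FinGPT's actual train_lora code: the base_model path and LoRA hyperparameters are placeholders, and the flag name inside FinGPT's script may differ.

```python
# Hypothetical sketch, not FinGPT's train_lora script.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "base_models/llama2-13b"  # placeholder path; adjust to your layout

# load_in_8bit trades compute speed for memory: weights are dequantized
# on the fly during each forward pass, which is a common cause of the
# slowdown described above.
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
    torch_dtype=torch.float16,
)

# Recommended before attaching LoRA adapters to a quantized model.
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8,                                  # placeholder rank
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # typical targets for LLaMA-family models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```

Note that the transformers keyword is load_in_8bit (no second underscore); if the script forwards the flag directly, a misspelled name could be silently ignored and is worth double-checking.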