josephjclark opened this issue 9 months ago
In the bun server refactor, I am for the moment keeping the fine-tuning stuff at the root, in a folder called `fine_tuning`.
Note that the old `openfn_llama` folder contains fine-tuning code for various models, not just llama.
I'm a bit lost on where we got to with fine tuning, but this issue still stands.
The `openfn_llama` service looks like it was set up to fine-tune a llama model. Which is fine. But it also contains `agpt_finetune.py`, so it's actually a general fine-tuning service now.

Is it even a service? Is it not a one-shot script to trigger a round of fine tuning? There is a run script with a `generate_code` endpoint. But shouldn't the `inference` service expose the model fine-tuned by the `openfn_llama` command? I don't know.
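To make the question concrete, here is a minimal sketch of the split being discussed. All names here (`finetune`, `serve`, the artifact paths) are hypothetical illustrations, not the repo's actual API: a one-shot training command writes a model artifact to disk, and a separate inference service loads whatever it finds there.

```python
from pathlib import Path

# Hypothetical artifact location; the real repo layout may differ.
MODEL_DIR = Path("models/llama_ft")


def finetune(output_dir: Path = MODEL_DIR) -> Path:
    """One-shot script: run a round of fine tuning and save the result."""
    output_dir.mkdir(parents=True, exist_ok=True)
    # A real training loop would go here; we just write a placeholder artifact.
    (output_dir / "weights.bin").write_bytes(b"")
    return output_dir


def serve(model_dir: Path = MODEL_DIR) -> str:
    """Inference service: load whatever finetune() produced and answer requests."""
    if not (model_dir / "weights.bin").exists():
        raise FileNotFoundError("no fine-tuned model found; run finetune() first")
    return f"serving model from {model_dir}"
```

Under this reading, `openfn_llama` is just the `finetune()` half and has no business exposing a `generate_code` endpoint of its own.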
Here are some steps to consider:

- Rename `openfn_llama` to `finetuning` or `training`
- Remove the `openfn_llama` service entirely
- Move `openfn_llama` out of `/services`
I do really like the idea of moving this to another repo and enabling `llama_ft` to be a `model` type argument on the codegen services.
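One way that `model` argument could work is a simple dispatch table in the codegen service. This is purely a sketch under assumed names: the registry, the `gpt3_turbo` default, and the loader strings are illustrations, not the services' real interface.

```python
# Hypothetical model registry: maps a `model` argument to a backend loader.
MODEL_LOADERS = {
    "gpt3_turbo": lambda: "hosted default generator",  # assumed existing default
    "llama_ft": lambda: "locally fine-tuned llama",    # the proposed new option
}


def generate_code(prompt: str, model: str = "gpt3_turbo") -> str:
    """Codegen entry point that selects a backend by the `model` argument."""
    if model not in MODEL_LOADERS:
        raise ValueError(f"unknown model: {model}")
    backend = MODEL_LOADERS[model]()
    # A real implementation would invoke the backend; here we just describe it.
    return f"[{backend}] would generate code for: {prompt}"
```

The appeal is that the fine-tuning repo only has to publish an artifact and register one loader; the codegen services never need to know how the model was trained.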