Fine-tuning, please.
Is OpenAI compatibility strictly needed in this use case?
@tikikun

> Is OpenAI compatibility strictly needed in this use case?

I think yes. We will need a way to sync state regardless, so I would go with the biggest platform's format for it.
Just to add detail for the input dataset:

| system | instruction | response |
|---|---|---|

We should also expose a train/test split: a small `train_test_split` handle (test = 0.1) for users, as sketched below.
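A minimal sketch of that handle, assuming the Hugging Face `datasets` library and the column layout from the table above; the dataset path is a placeholder:

```python
from datasets import load_dataset

# Load the user's instruction dataset; the file path is a placeholder.
# Expected columns: system | instruction | response
dataset = load_dataset("json", data_files="dataset.jsonl", split="train")

# Hold out 10% of rows for evaluation, matching the proposed default.
splits = dataset.train_test_split(test_size=0.1, seed=42)
train_set, test_set = splits["train"], splits["test"]

print(f"train: {len(train_set)} rows, test: {len(test_set)} rows")
```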
Try Axolotl, it'll make everything simpler. Jan should just generate a config file and start the fine-tuning; a sketch of that flow follows.
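A sketch of "generate a config and start fine-tuning", assuming a LoRA run. The key names follow Axolotl's YAML config schema but should be checked against the Axolotl docs for the installed version; the base model and paths are placeholders:

```python
import subprocess
import yaml

# Hypothetical config Jan could emit; keys follow Axolotl's YAML schema
# (verify against the Axolotl docs for your version).
config = {
    "base_model": "mistralai/Mistral-7B-v0.1",   # placeholder model
    "datasets": [{"path": "dataset.jsonl", "type": "alpaca"}],
    "val_set_size": 0.1,        # matches the proposed train/test split
    "adapter": "lora",
    "lora_r": 8,
    "lora_alpha": 16,
    "lora_dropout": 0.05,
    "micro_batch_size": 2,
    "num_epochs": 3,
    "learning_rate": 2e-4,
    "output_dir": "./ft_output",
}

with open("ft_config.yml", "w") as f:
    yaml.safe_dump(config, f)

# Launch the run via Axolotl's documented CLI entry point.
subprocess.run(
    ["accelerate", "launch", "-m", "axolotl.cli.train", "ft_config.yml"],
    check=True,
)
```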
Outdated HackMD spec (copied below):
Motivation
Create your own Assistant
Specs

- Training engine and API in the Python runtime, in order to reuse the existing Python ecosystem.
- A `trainingEngine` class that helps users fine-tune on CPU, NVIDIA GPU, or Apple MLX.
- Fine-tuning endpoints (verbs mirror the OpenAI fine-tuning API):
  - `POST http://studio.jan.ai/v1/fine_tuning/jobs` -> creates an `ft_job`
  - `GET http://studio.jan.ai/v1/fine_tuning/jobs` -> `LIST[ft_job]`
  - `GET http://studio.jan.ai/v1/fine_tuning/jobs/<:ft_id>` -> `ft_job`
  - `POST http://studio.jan.ai/v1/fine_tuning/jobs/<:id>/cancel` -> `ft_job` (status: cancelled)
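A minimal client sketch against these endpoints, assuming the `ft_job` payload mirrors OpenAI's fine-tuning job object (the request body and the `id` and `status` field names are assumptions):

```python
import requests

BASE = "http://studio.jan.ai/v1/fine_tuning/jobs"

# Create a fine-tuning job; body fields mirror OpenAI's API and are assumptions here.
job = requests.post(BASE, json={
    "model": "my-base-model",         # placeholder
    "training_file": "dataset.jsonl"  # placeholder
}).json()

# List all jobs -> LIST[ft_job]
jobs = requests.get(BASE).json()

# Retrieve one job by id -> ft_job
ft_id = job["id"]  # assumed field name
detail = requests.get(f"{BASE}/{ft_id}").json()

# Cancel it -> ft_job with status: cancelled
cancelled = requests.post(f"{BASE}/{ft_id}/cancel").json()
assert cancelled["status"] == "cancelled"  # assumed field name
```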
Designs
Figma
Tasklist
- [ ] Similar to Unsloth, but using MLX (optional); see the sketch below.
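A sketch of the MLX path, assuming the `mlx_lm` LoRA entry point from Apple's mlx-lm package; the flag names should be checked against the mlx-lm docs, and the model and data paths are placeholders:

```python
import subprocess

# LoRA fine-tuning on Apple Silicon via mlx-lm's CLI.
# `--data` expects a directory containing train.jsonl / valid.jsonl.
subprocess.run(
    [
        "python", "-m", "mlx_lm.lora",
        "--model", "mistralai/Mistral-7B-v0.1",  # placeholder
        "--train",
        "--data", "./data",
        "--iters", "600",
        "--batch-size", "4",
    ],
    check=True,
)
```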
Not in Scope
Appendix