Issue ENG-4853
Add LoRA training support for fine-tuning jobs.
Some CLI examples:
together fine-tuning create --training-file "${FILE_ID}" --model "meta-llama/Meta-Llama-3-8B" --wandb-api-key "${WANDB_API_KEY}" --lora --lora-r 8
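Another example that sets all of the new LoRA parameters explicitly (the values here are illustrative, not recommended defaults):
together fine-tuning create --training-file "${FILE_ID}" --model "meta-llama/Meta-Llama-3-8B" --lora --lora-r 16 --lora-alpha 32 --lora-dropout 0.05 --lora-trainable-modules "all-linear"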
Description of the new parameters:
--lora (bool) -- general flag for enabling LoRA training
--lora-r (int) -- rank for the LoRA adapter weights
--lora-dropout (float) -- dropout value for LoRA adapter training
--lora-alpha (int) -- alpha value for LoRA adapter training
--lora-trainable-modules (str) -- LoRA adapters' trainable modules, separated by a comma; to make all linear modules trainable, use all-linear
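For reference, in the standard LoRA formulation (Hu et al., 2021), which this PR does not restate, the adapter parameters enter a linear layer as

$$h = W_0 x + \frac{\alpha}{r} B A x, \qquad B \in \mathbb{R}^{d \times r},\ A \in \mathbb{R}^{r \times k},$$

so --lora-r sets the rank r, --lora-alpha sets the scaling numerator α (the adapter update is scaled by α/r), and --lora-dropout is the dropout rate typically applied to the input of the adapter path.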