andreyvelich opened this issue 2 weeks ago
/assign @saileshd1402
We are experimenting with some PyTorch-native and Transformers APIs to design this Trainer.
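To give a sense of the PyTorch-native direction, here is a rough, untested sketch of a plain fine-tuning loop. The model name, the in-memory dataset, and the hyperparameters are placeholders for illustration only, not part of any proposed design:

```python
# Minimal PyTorch-native fine-tuning loop sketch (placeholders, not a design).
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tiny in-memory dataset standing in for a real fine-tuning corpus.
raw_dataset = [{"text": "Kubeflow Training V2 fine-tuning example."}] * 8

def collate(batch):
    enc = tokenizer([item["text"] for item in batch],
                    padding=True, truncation=True, return_tensors="pt")
    enc["labels"] = enc["input_ids"].clone()  # causal LM: predict the input shifted by one
    return enc

train_loader = DataLoader(raw_dataset, batch_size=4, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):
    for batch in train_loader:
        outputs = model(**batch)   # loss is computed internally from labels
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```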
@andreyvelich: GitHub didn't allow me to assign the following users: saileshd1402.
Note that only kubeflow members with read permissions, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide
/assign
As part of the Kubeflow Training V2 work, we should design and implement a custom Trainer to fine-tune the LLMs that we plan to support via TrainingRuntimes in Kubeflow upstream.
We should discuss whether to use native PyTorch APIs or HuggingFace Transformers in the LLM Trainer implementation.
The Trainer should allow users to configure LoRA, QLoRA, FSDP, and other important fine-tuning options.
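To make that configuration surface concrete, here is a hedged sketch of where these knobs typically live when building on Transformers: LoRA/QLoRA through PEFT and bitsandbytes, FSDP through `TrainingArguments`. The model name, dataset, and hyperparameter values are illustrative assumptions, not a proposal:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "meta-llama/Llama-2-7b-hf"  # illustrative only (gated model)

# QLoRA: load the base model in 4-bit so only the LoRA adapters train in higher precision.
# Note: combining 4-bit quantization with FSDP needs recent bitsandbytes/accelerate support.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config)

# LoRA adapters; target_modules depends on the model architecture.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Placeholder dataset; real runs would plug in the user-provided dataset.
dataset = load_dataset("tatsu-lab/alpaca", split="train[:1%]")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names,
)

# FSDP is enabled through TrainingArguments; the wrap class must match the model.
args = TrainingArguments(
    output_dir="/tmp/llm-trainer",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    bf16=True,
    fsdp="full_shard auto_wrap",
    fsdp_config={"transformer_layer_cls_to_wrap": ["LlamaDecoderLayer"]},
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```

A PyTorch-native implementation would instead wrap the model with `torch.distributed.fsdp.FullyShardedDataParallel` and manage the training loop directly, which is part of the trade-off to discuss.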
Useful resources:
Part of: https://github.com/kubeflow/training-operator/issues/2170
cc @saileshd1402 @deepanker13 @kubeflow/wg-training-leads
Love this feature?
Give it a 👍. We prioritize the features with the most 👍.