CoinCheung / gdGPT

Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP.
Apache License 2.0

can it make Lora sft? #24

Open · ReverseSystem001 opened this issue 9 months ago

ReverseSystem001 commented 9 months ago

Most people are limited by their GPU hardware, so for them LoRA is the only practical way to fine-tune. Can this repo do LoRA SFT?

CoinCheung commented 9 months ago

Hi, thanks for paying attention to this!!

This repo is currently designed for full-parameter finetuning, whereas LoRA freezes most of the parameters. Since the two approaches conflict, the repo does not support LoRA at the moment.
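For context, this is roughly what LoRA SFT looks like outside this repo, as a minimal sketch with the Hugging Face peft library; the model id and the target module names are only illustrative and depend on the model you use. It shows the point above: the base weights are frozen and only small low-rank adapters are trained.

```python
# Minimal LoRA sketch with Hugging Face peft (not part of this repo).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative model id; any causal LM from the hub works the same way.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=8,                                   # adapter rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # which projections to adapt; model-dependent
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)  # wraps the model and freezes the base weights
model.print_trainable_parameters()          # only a tiny fraction remains trainable
```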

This repo is based on the pipeline-parallel method, which lets you train your model with DP + PP (Megatron-LM uses DP + PP + TP, the so-called 3D parallelism). This is faster and needs less memory than ZeRO-based methods when you do not have very many GPUs (say, fewer than 100). You can train a 7B or 13B model on a server with 8 x 24 GB GPUs, which I believe many companies can afford.
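As a rough illustration of what DP + PP means with DeepSpeed's pipeline engine, here is a minimal toy sketch (not this repo's actual training script; the layer sizes, config values, and dummy data are placeholders):

```python
# Toy DP + PP sketch with DeepSpeed's pipeline engine (placeholders, not this repo's code).
# Launch with something like:  deepspeed --num_gpus=8 this_file.py
import deepspeed
import torch
import torch.nn as nn
from deepspeed.pipe import PipelineModule, LayerSpec
from deepspeed.utils import RepeatingLoader

deepspeed.init_distributed()

# Toy stand-ins for the embedding / transformer blocks / LM head of a real model.
layers = [LayerSpec(nn.Linear, 512, 512) for _ in range(16)]

# With 8 GPUs and num_stages=4, DeepSpeed derives a data-parallel degree of
# world_size / num_stages = 2, i.e. DP + PP without any tensor parallelism.
model = PipelineModule(layers=layers, num_stages=4, loss_fn=nn.MSELoss())

ds_config = {
    "train_batch_size": 32,
    "train_micro_batch_size_per_gpu": 4,   # gradient accumulation is derived from these
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

engine, _, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

# Dummy (input, label) pairs; a real run would stream tokenized text instead.
dataset = [(torch.randn(512), torch.randn(512)) for _ in range(1024)]
loader = RepeatingLoader(
    torch.utils.data.DataLoader(dataset, batch_size=engine.train_micro_batch_size_per_gpu())
)
data_iter = iter(loader)

for step in range(10):
    loss = engine.train_batch(data_iter=data_iter)  # runs one full pipeline schedule per call
```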

ReverseSystem001 commented 9 months ago

great job
