CoinCheung / gdGPT

Train LLMs (bloom, llama, baichuan2-7b, chatglm3-6b) with deepspeed pipeline mode. Faster than zero/zero++/fsdp.
Apache License 2.0

Questions about TiedLayerSpec #31

Open josephwong14wkh opened 4 months ago

josephwong14wkh commented 4 months ago

Do you know how to use TiedLayerSpec? I want to finetune Whisper large-v2 on multiple GPUs (single node). The embedding layer is used both before the transformer decoder and again after the last decoder layer. According to the documentation, such a shared embedding layer should be wrapped in TiedLayerSpec, but I don't understand how TiedLayerSpec actually works. After wrapping the embedding layer in TiedLayerSpec, how does DeepSpeed reuse it at the end of the transformer decoder, and what do I have to implement to make that happen? There is very little documentation or explanation of TiedLayerSpec, so I hope someone can help me. Thank you! I have put a rough sketch of my current understanding below; please correct me if it is wrong.
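
For reference, this is roughly how I think the layer list is supposed to be wired, based on DeepSpeed's pipeline-parallel examples. `EmbeddingPipe`, `MyDecoderLayer`, and `logits_forward` are placeholder names I made up (not from this repo or from Whisper), and the sizes are arbitrary; the point is just that both `TiedLayerSpec` entries share the same key `'embed'` so DeepSpeed keeps their weights synchronized across pipeline stages:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from deepspeed.pipe import PipelineModule, LayerSpec, TiedLayerSpec


class EmbeddingPipe(nn.Embedding):
    """Token embedding; its weight is also reused as the output projection."""
    pass


class MyDecoderLayer(nn.Module):
    """Placeholder for a real transformer decoder block."""
    def __init__(self, hidden_size):
        super().__init__()
        self.proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, hidden_states):
        return torch.relu(self.proj(hidden_states))


def logits_forward(module, hidden_states):
    # Reuse the tied embedding weight as the output projection (logits).
    return F.linear(hidden_states, module.weight)


def build_layers(vocab_size, hidden_size, num_layers):
    layers = [
        # First occurrence of the tied module: ordinary token embedding.
        TiedLayerSpec('embed', EmbeddingPipe, vocab_size, hidden_size,
                      tied_weight_attr='weight'),
    ]
    layers += [LayerSpec(MyDecoderLayer, hidden_size) for _ in range(num_layers)]
    layers.append(
        # Second occurrence with the same key 'embed': DeepSpeed places a copy
        # on this stage, all-reduces the tied gradients, and uses forward_fn
        # to override what the module computes here.
        TiedLayerSpec('embed', EmbeddingPipe, vocab_size, hidden_size,
                      forward_fn=logits_forward,
                      tied_weight_attr='weight'),
    )
    return layers


# Requires deepspeed.init_distributed() and a distributed launcher to run.
model = PipelineModule(layers=build_layers(32000, 1024, 12),
                       loss_fn=nn.CrossEntropyLoss(),
                       num_stages=2)
```

Is my understanding correct that the `forward_fn` on the second `TiedLayerSpec` is where I am supposed to implement the reuse of the embedding at the end of the decoder, and that DeepSpeed handles the weight synchronization between stages automatically as long as the keys match?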